
Rust is now faster than C

Name: Anonymous 2017-02-21 18:14

http://benchmarksgame.alioth.debian.org/u64q/performance.php?test=knucleotide

How can this be? Since C is as fast as assembly, does this mean Rust does some microcode optimization under the hood?

Name: Anonymous 2017-02-22 5:35

>>11
it's because before the NULL check is even reached, the program has ALREADY entered an undefined state.
The kernel has control over the memory. NULL is just a pointer which, on every architecture Linux runs on, can be mapped to valid memory. There is nothing wrong with accessing page 0 or triggering a page fault because the kernel has control over the fault handler.

>>12
C makes undefined behavior trivial to invoke by accident
Bad compilers are the problem. C's semantics are the problem. C was always an unsafe language, but standardization made it worse. If you do arithmetic on a pointer, the C compiler can break your code. Compilers can magically know where a pointer comes from and use that to break code: not a real optimization like constant-folding a pointer calculation, but actually making code that works no longer work.

People seem to forget that C wasn't standardized until 1989. None of that ``optimization'' and ``modern'' interpretation of undefined behavior was part of the C language. Most C compilers treated pointers like assembly did. Programmers used register to put something in a real register to use with inline assembly. That also means there are a ton of architectures C can't run on. C won't run on decimal computers, descriptor architectures, etc., and that's no problem because C wouldn't be a systems language on that hardware anyway. Why would you want to use a systems language on hardware where it couldn't be a systems language?

People with money were lobbying the C standards committee. The Lisp machine people wanted to be able to run C even though it couldn't be a systems language. They were adding undefined behavior to make C possible ``in letter''. The idiotic x86 architecture doesn't implement bit shifts correctly, but Intel and x86 compiler vendors have money and power. This ``optimization'' bullshit is a post hoc rationalization for powerful vendors wanting to say ``we can run C too'' and badly implemented CPU instructions.

It's a fine language for 1970s systems programming, but in the present day we have different concerns.
The designers and users of PL/I, Ada, Multics, and mainframes would very strongly disagree with you. They had those same concerns back then.
