Rust is now faster than C

Name: Anonymous 2017-02-21 18:14

http://benchmarksgame.alioth.debian.org/u64q/performance.php?test=knucleotide

How can this be? Since C is as fast as assembly, does this mean Rust does some microcode optimization under the hood?

Name: Anonymous 2017-02-21 18:18

I refuse to open the benchmarksgame ever since they removed the charts.

Name: Anonymous 2017-02-21 18:20

Benchmarks game doesn't mean anything in the real world.

Name: Anonymous 2017-02-21 18:25

Real programs are mostly implemented using database procedures and triggers. It doesn't really matter how fast the application code itself executes. I don't see much point in Rust.

Name: Anonymous 2017-02-21 18:57

>>4
LOL.

Name: Anonymous 2017-02-21 20:33

is this the first program ever written in rust?

Name: Anonymous 2017-02-21 23:39

C is one of the worst languages to optimize, even with all that fake bullshit that ``modern'' mainstream C compilers do.

All of that pointer junk that helped make things faster on a PDP-11 slows down modern CPUs and confuses modern compilers.

C compilers remove your bounds checks and null checks that you, the programmer, explicitly wrote in the program, even if they are known to fail, and then crash on input that your checks would have prevented. The real scary thing is that people had to write code to determine that you're checking array bounds and more code to remove the check.

Any other language compiler would insert checks if you don't write them, unless it's 100% proven that they won't fail (i.e. compile-time constants). C compilers do the total opposite.

C has no portable way to check for overflow. Overflow is almost always a bug.
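
For example, a check like this (hypothetical snippet, not from any particular codebase) can legally be thrown away, because signed overflow is undefined:

#include <limits.h>

/* the programmer's own after-the-fact overflow check: "can't happen" according
   to the standard, so an optimizer (e.g. gcc or clang at -O2) may delete the
   whole branch */
int add_offset(int x)
{
    if (x + 100 < x)
        return -1;
    return x + 100;
}

/* the only portable option is to test against INT_MAX *before* adding */
int add_offset_checked(int x)
{
    if (x > INT_MAX - 100)
        return -1;
    return x + 100;
}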

And C programs still end up slower than languages that do all of these checks despite C compilers removing code you wrote. This is because higher-level languages can be optimized much better than C and its fictional pseudo-Brainfuck memory model can.

Name: Steve 2017-02-21 23:45

on the pulse side c is really sample u can write a compiler 4 it in bbcode

Name: Anonymous 2017-02-22 0:25

C compilers remove your bounds checks and null checks that you, the programmer, explicitly wrote in the program, even if they are known to fail, and then crash on input that your checks would have prevented. The real scary thing is that people had to write code to determine that you're checking array bounds and more code to remove the check.
No, observing a pointer is not the same as dereferencing it. if(ptr == NULL) puts("Hey, this pointer is NULL!"); does NOT get optimized out, because checking whether a pointer is NULL is NOT undefined behavior, so the compiler cannot assume that the pointer is never null. Likewise with bounds checking: trying to ACCESS an array beyond its bounds is undefined behavior, but doing a simple arithmetic comparison on the number before using it as an array index is not.

As for overflow, just compare to INT_MAX before doing any operation that will increase the value, and use unsigned wherever you can.

The one serious flaw with C is that the "const" qualifier really only means readonly; you can't use const objects as the size of static arrays, since the compiler can't guarantee that the object won't be modified via a pointer-to-mutable-object.
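
A two-line sketch of that last point:

const int N = 10;
static int buf[N];    /* error in C: N is only readonly, not a constant expression (fine in C++) */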

Name: Anonymous 2017-02-22 1:24

>>9
does NOT get optimized out, because checking whether a pointer is NULL is NOT undefined behavior, and so the compiler cannot assume that the pointer is never null
You're right that it doesn't ``optimize out'' anything. It removes code the programmer wrote.

Don't call that ``optimization''. It's offensive to non-C compiler writers because they care about the quality of the code. Non-C optimizers never do those things and can still be faster than C that does do those things.

http://blog.regehr.org/archives/213
The idiom here is to get a pointer to a device struct, test it for null, and then use it. But there’s a problem! In this function, the pointer is dereferenced before the null check. This leads an optimizing compiler (for example, gcc at -O2 or higher) to perform the following case analysis:

As we can now easily see, neither case necessitates a null pointer check. The check is removed, potentially creating an exploitable security vulnerability.
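
The shape of the offending code is roughly this (paraphrased with made-up names, not the exact kernel source):

struct device { int flags; };

/* dev can be NULL at runtime */
int remove_device(struct device *dev)
{
    int flags = dev->flags;   /* dereference happens before the check... */
    if (!dev)                 /* ...so the compiler infers dev is non-NULL */
        return -1;            /*    and is allowed to drop this branch */
    return flags;
}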

This is the Linux kernel, which has control over memory mapping. C is supposed to be a systems language usable for writing kernels, but it isn't.

Why does the compiler even need to do this? What's the harm in leaving a check the programmer wrote in the program if the compiler can't prove it's not necessary? It's because C is so incredibly hard to optimize.

Name: Anonymous 2017-02-22 2:26

>>10
That code dereferences a NULL pointer, which is just plain NOT ALLOWED. In most environments it results in an immediate program crash. You say that the compiler should do what the programmer says, but this is a terrible example of that, it's clearly obvious that the programmer made a horrendous mistake. This code IS an exploitable security vulnerability, but not BECAUSE the NULL check is optimized away - it's because before the NULL check is even reached, the program has ALREADY entered an undefined state. That a pointer dereference preceding a NULL check made it into kernel code indicates a flaw in coding practices, not a flaw in the compiler.

Name: Anonymous 2017-02-22 3:20

>>11
That a pointer dereference preceding a NULL check made it into kernel code indicates a flaw in coding practices, not a flaw in the compiler.

Who said the compiler was faulty? The real problem here is with the kernel coding practices (programmers writing security-critical code obviously aren't using static analyses to detect potential errors); AND with the language itself (C makes undefined behavior trivial to invoke by accident).

Programmers deifying C are an embarrassment. It's a fine language for 1970s systems programming, but in the present day we have different concerns. If you must program in C, use protection.

Name: suigin 2017-02-22 3:54

These benchmarks are a joke. It's not the language's fault. It's the programmer and the libraries in use.

Take a look at all of the optimization opportunities:

http://benchmarksgame.alioth.debian.org/u64q/program.php?test=knucleotide&lang=gcc&id=1

The element struct has 32 bits of padding, and it's not CPU cache friendly. You should use two orthogonal arrays instead of a struct.
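
Roughly the difference (hypothetical layout and sizes, I haven't checked the exact struct in the entry):

#include <stdint.h>

#define N 4096                                /* made-up element count */

/* array of structs: 8-byte key + 4-byte count + 4 bytes of padding per element */
struct element { uint64_t key; uint32_t count; };
struct element elems[N];

/* two separate arrays: no padding, and a scan over the keys touches far fewer cache lines */
uint64_t keys[N];
uint32_t counts[N];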

qsort is slow as fug. Do a fucking parallel bitonic sort with OMP or a radix sort.

It uses khash. Who the fuck uses khash? https://github.com/attractivechaos/klib/blob/master/khash.h

This uses quadratic probing, which tends to be slow on modern CPUs with deep cache hierarchies unless the hash table keys are suitably large. It also looks unoptimized as hell. You could do a better job with less code if you just hand-rolled the insert and lookup functions for a linearly addressed hash map.
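
Hand-rolled, that's something like this (sketch only: fixed power-of-two capacity, no resizing, key 0 reserved as the empty marker, u64 keys to u32 counts):

#include <stdint.h>
#include <stddef.h>

#define CAP (1u << 16)                      /* power of two so we can mask instead of mod */

static uint64_t keys[CAP];                  /* 0 means "empty slot" */
static uint32_t counts[CAP];

static uint32_t *lookup_or_insert(uint64_t key)
{
    size_t i = (size_t)(key ^ key >> 7) & (CAP - 1);
    while (keys[i] != 0 && keys[i] != key)
        i = (i + 1) & (CAP - 1);            /* linear probe: the next slot is already in cache */
    keys[i] = key;
    return &counts[i];
}

Bumping a count is then just (*lookup_or_insert(key))++;.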

fgets? If you're already using non-standard shit like khash, why not use POSIX calls like read() and pull in huge blocks from the file instead of 4K at a time. Way faster for large files.
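
i.e. something like this (sketch; error handling and records split across chunk boundaries left out, fd is assumed to be an already-open file descriptor):

#include <unistd.h>

#define CHUNK (1 << 20)                     /* 1 MiB per syscall instead of 4K lines */

static char buf[CHUNK];

void scan_file(int fd)
{
    ssize_t n;
    while ((n = read(fd, buf, CHUNK)) > 0) {
        /* process buf[0..n) in place */
    }
}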

The OMP calls just run the different searches in parallel. It doesn't actually partition the searches optimally.

I bet we could get the C version running in half the time of Rust.

Name: Anonymous 2017-02-22 5:35

>>11
it's because before the NULL check is even reached, the program has ALREADY entered an undefined state.
The kernel has control over the memory. NULL is just a pointer which, on every architecture Linux runs on, can be mapped to valid memory. There is nothing wrong with accessing page 0 or triggering a page fault because the kernel has control over the fault handler.

>>12
C makes undefined behavior trivial to invoke by accident
Bad compilers are part of the problem, and so are C's standardized semantics. C was always an unsafe language, but standardization made it worse. If you so much as add an offset to an address, the C compiler is allowed to break your code. Compilers can magically ``know'' where a pointer came from and use that to break code: not a real optimization like constant-folding a pointer calculation, but actually making code that used to work no longer work.

People seem to forget that C wasn't standardized until 1989. None of that ``optimization'' and ``modern'' interpretation of undefined behavior was part of the C language. Most C compilers treated pointers the way assembly did. Programmers used register to put something in a real register so they could use it with inline assembly. That also means there are a ton of architectures C can't run on. C won't run on decimal computers, descriptor architectures, etc., and that's no problem, because C wouldn't be a systems language on that hardware anyway. Why would you want to use a systems language on hardware where it couldn't be a systems language?

People with money were lobbying the C standards committee. The Lisp machine people wanted to be able to run C even though it couldn't be a systems language. They were adding undefined behavior to make C possible ``in letter''. The idiotic x86 architecture doesn't implement bit shifts correctly, but Intel and x86 compiler vendors have money and power. This ``optimization'' bullshit is a post hoc rationalization for powerful vendors wanting to say ``we can run C too'' and badly implemented CPU instructions.

It's a fine language for 1970s systems programming, but in the present day we have different concerns.
The designers and users of PL/I, Ada, Multics, and mainframes would very strongly disagree with you. They had those same concerns back then.

Name: Anonymous 2017-02-22 6:41

UB is generally caused by programmer error

Kek. Yeah, the programmers who left it in the standard.

Name: Anonymous 2017-02-22 7:22

>>13
I bet we could get the C version running in half the time of Rust.

It would also be possible to create a Java version that runs faster than Rust.

Name: Anonymous 2017-02-22 7:40

>>11
This code IS an exploitable security vulnerability
ackshually, null derefs aren't exploitable in the kernel anymore because low memory addresses are not mappable

Name: Anonymous 2017-02-22 7:52

Rust beats C in a single benchmark that is also dependent on third party code on the C side (``use either khash or CK_HT for C language k-nucleotide programs. [..] Please don't implement your own custom "hash table" - it will not be accepted.'')

The headline reads

Rust is now faster than C

Name: Anonymous 2017-02-22 7:57

Remember when they removed D with a vague excuse?

Name: Anonymous 2017-02-22 8:00

>>14
The Lisp machine people wanted to be able to run C even though it couldn't be a systems language
ah yes, the all-powerful Lisp machine industry which controls the world. and they want everyone to use C because undefined behaviour makes C an acceptable Lisp!

Name: Anonymous 2017-02-22 10:15

Name: Anonymous 2017-02-22 10:37

>>1
C version uses Khash http://attractivechaos.github.io/klib/#Khash%3A%20generic%20hash%20table:%5B%5BKhash%3A%20generic%20hash%20table%5D%5D

Rust uses a 2-op "xorshift hash"
use std::hash::{BuildHasherDefault, Hasher};
use ordermap::OrderMap;

/// Hash state: a single u64.
struct NaiveHasher(u64);

impl Default for NaiveHasher {
    fn default() -> Self {
        NaiveHasher(0)
    }
}

impl Hasher for NaiveHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, _: &[u8]) {
        unimplemented!() // only u64 keys are ever hashed
    }
    fn write_u64(&mut self, i: u64) {
        self.0 = i ^ i >> 7; // the entire "hash function": one shift and one xor
    }
}

type NaiveBuildHasher = BuildHasherDefault<NaiveHasher>;
type NaiveHashMap<K, V> = OrderMap<K, V, NaiveBuildHasher>;
type Map = NaiveHashMap<Code, u32>;

Name: Anonymous 2017-02-22 10:43

The fastest Rust version with a real hash is 3x slower:
http://benchmarksgame.alioth.debian.org/u64q/program.php?test=knucleotide&lang=rust&id=2
Rust #2
17.10 sec elapsed, 162,840 KB memory, 1324 gz, 48.39 CPU sec, 55% 87% 57% 87% CPU load

Name: Anonymous 2017-02-22 11:03

>>1 Was this your masterplan?
0. Gain control over benchmark conditions.
1. Rust shills gloat all over their rigged benchmark, where the code gets only a 20% speedup over C code that is forced to use a specific hash function.
2. The benchmarks game becomes politifact for language advocacy.
3. Mozilla, Apple & Google shills push their pet langs (Rust, Swift & Go) to the top.
4. "Unsafe" C is banned from projects; arguments about speed get redirected to rigged benchmarks.
5. Due to the enhanced "safety" and idiot-proof rules, software development can now be outsourced to third-world shitholes wholesale.
6. Demand for faster CPUs rises, as languages that were "as fast as C" in benchmarks become slower and more demanding in practice, outside of rigged/sterile benchmarks.
7. ...
8. Profit.

Name: Anonymous 2017-02-22 11:18

tfw PHP is faster than C
http://benchmarksgame.alioth.debian.org/u64q/performance.php?test=regexdna
       program       secs        KB     gz    cpu    cpu load
1.0    Rust #2       1.93   219,284    699   4.83   79% 51% 51% 73%
1.2    PHP #4        2.23   106,284    832   5.09   62% 78% 49% 42%
1.3    C gcc         2.43   339,000   2579   5.68   46% 70% 51% 72%
1.3    TypeScript    2.59   535,964    433   2.58    2%  2%  0% 100%
1.4    Hack #4       2.61   219,700    832   5.89   67% 51% 66% 47%
1.4    Node.js #2    2.70   530,044    445   2.70    1%  1% 100% 1%

Name: Anonymous 2017-02-22 11:21

>>25
All new software should be written in PHP since it's faster than C (proven by benchmarks) and more user-friendly.

Name: Anonymous 2017-02-22 12:05

>>24
Was this your masterplan?
CRASHING THIS BENCHMARK

Name: Cudder !cXCudderUE 2017-02-22 12:24

>>25
PHP is using a regex library written in... C

Name: Anonymous 2017-02-22 12:27

>>20
It's called Zeta-C, and it is why C has all of these weird bugs like it being ``undefined'' to compare pointers (less than, greater than) into different objects (instead of just unspecified). C is a systems language, not some kind of JavaScript. You can't write memmove in Standard C without making a temporary copy, because of CPU architectures that were invented for Lisp, where C wouldn't even be a systems language at all.

Other tagged architecture people weren't interested in running C because they thought C was an unsafe, error-prone language to begin with. Only the Lisp people had a C compiler, and it didn't follow the same rules as any other C compiler.

``Modern'' compiler maintainers are using this tagged memory as an excuse for making pointer comparisons undefined even though they are defined on segmented architectures like 16-bit x86. Pointer comparison isn't more difficult than pointer equality. They talk about segment aliasing, but you need to know the same thing for equality too.
https://stackoverflow.com/questions/4023320/how-to-implement-memmove-in-standard-c-without-an-intermediate-copy
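
The trick everyone actually uses looks like this (just a sketch), and the d < s comparison is exactly the part the standard doesn't define when dst and src point into different objects:

#include <stddef.h>

void *my_memmove(void *dst, const void *src, size_t n)
{
    char *d = dst;
    const char *s = src;
    if (d < s) {                        /* undefined for pointers into different objects */
        while (n--) *d++ = *s++;        /* copy forwards */
    } else {
        d += n;
        s += n;
        while (n--) *--d = *--s;        /* copy backwards to survive overlap */
    }
    return dst;
}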

Name: Anonymous 2017-02-22 19:57

>>22
// Define a custom hash function to use instead of khash's default hash
// function. This custom hash function uses a simpler bit shift and XOR which
// results in several percent faster performance compared to when khash's
// default hash function is used.
#define CUSTOM_HASH_FUNCTION(key) (khint32_t)((key) ^ (key)>>7)

kill yourself faggot

Name: Anonymous 2017-02-23 0:33

>>30
Leave >>7 out of this

Name: Anonymous 2017-02-23 2:25

>>1
All languages end up running as assembly, so they must all run as fast as assembly

Name: Anonymous 2017-02-23 3:13

DUBZ

Name: Anonymous 2017-02-23 15:24

https://www.reddit.com/r/programming/comments/5vddq2/rust_is_now_the_fastest_language_on_knucleotide/
It boils down to the fact that khash is trash and ordermap is really simple. Since C entries cannot use anything else, the benchmark is rigged in favor of the language that doesn't have this requirement...
https://github.com/bluss/ordermap/blob/master/src/lib.rs
https://github.com/attractivechaos/klib/blob/master/khash.h
Also, Rust in the real world:
https://medium.com/@robertgrosse/how-copying-an-int-made-my-code-11-times-faster-f76c66312e0f#.nv29iicug

Name: Anonymous 2017-02-23 21:07

>>34
Absolutely ridiculous that the benchmark can be so biased towards Rust this way. The whole thing is obviously a scam run by unapologetic Rust shills. What a disgrace.

Name: Anonymous 2017-02-23 22:17

>>35
This just goes to show that these benchmarks don't mean crap.

Name: Anonymous 2017-02-24 0:24

I'm a SJW who likes the simplicity of C and hates bloat, should I switch to Rust?

Name: Anonymous 2017-02-24 1:12

>>37
Tcl/tk is the best language for SJWs.

Name: Anonymous 2017-02-24 3:01

>>38
Why?

Name: Anonymous 2017-02-24 8:21

>>34
to be fair, the last part is less about things inherent to Rust and more about compiler stupidity. this happens in C too - I remember there was once a contest to write an innocent-looking C program that works well on one system and breaks on another. the winner was a simplistic piece of code involving nested loops - gcc did a decent job of optimizing it, so it ran well on unix-likes, but MSVC ended up being extremely slow.

Name: Anonymous 2017-02-24 10:51

>>37
It depends: if you like BDSM and fighting with the compiler, then yes.

Name: Anonymous 2017-02-24 15:30

>>37
Switch to Haskell. It's the best language for leftist cucks. Full on BDSM political correctness. You will never be able to do divisive far-right things like printing to a screen.

Name: Anonymous 2017-02-24 19:09

>>42
Why not both? Double the fun.

https://www.reddit.com/r/haskellgamedev/comments/5a58hr/is_haskell_the_best_functional_language_for/

While using Rust I may still use Haskell (recently some utilities popped up for calling Haskell code from Rust) in the way of some kind of asynchronous content generation when real-time performance isn't as much of an issue, but I would be interested in any other functional languages that are more suited to gamedev, especially for the core part of an engine.

Name: Anonymous 2017-02-26 13:47

age
