Why the hell would anyone use Python or Ruby over C? Software should be nice to use, and it's not nice when the program is slow as fuck.
Their dynamic nature makes debugging software much harder. Basically, developing with these higher-level languages ends up taking more time than with C.
Every program should be written in C. In most cases, it would be good to also optimize tight loops with Assembly. This way programs would be fast and fun to use.
Languages such as C# and Java have no point at all. They are essentially crippled versions of C. Limited pointers and limited memory management. The virtual machine takes forever to JIT-optimize the code, thus harming the user experience. Not to mention GC, which slows everything down, providing nothing useful in return. GC is shit.
Then there are these C++ retards. Sure, in theory you can write C++ code that's as fast as C code, but is it really worth it? In practice every C++ program is slower, harder to debug, and harder to develop.
Functional languages such as Haskell are no answer to the problem. They abstract the hardware to hell and are very slow in practice.
So tell me: Why is C and Assembly not used for every program today?
Scheme and Common Lisp are the only high-level languages you should use.
Name:
Anonymous2013-09-10 5:11
because youre moms a fagot
Name:
Anonymous2013-09-10 7:17
C is a high level language.
Name:
Anonymous2013-09-10 7:46
>>1 Because your assertion that C is faster to develop in is not borne out by evidence, and because it is not always necessary for a program to be fast. C# also runs as fast as an equivalent C program in many cases (try it), so you're now relying on optimization tricks to get any performance improvement at all.
Name:
Anonymous2013-09-10 8:03
I wish someone would come up with a language like C, but that has barebones OO functionality. Not the retarded clusterfuck that is C++, though. I just want something like this:
obj struct { int x; int y; } Point;
Point a = {5, 7};
a.scale(5) // would feed structure ``a'' to function ``scale'' as an implicit automatic variable (like ``this'')
That's all, really. I don't see the need for all the other OOP shit.
>>6 You can do that using structs and function pointers. OOP is not that.
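Minimal sketch of what that looks like in plain C (names made up, nothing clever):
#include <stdio.h>

typedef struct Point Point;
struct Point {
    int x, y;
    void (*scale)(Point *self, int k);   /* the ``method'' is just a function pointer */
};

static void point_scale(Point *self, int k) { self->x *= k; self->y *= k; }

int main(void) {
    Point a = {5, 7, point_scale};
    a.scale(&a, 5);                      /* you still pass ``this'' by hand */
    printf("%d %d\n", a.x, a.y);         /* prints: 25 35 */
    return 0;
}
All the hypothetical ``obj struct'' syntax would really buy you is not having to write &a yourself.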
Name:
Anonymous2013-09-10 11:40
>>6 Google (I think) came up with a language called rust, which is basically what you are asking for. It looked like a real piece of shit to me because of some minor but hugely bug-inducing syntax decisions, but you might come to a different conclusion.
>>15 Right, Mozilla. One of them, ya know. I got very excited on reading the overview, and then nearly puked reading the tutorial. Just weird shit all over.
>>17 Rust is the only one of the modern ``We're too cool for C'' languages that has a chance at not being utter shit, honestly. Optional GC means it might actually be usable for systems programming, and sure, the syntax is shit, but that never stopped your favorite. What it will need is a good compiler, a good library strategy (as in, I can deploy libs written in rust, plug in executables written in rust, and not even think about installing a rust runtime/compiler/dancing cursor set on the target machine).
They'll also need a good spec so that a competing implementation can spring up. But hell, all you need for that is a bunch of drooling retards and a ping-pong table. It worked for Ruby, after all.
>>18 Why are you forgetting Go? Does it not have a chance at being a ``better'' C?
Name:
Anonymous2013-09-10 17:46
>>19 But it will keep having GC, no matter how well it performs. It will never be able to give you actual guarantees of performance that way. It may be useful, but it's just not in the same category as Rust, C, C++ or Ada because of that.
holy fucking shit the performance fetish is pissing me off. for most of the programs people write, a high level programming language is the easier way, usually leads to more secure programs (not always because people can fuck everything up) and the performance difference is not noticeable outside of benchmarks. for everyday programming, Python is convenient as hell. for math shit, functional languages are elegant and intuitive. C is good when performance is critical: when you have to manage huge amounts of data or when you write drivers, OS components and shit that holds everything together. oh, and for AAA vidya, but if you make an ASCII roguelike then you won't notice the performance difference.
now, the problem of slow software despite fast computers remains. but the reason is not that we're not using an old systems programming language that was cobbled together from many inconsistently named extensions to make up for its glaringly obvious flaws. the reason is that most of our programs are FUCKING BLOATED. desktop software suffers from feature creep and the desire to be the flashiest and most visually appealing to people who get hard at the sight of an iPhone, instead of doing what it is supposed to do (preferably with a choice of a simple GUI for n00bs and a good old CLI). REAL ENTERPRISE LEVEL software is worse as it's a labyrinthine mess of redundant, barely connected systems that are supposed to make things easier but end up making things worse.
basically, the problem is that most software suffers from bloat and feature creep. it would suffer from the same things if it was made in C and the reduced overhead from not being made in Ruby or Python wouldn't help it because its design (or lack thereof) has a much bigger impact on the end result. like orders of magnitude greater. classic Unix software might have been faster because of being made in C in the 1970s, now it's faster because of the Unix philosophy of not adding pointless shit everywhere (something that most Linux distros forget).
now, some platforms are bloat incarnate and can't be made good - Java-based shit, mobile shit, browser-based shit. and C++ is a mess but again, with most software you won't notice the performance difference. you'll just notice that the code looks stupid.
Name:
Cudder !cXCudderUE2016-05-15 14:37
usually leads to more secure programs
That could actually be a bad thing. Unrootable/unjailbreakable devices with DRM, only the corporate brainwashed drones would love that shit.
it would suffer from the same things if it was made in C
because its design (or lack thereof) has a much bigger impact on the end result. like orders of magnitude greater.
That's the point. C makes it a lot harder to add bloat. You can write one line in an HLL that'll take a few thousand to implement in a lower level language. That tends to make you reconsider whether you should do anything and cause you to come up with a simpler solution. It's possible to write bloated code in C or even Asm, but you have to try really, really hard to.
Name:
Anonymous2016-05-15 15:32
ease of use and implementation: you don't have to write thousands of lines to implement basics or sort through snippets, and if you aren't a fucking idiot you will still be able to write a program without exponential growth. more time to work on other programs.
It's simple cost-benefit analysis. Honestly I think the best thing would just be a smart interpreter.
That could actually be a bad thing. Unrootable/unjailbreakable devices with DRM, only the corporate brainwashed drones would love that shit.
you're answering the wrong question. if you need to look for exploits and hack a device you already own to have full access to it, the problem is with the device itself. the answer is to use devices that are open by design and avoid closed systems as much as possible
That's the point. C makes it a lot harder to add bloat. You can write one line in an HLL that'll take a few thousand to implement in a lower level language. That tends to make you reconsider whether you should do anything and cause you to come up with a simpler solution. It's possible to write bloated code in C or even Asm, but you have to try really, really hard to.
from the perspective of an individual programmer - yes. from the perspective of a corporation - no. if the higher-ups decide that adding a fuckton of bloat will guarantee more sales, the bloat will be added even if you have to change opcodes with a hex editor.
meanwhile, an intelligent individual programmer can write a reasonably efficient and non-bloated tool in an HLL without having to worry about which of the similarly named functions (strcpy, strncpy, strncpy_s, strncpy_c, memcpy or whatever else) to use so that the string is still treated like a string and does not cause buffer overflows.
BTW the whole string copying bullshit is the prime example of problems in C: it uses the fucktarded zero-terminated strings (instead of Pascal-style length-prefixed strings) because the difference of two or three bytes in memory was a big fucking deal back when the language was created in the fucking stone age. then it adds a safer version of an old function so your computer doesn't explode, and when that function still makes your computer explode, they add another one. meanwhile, I still see programmers in a major multinational corporation use fucking gets().
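for the record, the non-exploding replacement has been sitting in stdio for decades (minimal sketch, buffer size picked arbitrarily):
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[64];                         /* arbitrary size for the example */
    /* gets(buf) will happily write past the end of buf;
       fgets() stops after sizeof buf - 1 bytes and always terminates */
    if (fgets(buf, sizeof buf, stdin) != NULL) {
        buf[strcspn(buf, "\n")] = '\0';   /* strip the trailing newline, if any */
        printf("got: %s\n", buf);
    }
    return 0;
}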
everyday programming in C is like driving screws with a hammer
meanwhile, an intelligent individual programmer can write a reasonably efficient and non-bloated tool in an HLL without having to worry about which of the similarly named functions (strcpy, strncpy, strncpy_s, strncpy_c, memcpy or whatever else) to use so that the string is still treated like a string and does not cause buffer overflows.
First of all, the functions ending in _s are a scam by Microsoft. And secondly, Java, a prominent and popular 'HLL', has like 7 (wild guess, but it has things like string builders and buffers and immutable strings and what not) or so different string types. Admittedly, most HLLs probably fare better than this, but most of them are still worse than C. Even C++ will always be more complex than C in this regard (not to mention C++ streams).
Name:
Anonymous2016-05-16 3:32
>>6 You're not going to want to hear this, but the language you're looking for is Go.
type Point struct { x, y int }
func (p *Point) moveLeft(amt int) *Point { p.x -= amt; return p } // the receiver ``p'' plays the role of the implicit ``this''
That's the point. C makes it a lot harder to add bloat
Wrong. Very, very, very wrong. C doesn't do jack shit for you, so what you end up with is a bunch of terrible NIH cobbled together fuckpiles of unstable garbage to make up for the lack of features easily provided by other languages.
How many C or C++ programs out there use refcounting because the language doesn't help at all with memory management? Tons, and that shit is slower and less reliable than GC. How many C programs have to include shitty little brittle faggot-spec interpreters for configuration data and scripting? Tons, because you can't evaluate or compile the language at runtime.
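You've all seen this exact wheel reinvented a hundred times (minimal sketch, hypothetical names, not even thread-safe):
#include <stdlib.h>

/* hand-rolled reference counting, hypothetical names, not thread-safe */
typedef struct {
    int   refs;
    void *payload;
} Obj;

Obj *obj_new(void *payload) {
    Obj *o = malloc(sizeof *o);
    if (o) { o->refs = 1; o->payload = payload; }
    return o;
}
Obj *obj_retain(Obj *o) { o->refs++; return o; }
void obj_release(Obj *o) {
    if (--o->refs == 0) { free(o->payload); free(o); }
}
Every codebase grows its own slightly different version of this, and every version has its own slightly different bugs.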
Because C doesn't do anything, that forces the programmers to go all NIH, and you end up with a bunch of Cudders that shit up everything and send us back decades. You need to be shot, for the greater good.
Name:
Anonymous2016-05-16 4:11
If you want features in your code, and C makes you second guess your feature, your project has failed. And it has failed because of C.
Requirements first, and the computer should serve your project. You are first making your requests at the altar of C. C then brings out the whip and punishes you for wanting something that isn't asm-level bit fiddling. You say "Yes, master, I will keep my computing requests in the 1960s" and lick C's shoes.
Fuck that. The computer does my bidding. If I need a feature then that damned feature is going to be in there, because I need the computer to use that feature to serve me.
You have been pussy-whipped by your own computer. How sad and emasculated you are. Stop trying to sell your submission to the computer as something useful. It is backwards and it is cancer.
Name:
Anonymous2016-05-16 4:52
>>1 C was called a high-level language when it came out, and everybody questioned why anybody would use it when everything could be done in asm with greater control. They were wrong, as you are wrong.
Name:
Anonymous2016-05-16 5:22
>>35 securityfag/(wannabe-)haxanon here. I'm not talking about the security industry, I'm talking about security as a concept. if you don't have a non-executable stack and other hardening features, a badly written C program can be exploited by the same fucking code Aleph One published 20 years ago. if you have those (as they're usually on by default), it can still be attacked, just with a bit more work.
>>36 Java is a shitty language with annoyingly verbose syntax that runs on a shitty virtual machine, I said so myself. even then, the consequences of using the wrong string type in Java are minor compared to the consequences of wrong string handling in C.
the answer is to use devices that are open by design and avoid closed systems as much as possible
Not everyone can live like Stallman.
>>39 That's a problem with programmers who will try to add bloat no matter what, but there are far fewer of them using C than other HLLs (presumably because they've jumped to those HLLs instead).
>>40 Features != bloat. C can make you reconsider that ridiculously complex O(n^3) algorithm that would be obvious and trivial to write in a different HLL, so you'll think harder and might come up with a simpler O(n) one instead.
As for string handling, using strcat() and friends is usually stupid, especially if it's in a loop. You're supposed to avoid copying or moving strings whenever possible.
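E.g. instead of strcat() in a loop (which re-walks the whole destination every single pass), keep a write cursor. Minimal sketch, sizes picked arbitrarily:
#include <stdio.h>

int main(void) {
    const char *words[] = { "foo", "bar", "baz" };
    char buf[64];
    char *end = buf;                      /* write cursor instead of repeated strcat() */
    size_t left = sizeof buf;

    for (size_t i = 0; i < 3; i++) {
        int n = snprintf(end, left, "%s ", words[i]);
        if (n < 0 || (size_t)n >= left)   /* would have truncated: stop appending */
            break;
        end  += n;
        left -= (size_t)n;
    }
    printf("%s\n", buf);                  /* prints the joined words */
    return 0;
}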
>>44 do you have to live like Stallman though? a basic fucking computer allows you to install any OS you want, most of which give you root. many Android-based phones have unlocked bootloaders so you can flash software with root on them without needing to hack anything. having to gain root through exploits has one big disadvantage: malware can do that too (see: recent Android ransomware which gets code exec through browser exploitation and root privs through towelroot).
C can make you reconsider that ridiculously complex O(n^3) algorithm that would be obvious and trivial to write in a different HLL, so you'll think harder and might come up with a simpler O(n) one instead.
Are you speaking from experience or your ass? Because in my extensive experience that simply isn't true: where in high level languages an efficient algorithm might be slightly more verbose than a naive one, in C it's significantly more complicated than the already verbose naive case, plus there are several especially inefficient but at least somewhat succinct patterns like C-strings or linked lists that even experienced programmers end up using against their better judgment.
Name:
Anonymous2016-05-16 14:56
>>41 C was called a ``portable assembly'' when it came out. It wasn't the first of its kind either.
>>44 You look more ignorant and willfully so with every post you make.
That's a problem with programmers who will try to add bloat no matter what, but there are far fewer of them using C than other HLLs (presumably because they've jumped to those HLLs instead).
Define bloat. You always shuffle around your goalposts and never actually have a point.
Features != bloat. C can make you reconsider that ridiculously complex O(n^3) algorithm that would be obvious and trivial to write in a different HLL, so you'll think harder and might come up with a simpler O(n) one instead.
Algorithmic optimization is 100% orthogonal to the language used. And reducing the big-O qualities of an algorithm is far easier in higher level languages where you don't have to worry about the bit-level implementation details, so the very opposite of what you say is true: it is easier to do the algorithmic research work in an HLL than it is to do it in C.
Name:
Anonymous2016-05-16 21:04
>>42 I'm just saying that string handling in C isn't as complicated as you want to make it look. And just adding more complexity doesn't necessarily fix real (== security, stability, etc.) problems. What does fix most of C's string memory issues are the strn_() functions and things like snprintf() and %.*s. Also,
Pascal-style length-prefixed strings
doesn't fix shit. When people can mishandle/abuse string terminators, they can also mishandle/abuse length-prefixing.
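To make the snprintf()/%.*s point concrete, a minimal sketch:
#include <stdio.h>

int main(void) {
    const char *src = "hello, world";
    char dst[8];

    /* bounded copy: writes at most sizeof dst - 1 chars plus the terminator */
    snprintf(dst, sizeof dst, "%s", src);

    /* bounded print: %.*s reads at most 5 chars, no terminator needed there */
    printf("%.*s\n", 5, src);
    printf("%s\n", dst);                  /* truncated, not overflowed */
    return 0;
}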
Name:
Anonymous2016-05-16 21:37
>>52 it's not that it's complicated, the correct functions are easy to use. it's just that it's not consistent: you have a few different variants of the same function and some of them do bounds checking, some of them don't, some of them append a null byte, some of them don't, and you won't guess which does which if you're not used to handling strings in C. again, I'm talking from experience of doing vuln research and code review at a major multinational corporation: people get that shit wrong all the fucking time, and half of the time, if not for OS- or compiler-level hardening, you'd be able to exploit that shit with the most primitive shellcodes.
When people can mishandle/abuse string terminators, they can also mishandle/abuse length-prefixing.
theoretically they can, but length-prefixing allows for bounds checking at runtime with very little overhead, meaning it will be harder to fuck it up. a well-designed length-prefixed string handler won't let you write past the assigned length without explicit reallocation, and overwriting the prefix would require an overflow in memory before the string, so the attack isn't as trivial as AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA[insert shellcode here] at worst and AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA[insert lengthy ROP chain here] at best
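a well-designed handler here meaning something like this (hypothetical names, minimal sketch):
#include <string.h>
#include <stdbool.h>

/* hypothetical length-prefixed string: capacity and length travel with the
   data, and \x00 is just another byte */
typedef struct {
    size_t cap;    /* bytes allocated in buf */
    size_t len;    /* bytes currently in use */
    char   buf[];  /* C99 flexible array member */
} PStr;

/* bounds-checked append: refuses to write past cap instead of smashing memory */
bool pstr_append(PStr *s, const char *data, size_t n) {
    if (n > s->cap - s->len)
        return false;                     /* caller must reallocate explicitly */
    memcpy(s->buf + s->len, data, n);
    s->len += n;
    return true;
}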
Well, one could probably also write a memory-safe string system for C that uses zero termination and then does the things (or similar things) you described. So the problem isn't how the string is represented. Length-prefixing just isn't the silver bullet.
some of them append a null byte, some of them don't
Yeah, well, that is indeed somewhat confusing sometimes. But you don't need the null byte sometimes, either (e.g. with %.*s, strncmp(), strncpy(), etc).
But let's be honest here: I think the one big problem with C and strings is that it doesn't offer dynamic allocation. I think C works just fine with fixed-length/pre-allocated strings. When you need dynamic strings, you have to at least write a wrapper for a dynamic string struct -- but even then some of the C string functions will come in pretty handy.
>>54 Well, one could probably also write a memory-safe string system for C that uses zero termination and then does the things (or similar things) you described. So the problem isn't how the string is represented. Length-prefixing just isn't the silver bullet.
it isn't a silver bullet but writing such a handler is easier here: you don't need to iterate through a string in search of a null byte. this means that the security vs. muh performance thing becomes a false dilemma. it also means that you can use the same functions to handle different kinds of data stored contiguously in memory, as they don't start going crazy when they see \x00.
But let's be honest here: I think the one big problem with C and strings is that it doesn't offer dynamic allocation. I think C works just fine with fixed-length/pre-allocated strings. When you need dynamic strings, you have to at least write a wrapper for a dynamic string struct -- but even then some of the C string functions will come in pretty handy.
>>54 I'm really not following when you say C doesn't offer dynamic allocation with respect to strings. Any chance you can elaborate?
Name:
Anonymous2016-05-17 20:44
>>57 he probably means that there's no built-in dynamic data type for string data like in higher level languages (even C++), you need to handle reallocations yourself
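i.e. something along these lines (minimal sketch, hypothetical names, assumes the struct starts out zeroed):
#include <stdlib.h>
#include <string.h>

/* the kind of wrapper you end up writing yourself: hypothetical names,
   minimal sketch, assumes the struct starts out zeroed */
typedef struct { char *buf; size_t len, cap; } DStr;

int dstr_append(DStr *s, const char *data, size_t n) {
    if (s->len + n + 1 > s->cap) {        /* grow geometrically */
        size_t newcap = s->cap ? s->cap * 2 : 16;
        while (newcap < s->len + n + 1)
            newcap *= 2;
        char *p = realloc(s->buf, newcap);
        if (p == NULL)
            return -1;                    /* old buffer is still valid */
        s->buf = p;
        s->cap = newcap;
    }
    memcpy(s->buf + s->len, data, n);
    s->len += n;
    s->buf[s->len] = '\0';                /* keep it usable with the str* functions */
    return 0;
}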
it also means that you can use the same functions to handle different kinds of data stored contiguously in memory, as they don't start going crazy when they see \x00.
You mean like how EOF can be a valid char inside a FILE stream...? And strings can't hold any composite type other than strings? I guess you're right here -- I'm sure length-prefixing has its benefits (personally, I'd most likely choose length-prefixing over null-termination, too) but in the end it would probably not fix a whole lot of problems (and introduce some others, like manipulating the length-field instead of the terminator for an attack vector; also: what about portability?)...
But you know what? I'm still not convinced we need HLLs (at least not some of the currently popular ones like Java, like I mentioned). One could make a lang similar to C (in scope and performance) AND still have safety guarantees.
>>59 Because in this case, building them with hammers and wood costs only about 1/6th and the resulting skyscrapers still have 80% stability compared to the ones built with cement. And now convince the architect that hammers and wood are worse than cement and cranes.
Name:
Anonymous2016-05-18 1:40
The ``string system for C'' that exists is not less safe than the one in ``modern'' languages such as Java.
building them with hammers and wood costs only about 1/6th
The opposite is true. It costs at least 6x more programmer hours to build some software in C than it does in FIOC or Lisp for example. 1 programmer hour = $20-$100. If you have numerous programmers, you can buy a new i7 every 15 minutes or so.
still have 80% stability compared to the ones built with cement
More like 20% the stability. Even Linus writes buffer overflows and mem leaks, so what hope do you have?
manipulating the length-field instead of the terminator for an attack vector
that wouldn't be easy because you'd need to overflow something before the string, and if everything does prefix-based bounds-checking, you have a problem. attacks on the heap (if the allocator uses metadata) work like that, but they're based around bounds-checking not being present
what about portability
what about it? neither null-terminated strings nor length-prefixed ones are a hardware-level feature, they're both programming language constructs. C chose null termination because storing an additional byte had less memory overhead than storing an additional integer (which, again, is not something you should care about on modern machines). I don't see how 'take the integer x and iterate through x bytes next to it' would be less portable than 'iterate through those bytes until you see \x00'.
But you know what? I'm still not convinced we need HLLs (at least not some of the currently popular ones like Java, like I mentioned). One could make a lang similar to C (in scope and performance) AND still have safety guarantees.
sometimes, a high level language is just a better choice, especially if performance is not critical (and as I said before, for the vast majority of programs the performance will still be good if you don't add bloat). I don't really like Java though
If people used x86 segmentation properly or some other segmented or object-based architecture had caught on, this is how we would do strings.
Name:
Anonymous2016-05-18 19:03
>>64 Yes, it's unfortunate that VMS has mostly died out. Enjoy your loonix.
Name:
Anonymous2016-05-18 19:22
CHECKEMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMm
Name:
Anonymous2016-05-19 17:05
>>64 Segmentation gives automatic array bounds checking; unfortunately it needs assembly fiddling and OS support, so it is out of reach for the majority of apps.
Name:
Anonymous2016-05-22 17:59
C programmers need to wait O(n) time to get the length of their string, and then they turn around and lecture people about efficiency! It seems like people here who use C and assembler (I'm talking about the people in this thread, not real programmers like kernel devs and demoscene members) only do so to reimplement basic software from 20-30 years ago and act infinitely smug about it. Look at the suckless people, or Cudder and that web browser. To the C lovers: can you show us an example of something you wrote that isn't a rehash of crusty UNIX shit or a 1990s web browser?
C programmers need to wait O(n) time to get the length of their string
No, C programmers have the choice of storing the length in a separate structure in case they need it, or not storing it and hence saving the 7 bytes. It's a win-win situation, something called "choice" that you don't get in the HLLs.
Name:
Anonymous2016-05-22 22:27
>>70 Choice is the worst thing to give programmers. This is how ENTERPRISE was spawned.
Name:
Anonymous2016-05-23 0:31
The entire point of higher-level languages is not having to care about implementation. Being able to use a tuple without having to know whether it's an array, a linked list, a struct or whatever the fuck. Sometimes the actual computing done by the program is so little you won't really notice the difference, or the heavy-load part is implemented in a different language.
Name:
Anonymous2016-05-23 6:15
>>71 Isn't the whole point of ENTERPRISE to inhibit a programmer's choice based on choices made by incompetent bosses?
>>77 Goylang is a shitty high level language (nice dubs btw)
Name:
Anonymous2016-05-24 4:31
>>77 Golang lacks generics, or anything similar (like sepples templates which are actually super powerful). Golang is shit. It also has GC. GC is shit. Rust is the future.
>>79 There is a preprocessor hack to bring templates to Golang, but it sucks. The whole of Golang sucks because of their retarded fucking cat-v retro philosophy, the ``UNIX way''. They glorify the crippled nature of old school programming for no apparent reason other than hipster fanboyism. This is why they left out generics: because C doesn't have them and they're ``not simple''. Well no shit it's not simple, but we're not mental midgets here. The designers made the fatal mistake of assuming simplicity of implementation guarantees simplicity of usability. Golang intentionally makes you do things the hard and verbose way while simultaneously claiming to be simple and elegant, case in point: if err != nil { log.Fatal(err) }. This line is repeated ad nauseam in Go codebases just because the Go designers were too pretentious to implement error handling which literally every other post-2000 programming language has built in.
Go is stuck in a dead purgatory between the Python/Ruby/Perl space and systems programming (C, C++, Fortran, maybe Rust), where it is only tangentially applicable for each but totally applicable for neither.
does anyone still do systems programming in Fortran outside of legacy systems?
Name:
Anonymous2016-05-24 8:18
>>82 Lots of institutions use ancient code because it's not worth replacing. It's the same reason Facebook still uses PHP. Who's going to pay to rewrite 10^9 SLOC just because it's in a ``wrong'' language?
Name:
Anonymous2016-05-24 8:51
>>83 that's what I meant by 'legacy systems'. my question is: is anyone writing new stuff in Fortran as opposed to maintaining old Fortran shit?
Name:
Anonymous2016-05-24 12:16
>>84 Yes, HPC, because of mathematical codebases and advanced compiler tech for numeric problems.
Name:
Anonymous2016-05-25 1:05
>>84 It's actually really fast for math stuff. Intel maintains an optimized Fortran toolchain just for that.
Name:
Anonymous2016-05-25 4:03
The whole Fortran issue really points out why there are high-level languages. C is such a steaming pile of shit that compilers can't optimize it properly. The compiler can't know that something somewhere won't mutate memory, so to stay completely in-spec and avoid introducing weird bugs in edge cases, it can't apply optimization transforms (especially vectorization) as fully as it can for a language like Fortran.
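The textbook illustration, for anyone who hasn't seen it (minimal sketch): without restrict the compiler must assume every pointer can alias every other one, which is exactly the guarantee Fortran gives you by default.
#include <stddef.h>

/* Without ``restrict'' the compiler has to assume a, b and out might overlap,
   so it reloads from memory and is far more reluctant to vectorize the loop.
   C99 restrict is the usual band-aid; Fortran gets the no-aliasing guarantee
   by default. */
void axpy(size_t n, float k,
          const float *restrict a, const float *restrict b,
          float *restrict out) {
    for (size_t i = 0; i < n; i++)
        out[i] = k * a[i] + b[i];
}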
C has always been a shitty hack. It's a steaming turd that everybody worships because they're supposed to. These pro-C absolute fuckheads don't even realize that this shit is constrained to be slow, except for shitty little microbenchmarks that fall in line with C's limited purview.
Name:
Anonymous2016-05-25 5:19
>>87 Show an alternative to C that could replace it, so that it couldn't be laughed off by Linus Torvalds.
Name:
Anonymous2016-05-25 6:20
>>85 >>86 makes sense. when I think 'math' I also think 'functional' but purfuncs are pretty slow so I wouldn't use them for anything performance-critical.
Name:
Anonymous2016-05-25 9:05
The C language is a high level language you fucking imbecile... Use assembly code for everything, including WEB PAGES
Name:
Anonymous2016-05-26 0:33
>>88 Linus Torvalds has a one-track mind and laughs off anything that isn't C, so fuck him and fuck you.
>>98 They will take as much memory as they're designed to take. C kernels go to great pains not to allocate memory, or to ensure there's enough preallocated buffer space to handle expected conditions. The same would be true, as necessary, for any other language.