Are they really better than handmade parsers? The code they produce is fucking ugly, cluttered with cryptic variable names, and you need to read entire books to understand how they really work. Yet everyone claims writing parsers by hand is stupid because we've had parser generators for years and they will always produce much better results.
How true is this statement? Is it akin to the ``you don't need to write ASM because compilers are much better than you'' bullshit? I, for one, have never been in a situation where I need to write my own assembly, but that doesn't make GCC any less of a piece of shit.
Parser generators
Are they really better than handmade parsers?
I think you mean:
Generated parsers
Are they really better than handmade parsers?
Name:
Anonymous 2014-09-19 17:53
Generated parser generators
Are they really better than handmade parser generators?
Name:
Anonymous 2014-09-20 10:28
Yes, it abstracts the implementation from the specification. If you use a parser generator, another person can read your grammar and feed it to a different parser generator. It will be far more time-consuming for him to read your handmade parser and then write a different handmade parser (or turn it back into a grammar for a generator).
Parser generators usually produce bottom-up parsers, while the typical handwritten parser is top-down. Bottom-up should be more efficient at deciding what it has got, and it also has potential for multi-threading.
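To make the specification-vs-implementation point concrete, here is a minimal sketch using ply, the Python lex/yacc clone that comes up later in the thread. The toy grammar and token names are mine, not from any post; the point is that the rules live in declarative docstrings and ply builds a bottom-up LALR(1) parser from them, so the same spec could be lifted out and handed to a different tool.

# Minimal ply sketch: the grammar is declarative, the parser built from it is LALR (bottom-up).
import ply.lex as lex
import ply.yacc as yacc

tokens = ('NUMBER', 'PLUS', 'TIMES')

t_PLUS = r'\+'
t_TIMES = r'\*'
t_ignore = ' '

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    t.lexer.skip(1)

def p_expr_plus(p):
    'expr : expr PLUS term'
    p[0] = p[1] + p[3]

def p_expr_term(p):
    'expr : term'
    p[0] = p[1]

def p_term_times(p):
    'term : term TIMES NUMBER'
    p[0] = p[1] * p[3]

def p_term_number(p):
    'term : NUMBER'
    p[0] = p[1]

def p_error(p):
    print('syntax error')

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse('2 + 3 * 4', lexer=lexer))   # prints 14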
Name:
Anonymous 2014-09-25 9:06
For something as simple as writing a Scheme interpreter, would you rather use Flex/Bison or make your own parser?
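For the Scheme case in particular, the reader is small enough that hand-rolling it is hard to screw up. Here is a rough sketch in Python; the names and the handled cases are my own, a toy illustration rather than anyone's actual code.

# Minimal hand-rolled S-expression reader, roughly what a toy Scheme interpreter needs.
# Handles lists, integers and symbols only; no strings, quoting or comments.
def tokenize(src):
    # pad parens with spaces so str.split() does the lexing
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def read(tokens):
    if not tokens:
        raise SyntaxError('unexpected end of input')
    tok = tokens.pop(0)
    if tok == '(':
        form = []
        while tokens and tokens[0] != ')':
            form.append(read(tokens))
        if not tokens:
            raise SyntaxError('missing )')
        tokens.pop(0)            # drop the ')'
        return form
    if tok == ')':
        raise SyntaxError('unexpected )')
    try:
        return int(tok)
    except ValueError:
        return tok               # everything else is a symbol

print(read(tokenize('(define (square x) (* x x))')))
# -> ['define', ['square', 'x'], ['*', 'x', 'x']]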
Having written grammar specs in lex/yacc (and lots and lots of other parser generators, from antlr to attoparsec) and hand-rolled state machines, I can attest that you must be absolutely sure you actually need a hand-rolled machine.
I have stepped into this pile of shit more than once. Yes, handmade machines usually perform better and give you more control over error reporting and over passing grammar attributes around. BUT they are a device from hell when you suddenly need to change something in your grammar a month later. Then you are in a world of shit. With parser generators it's easy.
>>30 Two words: recursive descent. Simple and more than fast enough. There's a reason GCC and Clang both use RD (GCC used to use lex/yacc, and switching to the RD parser actually made it a little faster: http://gcc.gnu.org/wiki/New_C_Parser ). otcc/tcc use RD too.
Fun fact: I heard the bash bug wouldn't have happened if they had used RD, because then they would have been far less inclined to run the whole bloody command parser/evaluator on a simple function definition instead of just the piece of the parser that handles funcdefs - you can't arbitrarily pull out a piece of a lex/yacc parser and reuse it.
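For anyone who hasn't written one: a recursive-descent parser is just one ordinary function per grammar rule, each calling the others. A throwaway sketch follows (grammar and names invented for the example, not GCC's or tcc's code); since every rule is a plain function, any single piece can be called on its own, which is exactly the reuse a lex/yacc blob doesn't give you.

# Tiny recursive-descent parser/evaluator for
#   expr   := term (('+'|'-') term)*
#   term   := factor (('*'|'/') factor)*
#   factor := NUMBER | '(' expr ')'
import re

class Parser:
    def __init__(self, text):
        self.toks = re.findall(r'\d+|[-+*/()]', text)
        self.pos = 0

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def eat(self, tok=None):
        cur = self.peek()
        if cur is None or (tok is not None and cur != tok):
            raise SyntaxError('expected %r, got %r' % (tok, cur))
        self.pos += 1
        return cur

    def expr(self):
        val = self.term()
        while self.peek() in ('+', '-'):
            if self.eat() == '+':
                val += self.term()
            else:
                val -= self.term()
        return val

    def term(self):
        val = self.factor()
        while self.peek() in ('*', '/'):
            if self.eat() == '*':
                val *= self.factor()
            else:
                val //= self.factor()
        return val

    def factor(self):
        if self.peek() == '(':
            self.eat('(')
            val = self.expr()
            self.eat(')')
            return val
        return int(self.eat())

print(Parser('2 + 3 * (4 - 1)').expr())   # prints 11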
Name:
Anonymous 2014-09-30 14:57
>>31 Gramps, it's 2014! Forget your "perl" already.
Name:
Anonymous 2014-09-30 15:05
>>34 Hey, it'll be finished before the end of the century! (And hopefully before I die.)
Name:
Anonymous 2014-09-30 17:24
>>34 But >>31-san is right, Perl 6 grammars are good.
Fun fact: Rakudo bootstraps in essentially one pass by updating the language core incrementally at run time to provide the language features needed to complete the compilation of the compiler and standard library, including the grammar support needed to parse much of the code that the initial grammar can't parse.
Name:
Anonymous 2014-09-30 18:03
Too bad Perl is SLOW AS BALLS
Name:
Anonymous 2014-09-30 19:08
>>37 It's not Ruby slow, it's not even Python slow.
Name:
Anonymous 2014-09-30 20:01
everything Higher-Order Perl covers can be done better in lisp.
Name:
Anonymous 2014-09-30 20:03
>>39 That's not the issue. When Lisp gets Perl 6 grammars we can talk.
Name:
Anonymous 2014-09-30 20:05
>>40 lol u think common lisp isn't the best language looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooool
Name:
Anonymous 2014-09-30 20:22
>>40 Theoretically, you could write a lisp macro that compiles perl 6 to lisp.
Thanks to the recursive descent guy for pushing me in the right direction. I am in the process of rewriting shitty ply code in funcparserlib, having fun doing so, and making the code more compact along the way (the parsing code is now 30% of what it was LOC-wise).
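The combinator style that funcparserlib (and attoparsec, mentioned above) uses boils down to a handful of small functions. What follows is NOT funcparserlib's actual API, just a toy sketch of the idea with invented names: a parser is a function from (tokens, position) to either (value, new position) or None, and bigger parsers are composed from smaller ones.

# Toy parser combinators (not funcparserlib's API, just the general idea).
# A parser is a function: (toks, pos) -> (value, new_pos), or None on failure.

def tok(expected):
    def p(toks, pos):
        if pos < len(toks) and toks[pos] == expected:
            return toks[pos], pos + 1
        return None
    return p

def seq(*parsers):                  # all of them, in order
    def p(toks, pos):
        vals = []
        for q in parsers:
            r = q(toks, pos)
            if r is None:
                return None
            v, pos = r
            vals.append(v)
        return vals, pos
    return p

def alt(*parsers):                  # first one that matches
    def p(toks, pos):
        for q in parsers:
            r = q(toks, pos)
            if r is not None:
                return r
        return None
    return p

def many(parser):                   # zero or more
    def p(toks, pos):
        vals = []
        while True:
            r = parser(toks, pos)
            if r is None:
                return vals, pos
            v, pos = r
            vals.append(v)
    return p

# greeting := 'hello' ('world' | 'prog')*
greeting = seq(tok('hello'), many(alt(tok('world'), tok('prog'))))
print(greeting(['hello', 'prog', 'world'], 0))
# -> (['hello', ['prog', 'world']], 3)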
Name:
Anonymous 2014-10-13 22:17
You are told to write a parser for a small subset of the following languages for a scripting engine. Do you make your own parser, or do you use a generated parser?
1. R5RS
2. C
3. Symta
4. Haskell
Name:
Anonymous 2014-10-14 3:56
>>51
1. Own parser.
2. Generated parser.
3. It looks top-downable. Own parser.
4. Generated parser.
The dual of mental limits is anal colimits, also known as "anal cock-limits".
Name:
Alexander Dubček 2014-10-19 6:04
Doubles generators
Are they really any better than handmade doubles? The numbers they produce are fucking ugly, they're cluttered with cryptic non-repeating digits and you need to read entire posts to understand how they really work. Yet everyone claims getting doubles is stupid because we've had doubles generators for years and they will always produce much better results.
How true is this statement? Is it akin to the ``you don't need to check my doubles because imageboards are much better than you'' bullshit? I, for one, have never been in a situation where I need to check my own doubles, but that doesn't make /prog/ any less of a piece of shit.
>>68 Very fast in fact, if you consider that unrolled instructions have happily run in parallel since the days of the Pentium. This is basic knowledge for a modern compiler designer. You can get up to 8x (12x on desktop AMD chips, 16x on Xeon) the execution speed, relative to naïve sequential computation, if you are very mindful of which pipelines are occupied at which moment.