Of course that is a prototype, but it shows how the initial parentheses-ridden mess (the root/boot/*.hit files) bootstraps into a readable language (the root/lib/*.hit files).
... [macroexpand &macroexpand] [macroexpands? &macroexpands?] @Env) ) No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No No
So far, so good. I was expecting more exclamation marks, though.
>>2 Wow, you weren't joking. That's in /root/boot/stage0.hit:224
Name:
Anonymous 2014-03-14 16:56
>>2 >>4 Because stage0 implements the macro system, the `let` macro cannot be used there: it requires the macro system itself. stage0 does contain the definition of `let`, though (it is immediately used in stage1):
(add_macro let
  (_fn (Bs @Xs)
    [[_fn (Bs map (_fn (B) (_if ((tag_of B) is text) B (B head)))) @Xs]
     [`@` [list @(Bs map (_fn (B) (_if ((tag_of B) is text) Void (B 1))))]]]))
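If you don't read Symta yet: all that definition does is rewrite `let` into an immediately applied `_fn`, splitting each binding into its name and its initializer (bare names get Void). A rough Common Lisp analogue of the same trick (the sketch and names are mine, not from Symta's sources):

;; Toy LET built directly on LAMBDA, same trick as the macro above:
;; (my-let ((a 1) (b 2)) body) expands to ((lambda (a b) body) 1 2).
;; A bare symbol binds its variable to NIL, like the Void case above.
(defmacro my-let (bindings &body body)
  (let ((names (mapcar (lambda (b) (if (symbolp b) b (first b))) bindings))
        (inits (mapcar (lambda (b) (if (symbolp b) nil (second b))) bindings)))
    `((lambda ,names ,@body) ,@inits)))

;; (macroexpand-1 '(my-let ((a 1) (b 2)) (+ a b)))
;; => ((LAMBDA (A B) (+ A B)) 1 2)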
It appears that efficiently compiling closures requires a lot of code analysis.
For now I wrote an SSA translator, which looks very much like a CPS translator, but is a little more convoluted, because SSA's continuation (the value the function pops to return to the previous context) is implicit and passed on the stack. Still, basic operators like "load" and "funcall" require explicitly specifying where you want to store the value. Here is how I expand lambdas into SSA: (test-ssa '(|_fn| (a b) (|_fn| (x) (+ (* a x) b))))
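To show the "explicit destination" point concretely, here is a throwaway Common Lisp sketch of the flattening step (mine, not Symta's actual test-ssa; it handles only atoms and applications, no lambdas or closure records):

;; Every intermediate value gets a fresh temporary, and every
;; instruction explicitly names the destination it stores into.
;; The "continuation" is implicit: it is whichever instruction
;; consumes that destination next.
(defvar *temp-counter* 0)

(defun fresh-temp ()
  "Generate a new SSA temporary: T1, T2, ..."
  (intern (format nil "T~D" (incf *temp-counter*))))

(defun to-ssa (expr code)
  "Flatten EXPR, appending instructions to CODE.
Returns (values destination instructions)."
  (cond ((atom expr)
         ;; load a constant or variable into a fresh destination
         (let ((dst (fresh-temp)))
           (values dst (append code `((load ,dst ,expr))))))
        (t
         ;; application: flatten operator and arguments first, then
         ;; emit a funcall that names where its result should go
         (let ((dsts '()))
           (dolist (sub expr)
             (multiple-value-bind (d c) (to-ssa sub code)
               (push d dsts)
               (setf code c)))
           (let ((dst (fresh-temp)))
             (values dst
                     (append code `((funcall ,dst ,@(nreverse dsts))))))))))

;; (to-ssa '(+ (* a x) b) '()) =>
;;   T7, ((LOAD T1 +) (LOAD T2 *) (LOAD T3 A) (LOAD T4 X)
;;        (FUNCALL T5 T2 T3 T4) (LOAD T6 B) (FUNCALL T7 T1 T5 T6))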
Your piece of shit C code leaks memory. Also, it allocates memory with malloc for something as simple as multiplication and addition. I would guess even a normal Lisp interpreter would be faster than your shit.
I once considered renaming Symta to "The Hitler Programming Language" (there is already The Stalin Scheme Compiler) or "The Obama Programming Language" (Iraq and Afghanistan were a piece of cake), but since Putin's invasion of Crimea, I will rename Symta to "The Putin Programming Language".
>>21 Still a weaker statement of analysis complexity than >>14-san's.
Name:
Anonymous 2014-04-03 18:09
So basically a barebones Scheme-to-C or Scheme-to-ASM compiler takes about 200 lines of Common Lisp code. The same should be true for Haskell (the typechecker would be a set of macros, not a part of the compiler). The most complex part should be the runtime, because support for bignums requires a lot of assembly hacking.
The cool point about the compiler is that, while it is part of the runtime, you can use it to compile itself, given that you take most of the macros from the host Lisp system. You would need at least the DEFUN, IF, LET and COND macros to write a Scheme compiler.
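Just to give the flavor of how little code the core translation needs (a throwaway sketch of mine, nowhere near a full Scheme: integers, variables, IF and binary primitives only):

;; Toy Scheme-expression-to-C translator. A real ~200-line version
;; would add LAMBDA (closures as C structs), DEFINE, and a driver
;; that emits a main() around the compiled expressions.
(defun compile-expr (expr)
  "Return a C expression string for the Scheme expression EXPR."
  (cond ((integerp expr) (format nil "~D" expr))
        ((symbolp expr) (string-downcase (symbol-name expr)))
        ((eq (first expr) 'if)
         (destructuring-bind (c then else) (rest expr)
           (format nil "(~A ? ~A : ~A)"
                   (compile-expr c) (compile-expr then) (compile-expr else))))
        (t ; binary primitive application, e.g. (+ x 1)
         (destructuring-bind (op a b) expr
           (format nil "(~A ~A ~A)" (compile-expr a) op (compile-expr b))))))

;; (compile-expr '(if (< n 2) 1 (* n 2))) => "((n < 2) ? 1 : (n * 2))"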
Name:
Anonymous 2014-04-03 18:36
>>24 Do you know of any computation system that cannot be expressed in terms of Lambda Calculus, and therefore cannot be efficiently compiled using the techniques applicable to Lambda Calculus, or to the Turing Machine, to which Lambda Calculus corresponds?
>>27 That's not the point I was trying to make. ``Analyzing code is hard'' is more stringent than ``Analyzing lambda calculus is hard''. The latter statement admits the possibility of a computation system which is easy to analyze, but which, when transformed into lambda calculus (by any and every method of transformation), is hard to analyze. The former does not.
I'm saying that if you have an efficient compiler for lambda calculus, then efficiently compiling for example Java code is a matter of translating Java to lambda calculus and running the present compiler.
Name:
Anonymous 2014-04-04 19:06
>>31 Still, doesn't that throw the work back onto the ``translating Java to lambda calculus'' phase?
Name:
Anonymous 2014-04-05 6:30
>>32 Nope. Java's compiler already does this transformation, producing code linked with an environment, and continuations always show up as special cases: although Java disallows TCO, it always has the program counter and its environment on the stack.
Name:
Anonymous 2014-04-05 6:37
>>33 So you're saying that, internally, the Java compiler explicitly translates input source code to lambda calculus representation, then translates that lambda calculus to Java bytecode? Interesting, I didn't know that.
Name:
Anonymous 2014-04-05 8:57
>>34 Nope. The Java compiler translates to a representation closely matching continuation-passing style.
BTW, what does ((λ (f) (f f)) (call/cc call/cc)) do?
Name:
Anonymous 2014-04-05 10:51
this fucking slav nigger is crazier than tdavis. we should put him down.
Name:
Anonymous 2014-04-05 14:39
>>35 I still don't think I'm following you. Which one is it? If the Java compiler performs its compilation by ``translating Java to lambda calculus and running the present compiler'', then why say ``Nope''? If the Java compiler doesn't perform its compilation by ``translating Java to lambda calculus and running the present compiler'', why say ``Java's compiler already does this transformation''?
Your example is also not illuminating. What does a modified form of the yin yang puzzle have to do with compiling Java?
Name:
Anonymous 2014-04-05 15:27
>>37 "Nope" means you don't understand what Java compiler translates Java expressions into a form called SSA, which could be translated into continuation-passing-style (a simplified form of Lambda Calculus).
And wtf is "yin yang puzzle"? And where is the my "not illuminating" "example"?
>>38 It seems clear from your ``Nope'' that you aren't claiming that lambda calculus and SSA representation are the same thing, but that it is trivial to translate between them. I don't care enough to verify that, so that sounds reasonable. For your reference, what confused me was that, when you said ``Java's compiler already does this transformation'', you weren't referring to the full transformation of Java to lambda calculus that I was asking about from your ``efficiently compiling for example Java code is a matter of translating Java to lambda calculus and running the present compiler'' statement.
As to ``wtf is "yin yang puzzle"?'', I'll refer you to these search results for ``yin yang callcc'':
> It seems clear from your ``Nope'' that you aren't claiming that lambda calculus and SSA representation are the same thing, but that it is trivial to translate between them.
Exactly! It requires fewer steps to translate between CPS and SSA than, for example, to translate x86 machine code back into C++, because decompiling machine code into C++ also requires an intermediate form. Decompilers usually construct jmp/branch graphs, whose nodes correspond to lambdas; the next stage is determining their environments (which values are live for which node).
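To make the CPS/SSA correspondence concrete, here is a toy of mine (MUL-K and ADD-K are made-up CPS primitives): take the SSA sequence t1 := a*x; t2 := t1+b; ret t2 and turn each destination register into the parameter of a continuation:

;; Hypothetical CPS primitives: compute, then hand the result on.
(defun mul-k (a b k) (funcall k (* a b)))
(defun add-k (a b k) (funcall k (+ a b)))

;; (+ (* a x) b) in CPS. Each SSA destination (t1, t2) reappears as
;; a continuation's parameter; "ret" becomes a call to the outer K.
(defun example-cps (a x b k)
  (mul-k a x
         (lambda (t1)                   ; t1 := mul a x
           (add-k t1 b
                  (lambda (t2)          ; t2 := add t1 b
                    (funcall k t2)))))) ; ret t2

;; (example-cps 2 3 4 #'identity) => 10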
> For your reference, what confused me was that, when you said ``Java's compiler already does this transformation'', you weren't referring to the full transformation of Java to lambda calculus that I was asking about from your ``efficiently compiling for example Java code is a matter of translating Java to lambda calculus and running the present compiler'' statement.
Oh yes. These are the nuances of compiler implementation. Nothing stops a compiler designer from using CPS instead of the other equivalent intermediate forms. These forms differ in the same way XML differs from SEXPs.