

Symta News

Name: Anonymous 2014-06-06 11:56

So I've finally solved the GC problem by abandoning continuations, which allowed me to do compaction RAII-style - on every stack pop. The only remaining problem is efficiently implementing the double-headed stack, which would be laid out like "0,2,4 ... 5,3,1" - i.e. compaction requires just one copy instead of two.
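Here is a rough sketch (mine, in plain C, with made-up names like dstack_push - not Symta's actual runtime) of how I picture that interleaved layout: even-numbered pushes fill from the bottom, odd-numbered ones from the top, so the two heads meet in the middle.

#include <stddef.h>

typedef struct {
    void  *slots[1024];   /* fixed capacity, just for the sketch */
    size_t lo;            /* next free slot at the low end */
    size_t hi;            /* next free slot at the high end */
    size_t count;         /* total pushes, used to alternate ends */
} dstack;

static void dstack_init(dstack *s) {
    s->lo = 0;
    s->hi = sizeof(s->slots) / sizeof(s->slots[0]);
    s->count = 0;
}

/* returns 0 on success, -1 when the two heads collide */
static int dstack_push(dstack *s, void *v) {
    if (s->lo >= s->hi) return -1;
    if (s->count++ % 2 == 0)
        s->slots[s->lo++] = v;    /* pushes 0,2,4,... grow up from index 0 */
    else
        s->slots[--s->hi] = v;    /* pushes 1,3,5,... grow down from the top */
    return 0;
}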

I've optimized _let compilation to the stack, which allowed the following code to execute at 0.9x the speed of C/C++

_let ((N 1024*1024*1024) (S 0))
(_label again)
(_set S (_add S N))
(_set N (_sub N 1))
(if _gt N 0 then _goto again else S)


so I have a chance of using Symta as a replacement for C/C++.
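For comparison, a plain C version of the same loop might look like the sketch below (my own transcription, not the exact code I measured against); note the sum needs a 64-bit accumulator.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t n = 1024LL * 1024 * 1024;   /* 2^30, same as (N 1024*1024*1024) */
    int64_t s = 0;
    while (n > 0) {                     /* mirrors the _label/_goto loop */
        s += n;
        n -= 1;
    }
    printf("%lld\n", (long long)s);     /* the sum of 1..2^30 */
    return 0;
}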

Moral: continuations are bad and make everything unpredictable and slow. So, please, don't use Haskell or other languages that depend on continuations.

Name: Anonymous 2014-06-06 12:02

>>1
Anyways, here is what that code compiles to without continuations...


#include "../runtime.h"

#define f31807_size 0
DECL_LABEL(f31807)
#define f31813_size 0
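/* NUL-terminated ASCII names interned by the setup entry below:
   "tag_of", "halt", "log", "list", "_apply", "_no_method",
   "read_file_as_text", and two copies of "*" (multiplication) */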
static uint8_t b31847[] = {116,97,103,95,111,102,0};
static void *s31848;
static uint8_t b31850[] = {104,97,108,116,0};
static void *s31851;
static uint8_t b31853[] = {108,111,103,0};
static void *s31854;
static uint8_t b31856[] = {108,105,115,116,0};
static void *s31857;
static uint8_t b31859[] = {95,97,112,112,108,121,0};
static void *s31860;
static uint8_t b31862[] = {95,110,111,95,109,101,116,104,111,100,0};
static void *s31863;
static uint8_t b31865[] = {114,101,97,100,95,102,105,108,101,95,97,115,95,116,101,120,116,0};
static void *s31866;
DECL_LABEL(f31813)
static uint8_t b31835[] = {42,0};
static void *s31836;
static uint8_t b31840[] = {42,0};
static void *s31841;
BEGIN_CODE
ENTRY(entry)
VAR(result31804);
MOVE(NewBase, Top);
VAR(head31805);
LOCAL_ALLOC(head31805, t31869, f31807, 0);
VAR(env31867);
LOCAL_LIST(env31867, t31870, 1);
VAR(tmp31868);
MOVE(tmp31868, Host);
STORE(env31867, 0, tmp31868);
CALL_TAGGED(result31804, head31805, env31867);
RETURN(result31804);
ENTRY(setup)
TEXT(s31836, b31835);
TEXT(s31841, b31840);
TEXT(s31848, b31847);
TEXT(s31851, b31850);
TEXT(s31854, b31853);
TEXT(s31857, b31856);
TEXT(s31860, b31859);
TEXT(s31863, b31862);
TEXT(s31866, b31865);
RETURN_NO_GC(0);
LABEL(f31807)
CHECK_NARGS(1, f31807_size, Empty);
VAR(result31808);
MOVE(NewBase, Top);
VAR(head31809);
LOAD(head31809, E, 0);
VAR(env31810);
LOCAL_LIST(env31810, t31871, 8);
VAR(tmp31811);
LOCAL_ALLOC(tmp31811, t31872, f31813, 0);
STORE(env31810, 0, tmp31811);
VAR(tmp31846);
MOVE(tmp31846, s31848);
STORE(env31810, 1, tmp31846);
VAR(tmp31849);
MOVE(tmp31849, s31851);
STORE(env31810, 2, tmp31849);
VAR(tmp31852);
MOVE(tmp31852, s31854);
STORE(env31810, 3, tmp31852);
VAR(tmp31855);
MOVE(tmp31855, s31857);
STORE(env31810, 4, tmp31855);
VAR(tmp31858);
MOVE(tmp31858, s31860);
STORE(env31810, 5, tmp31858);
VAR(tmp31861);
MOVE(tmp31861, s31863);
STORE(env31810, 6, tmp31861);
VAR(tmp31864);
MOVE(tmp31864, s31866);
STORE(env31810, 7, tmp31864);
CALL_TAGGED(result31808, head31809, env31810);
RETURN(result31808);
LABEL(f31813)
CHECK_NARGS(7, f31813_size, Empty);
VAR(result31814);
VAR(p31828);
LOCAL_LIST(p31828, t31873, 0);
VAR(env31829);
LOCAL_LIST(env31829, t31874, 2);
VAR(tmp31830);
MOVE(NewBase, Top);
VAR(head31831);
MOVE(NewBase, Top);
VAR(head31832);
LOAD_FIXNUM(head31832, 1024);
VAR(env31833);
LOCAL_LIST(env31833, t31875, 2);
VAR(tmp31834);
MOVE(tmp31834, s31836);
STORE(env31833, 0, tmp31834);
VAR(tmp31837);
LOAD_FIXNUM(tmp31837, 1024);
STORE(env31833, 1, tmp31837);
CALL_TAGGED(head31831, head31832, env31833);
VAR(env31838);
LOCAL_LIST(env31838, t31876, 2);
VAR(tmp31839);
MOVE(tmp31839, s31841);
STORE(env31838, 0, tmp31839);
VAR(tmp31842);
LOAD_FIXNUM(tmp31842, 1024);
STORE(env31838, 1, tmp31842);
CALL_TAGGED(tmp31830, head31831, env31838);
STORE(env31829, 0, tmp31830);
VAR(tmp31843);
LOAD_FIXNUM(tmp31843, 0);
STORE(env31829, 1, tmp31843);
VAR(save_p31844);
MOVE(save_p31844, P);
VAR(save_e31845);
MOVE(save_e31845, E);
MOVE(E, env31829);
MOVE(P, p31828);
VAR(dummy31816);
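/* the loop proper: (_label again) ... (if _gt N 0 then _goto again else S);
   S lives in env slot E[1], N in env slot E[0] */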
LOCAL_LABEL(again);
VAR(r31817);
VAR(a31818);
VAR(b31819);
LOAD(a31818, E, 1);
LOAD(b31819, E, 0);
r31817 = FIXNUM_ADD(a31818, b31819);
STORE(E, 1, r31817);
MOVE(dummy31816, r31817);
VAR(r31820);
VAR(a31821);
VAR(b31822);
LOAD(a31821, E, 0);
LOAD_FIXNUM(b31822, 1);
r31820 = FIXNUM_SUB(a31821, b31822);
STORE(E, 0, r31820);
MOVE(dummy31816, r31820);
VAR(cnd31825);
VAR(a31826);
VAR(b31827);
LOAD(a31826, E, 0);
LOAD_FIXNUM(b31827, 0);
cnd31825 = FIXNUM_GT(a31826, b31827);
LOCAL_BRANCH(cnd31825, then31823);
LOAD(result31814, E, 1);
LOCAL_JMP(endif31824);
LOCAL_LABEL(then31823);
LOCAL_JMP(again);
LOCAL_LABEL(endif31824);
MOVE(E, save_e31845);
MOVE(P, save_p31844);
RETURN(result31814);
END_CODE

Name: Anonymous 2014-06-06 12:07

Me no understando moon rocks

Name: Anonymous 2014-06-06 12:19

>>3

simple english: haskell is slow because it uses continuations.

Name: Anonymous 2014-06-06 12:46

Symta's operators aren't capable of being composed in such a way as to represent Nth-order eigenfunctor algebra on the commutative ring of (N-1)th-differentiable endotensor matrices. Therefore, it is not Turing-complete.

Name: Anonymous 2014-06-06 13:11

>>3

*moon runes
Stupid American.

>>4

What about in nigger english?

Name: Anonymous 2014-06-06 13:11

>>5
So brainfuck>symta?

Name: Anonymous 2014-06-06 13:41

looks like shit. good job.

Name: Anonymous 2014-06-06 13:54

>>6
Haskell be slow 'cause it uses continuations.

Name: Anonymous 2014-06-06 13:55

>>6
Haskell be slow 'cause it uses continuations.

Name: Anonymous 2014-06-06 13:55

>>6
Haskell be slow 'cause it uses continuations.

Name: Anonymous 2014-06-06 13:56

Sure, Symta is good, but is it brainfuck good?

Name: Anonymous 2014-06-06 13:57

Sorry, fucking triple post because my internet is shit.

Name: Anonymous 2014-06-06 14:14

>>13
I thought it was intentional.

Name: Anonymous 2014-06-06 14:27

CONTINUE MY ANUS

Name: Anonymous 2014-06-06 15:58

>>4
I know that continuations are slow, but couldn't they be made optional? I.e. the programmer decides whether he wants continuations or speed.

Name: Anonymous 2014-06-06 16:02

>>15
CONTINUE MY ANUS

Name: Anonymous 2014-06-06 19:13

>>17
Tell me how you'd do that.

Name: Anonymous 2014-06-06 19:15

anus anus anus[/o]

Name: Anonymous 2014-06-06 19:16

anus anus anus

Name: Anonymous 2014-06-06 19:42

>I've optimized _let compilation to the stack, which allowed the following code to execute at 0.9x the speed of C/C++

Languages don't have speed, you filthy homosexual Jew.

Name: Anonymous 2014-06-06 19:54

>>21
Then how do you optimize a language?

Name: Anonymous 2014-06-06 19:59

>>2
I'm curious why you're not using LLVM as a backend if you output the kind of C that's pretty much inline assembly already.

Name: Anonymous 2014-06-06 20:32

>>23
Maybe because this self-hating kike does not like that enterprise bs that named llvm
it is also much easier to generate C code, in my opinion at least

Name: Anonymous 2014-06-06 21:43

>>23
then the whole compiler has to be in sepples and who wants that

Name: Anonymous 2014-06-06 23:41

>>24

Yeah. C code is easier to debug and change. LLVM also requires some external dependencies (one of the reasons I'm not using SBCL). In the end I want to just GCC the code.

Name: Anonymous 2014-06-07 0:09

>>26

Directly compiling to x86-64 wouldn't be a bad idea either, but it would add NASM as a dependency (GAS syntax is so confusing!).

Name: Anonymous 2014-06-07 0:10

>>1
Shouldn't it use the grade 3 optimization?

sum(1:n) = (n/2) * (n+1)

Name: Anonymous 2014-06-07 0:39

>>28

Actually, I found a way to disable it instead, because GCC kept converting the loop into that closed-form expression.
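Concretely, here's a sketch of the effect (the pass is GCC's final value replacement in the scalar-evolution machinery; I believe -fno-tree-scev-cprop turns it off, but take the flag name with a grain of salt):

#include <stdint.h>

/* the benchmark loop, same shape as the Symta code above */
int64_t sum_loop(int64_t n) {
    int64_t s = 0;
    while (n > 0) { s += n; n -= 1; }
    return s;
}

/* what the optimizer effectively replaces it with at -O2,
   i.e. the "grade 3" formula from >>28 */
int64_t sum_closed_form(int64_t n) {
    return n * (n + 1) / 2;
}

With that replacement active, the whole benchmark collapses to a constant and measures nothing.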

Name: Anonymous 2014-06-07 1:49

I think "double-headed heap" can be implemented using two heaps with guard pages triggering heap growth.

Name: Anonymous 2014-06-07 2:07

>>1
but haskell doesn't depend on continuations

Name: Anonymous 2014-06-07 2:13

monads

Name: Anonymous 2014-06-07 2:38

>>31

monads reify continuations, meaning the underlying compiler has to support them
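To make "reify" concrete, here is a plain-C illustration of continuation-passing style (my own sketch, nothing to do with how GHC actually compiles monads): the continuation becomes an explicit value that is passed around instead of an ordinary return.

#include <stdio.h>

/* a continuation: "the rest of the computation" as a function pointer + env */
typedef void (*cont)(long result, void *env);

/* instead of returning a value, invoke the continuation with it */
static void add_cps(long a, long b, cont k, void *env) {
    k(a + b, env);
}

static void print_result(long result, void *env) {
    (void)env;
    printf("%ld\n", result);
}

int main(void) {
    add_cps(2, 3, print_result, NULL);   /* prints 5 */
    return 0;
}

Every call site now threads an environment and does an indirect call, which is roughly the overhead being argued about in this thread.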

Name: CO to the rescue 2014-06-07 9:22

>>1
"speed is about low-level hat tricks" - that is every bytewanker's pipedream.
Instead, speed is about program transformations, which require reasoning about side effects (problematic in most languages; the term "side effect" is optimization-context-dependent here) and more.

>>33
umena >haskell compiler have to ``support'' ``continuations''
unformalized babbling gets you nowhere

Name: Anonymous 2014-06-07 10:03

>>29
Would it be hard to port the Symta -> C translator from C to Symta?
=)

Name: Anonymous 2014-06-07 13:16

>>35
the main part of the Symta compiler is written in Common Lisp, is it not???

Name: Anonymous 2014-06-07 13:34

Name: Anonymous 2014-06-07 15:03

>>37
https://github.com/saniv/symta/stargazers
Stargazers
affisz
Brazil
I can't believe that cretin is still alive. Why don't we gang up with the niggas and wipe him once and for all?

Name: Anonymous 2014-06-07 15:10

>>38
who's that guy?

Name: Anonymous 2014-06-07 16:01

>>39
The node.js cunt from several months ago.

Name: Anonymous 2014-06-07 17:03

>>39
The second Javashit kike.
