
best scheme/lisp for real time code

Name: Anonymous 2014-09-01 13:09

Is Chicken a good idea? I am worried about the way it does GC. Exhausting the stack seems a bit... Java. I'm looking at Gambit too but I don't know much about it yet.

Whatever I end up using needs to work on ARM. It doesn't have to target C as long as it has FFI and short GC pauses (I am aiming for less than 5ms.)

Name: Anonymous 2014-09-01 14:20

>>1

My latest implementation of Symta doesn't do GC, just the stack, so local data (that doesn't get returned) gets freed immediately. The trade-off is a write barrier, because we need to detect writes to the lower stack levels. Purely functional code doesn't really get affected by that.

Name: Anonymous 2014-09-01 16:59

>>2
How does that work? It sounds like linear types, but how do you deal with references to stack variables, e.g. local-capturing lambdas? Does that interfere with TCO?

Name: Anonymous 2014-09-01 17:11

>>1
Use Gambit or stop being a faggot and move to SBCL.

Name: Anonymous 2014-09-01 18:27

>>4
Sorry, my criteria go a little deeper than "what does >>4-kun do?"

Name: Anonymous 2014-09-01 19:13

>>3

Data returned to the caller will be copied to the caller's stack frame and will be left there, unless GC is invoked manually on the stack frame in question.

Name: Anonymous 2014-09-01 19:26

>>6
Ok, that makes sense. I take it you refcount the data to find out what needs copying?

Name: Anonymous 2014-09-01 19:54

>>7

No. Just divide the stack into two halves (top and bottom), for even and odd frames, then copy between them on return, like a normal copying GC does.
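
Roughly like this, if it helps. This is just a toy sketch of the shape, not the actual Symta runtime, and it pretends cons cells are the only heap objects:

    #include <stddef.h>

    typedef struct Obj { struct Obj *car, *cdr; int leaf; long val; } Obj;

    static char   half[2][1 << 20];   /* the two halves: even frames, odd frames */
    static size_t top[2];             /* bump pointer into each half */
    static int    depth;              /* current frame depth: ++ on call, -- on return */

    static Obj *alloc_obj(void) {
        int h = depth & 1;                     /* frame N allocates in half N & 1 */
        Obj *o = (Obj *)&half[h][top[h]];
        top[h] += sizeof(Obj);
        return o;
    }

    /* On return, evacuate the result into the caller's half; everything the
       callee allocated above its entry mark can then be dropped in one go.
       No forwarding pointers here, so shared structure gets duplicated;
       a real collector would handle that. */
    static Obj *copy_to_caller(Obj *o) {
        if (!o) return NULL;
        int h = (depth - 1) & 1;               /* caller's parity */
        Obj *c = (Obj *)&half[h][top[h]];
        top[h] += sizeof(Obj);
        *c = *o;
        if (!o->leaf) {
            c->car = copy_to_caller(o->car);
            c->cdr = copy_to_caller(o->cdr);
        }
        return c;
    }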

Name: Anonymous 2014-09-01 23:11

Nikita's explanations are gold shimmers in muddy waters.

Name: Anonymous 2014-09-03 23:29

>>8
Ok, so not like linear types at all.

So far, so good. The write barriers: are they used to keep the references from older stack frames valid when moving data? Or have I misunderstood?

Another question: have you profiled the write barriers? How often are they needed? I'm not sure what you would use for a benchmark. Maybe I should just take a look at it myself. Is the source available?

Name: Anonymous 2014-09-04 3:30

I often think about this, because I like Lisp but dislike GC.

You can get a "real time automatic memory management" of sorts in any language by never freeing anything, only allocating by bumping a pointer and relying on the OS to clean things up when the program exits.

From there, it's easy to have separate spaces for different purposes, and be able to independently reset each one to free up space. One example would be a common pattern in games, where one space is used for storing per-level data, and the other for per-frame data. Apache uses a similar approach, with one memory "pool" per request serviced. Another would be providing separate spaces for userspace threads. Obviously it's not truly automatic anymore because you have to think a bit about object lifetimes and whatnot, but at least you aren't calling malloc and free all the time or dealing with the drawbacks of mark-sweep or reference counting.
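
To make that concrete, here's the kind of thing I have in mind, in C (all names made up):

    #include <stdlib.h>
    #include <stddef.h>

    typedef struct {
        char  *base;
        size_t used, cap;
    } Space;

    static Space space_new(size_t cap) {
        Space s = { malloc(cap), 0, cap };
        return s;
    }

    static void *space_alloc(Space *s, size_t n) {
        n = (n + 7) & ~(size_t)7;            /* keep allocations 8-byte aligned */
        if (s->used + n > s->cap) abort();   /* a real one would grow or chain blocks */
        void *p = s->base + s->used;
        s->used += n;                        /* "allocation" is just bumping a pointer */
        return p;
    }

    static void space_reset(Space *s) { s->used = 0; }   /* frees everything at once */

    int main(void) {
        Space level = space_new(16u << 20);  /* lives until the level is unloaded */
        Space frame = space_new(1u << 20);   /* wiped at the end of every frame */
        /* ... load the level, allocating from &level ... */
        for (int tick = 0; tick < 3; tick++) {
            /* ... per-frame scratch data goes into &frame ... */
            space_reset(&frame);
        }
        space_reset(&level);
        free(level.base);
        free(frame.base);
        return 0;
    }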

You could fairly easily integrate this into a Lispy language, either by explicitly passing an allocator to each function that allocates memory, or implicitly by setting an allocator per environment (but then things get hairy when you want to put short and long lived objects in different regions). In either case, purists would say it "isn't Lisp," but I think the result would be pretty usable if you were willing to bloat up code a bit and waste some memory for the sake of runtime simplicity.
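
The "explicitly pass an allocator" version is then just threading a Space through every allocating primitive, e.g. (hypothetical, reusing the Space above):

    typedef struct Cell { struct Cell *car, *cdr; } Cell;

    /* every allocating primitive takes the space it should allocate from */
    static Cell *cons_in(Space *s, Cell *car, Cell *cdr) {
        Cell *c = space_alloc(s, sizeof *c);
        c->car = car;
        c->cdr = cdr;
        return c;
    }

    /* usage: Cell *pair = cons_in(&level, a, b); */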

Name: Anonymous 2014-09-04 4:37

Name: Anonymous 2014-09-04 7:12

>>10

>The write barriers: are they used to keep the references from older stack frames valid when moving data? Or have I misunderstood?
The write barrier is used when code assigns a reference to an object O at frame N to a memory cell at frame M, where M < N. On return from frame M, the memory manager propagates O from M to M-1, unless M=N or the reference got overwritten by another object.
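
In code the check looks roughly like this (invented names, not the actual implementation; every object remembers the frame it was allocated in):

    typedef struct Obj { int frame; /* ... the object's fields ... */ } Obj;

    /* objects that escaped into an older frame; each return propagates them
       one level down until they reach the frame holding the surviving reference */
    static Obj *remembered[1024];
    static int  n_remembered;

    static void write_ref(Obj **cell, int cell_frame, Obj *o) {
        *cell = o;
        if (o && o->frame > cell_frame)       /* younger object stored into older frame */
            remembered[n_remembered++] = o;   /* the barrier fires */
    }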

Name: Anonymous 2014-09-04 20:10

>>13
Ah yeah I see it now.

Why doesn't this show up in the memory management literature? Latency seems very low, I'm guessing somewhere between RC with a cycle collector and RC without one. I've always liked RC because it runs at each return.

>>11
>From there, it's easy to have separate spaces for different purposes, and be able to independently reset each one to free up space.

Arenas. https://en.wikipedia.org/wiki/Region-based_memory_management

>either by explicitly passing an allocator to each function that allocates memory, or implicitly by setting an allocator per environment

With arenas you usually have a default allocator and can override it (or in C, a utility function that bumps a pointer on the arena instead of calling malloc). That keeps you from having to tell everything that won't outlive a stack frame where to allocate. This means you have your "per-level" data (good example) in an arena while other functions use the standard allocator. The trick is: once the level is loaded, allocations are very rare outside of the arena and virtually nonexistent inside of it. You put re-usable object pools in the arena.
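
In C that could look like this (made-up names, reusing the Space type from the sketch in >>11):

    #include <stdlib.h>

    static Space *current_space = NULL;      /* NULL means "just use malloc" */

    static void *alloc(size_t n) {
        return current_space ? space_alloc(current_space, n) : malloc(n);
    }

    /* Everything allocated while loading the level lands in the level arena;
       afterwards the default allocator takes over for the rare leftovers. */
    void load_level(Space *level) {
        Space *saved = current_space;
        current_space = level;
        /* ... build level data, fill the object pools, all through alloc() ... */
        current_space = saved;
    }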

Name: Anonymous 2014-09-04 20:17

>>12
You have no idea how tempting OCaml is to me. I love types. I just hate installing libraries, then rebuilding utop and then figuring out what I did wrong.

Name: Anonymous 2014-09-04 20:30

>>14

>Why doesn't this show up in the memory management literature? Latency seems very low, I'm guessing somewhere between RC with a cycle collector and RC without one. I've always liked RC because it runs at each return.
Probably because it is a variation of generational garbage collection, with the heuristic being that returned values live longer than locals.

Name: Anonymous 2014-09-04 20:50

compile my anus

Name: Anonymous 2014-09-05 1:14

>>16
Sure, but when people are choosing a GC there's a list of things they go through. This is like generational but with a profile more like RC. That's a sweet spot people never get to consider when choosing a GC.

Name: Anonymous 2014-09-05 3:17

>>18

People don't choose GC. Language designers do. People choose languages, of which there is a fairly limited choice, if you want to be relevant.

Name: Anonymous 2014-09-05 3:40

>>19
I am talking about PL designers. A lot of it happens in the open, and any discussion of GC virtually always begins with a recap of RC, then moves on to other things, eventually touching on generational at some point. This is a step PL designers are missing, and an important one in my opinion.

Name: Anonymous 2014-09-05 4:05

>>20

Most PL designers today reuse existing frameworks, like JS, the JVM and the CLR. Memory management is one of the more complex runtime parts. During Symta's implementation, most bugs came from the runtime's memory management (despite it being just a few lines of code), and these are hard bugs to make sense of.

Name: Anonymous 2014-09-05 4:08

>>21

There is also the problem where you pass a lambda callback to foreign code. You can never predict when it will be safe to free the associated closure.

Name: Anonymous 2014-09-05 5:32

>>21
I'm not a fan of those languages. Not because they get memory management wrong, but because they insist on sticking you with some other platform when it's usually inappropriate. (I would like to use CLR but the fact is I can't for most things.)

To pick a few random ones: Go, Rust and Perl 6 don't stick you with a different language's platform, and have each decided to use their own GC.

Rust doesn't seem to have one yet. Servo does use GC via JavaScript, but only for the DOM, where you can't avoid having JavaScript's GC in play. I think GC will be done in a library, where users can even provide their own.

>>22
That is a concern, but it's a common one. The two primary programmer-oriented solutions are fine for me: 1. ensure the lifetime of the closure exceeds that of the foreign environment or 2. manually deallocate the closure when it is no longer needed.
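
For option 2, the usual shape is an explicit root table (names are made up, not any particular runtime): the closure stays pinned while the foreign side might still call it, and gets released when the callback is unregistered.

    typedef struct Closure Closure;          /* whatever the runtime's closure type is */

    static Closure *roots[256];              /* pinned closures the GC must not touch */
    static int      n_roots;

    static int pin_closure(Closure *c) {     /* call before handing c to foreign code */
        roots[n_roots] = c;
        return n_roots++;
    }

    static void unpin_closure(int handle) {  /* call when the callback is removed */
        roots[handle] = NULL;                /* now the GC (or a manual free) can reclaim it */
    }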

Name: Anonymous 2014-09-06 3:17

>>14
I'm aware of the "arena" name for the concept.
I can think of a lot of situations where you need to allocate both short and long-term/semi-permanent storage to solve a problem. Obvious example: a compiler needs to allocate short-lived buffers for reading a file (ok, we can use mmap for this one), producing the parse tree, expanding macros, and producing an internal representation like SSA, while at the same time "permanently" adding data to a long-lived arena, like the code being generated and the symbols associated with it. Sure, you can use separate object pools per task, but in a dynamic program like a compiler that just punts the problem further down the road, until you run out of space for e.g. symbols and need to make another allocation for more "permanent" storage.

Name: Anonymous 2014-09-06 4:15

>>24
Sounds like you would have long pauses unless the arenas were very small.
