Segmenting GC: - Partition the heap into N segments; - Give each segment a bitmap recording which memory cells the segment references; - After collecting a segment, compare its resulting bitmap to the original, then free the now-unused objects (bits that went from 1 to 0, and which are also 0 in every other segment's bitmap).
Pros: - Incremental; - Parallelizable into N threads; - Collected memory can be distributed across N machines (share your memory across the Internet); - Each segment could be further partitioned into subsegments, which are collectible separately.
Cons: - Write barrier required; - Bitmap memory is proportional to N; - Execution time is proportional to memory_size/N.
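The bitmap-diff step above can be sketched in a few lines of C. This is a minimal illustration, not a real collector: segment count, cell count, and the names (`ref_bits`, `collect_segment`) are all assumptions of the sketch, and the heap fits in one 64-bit word per segment.

```c
#include <stdint.h>

#define NSEG 4  /* N segments, assumed small for the sketch */

/* ref_bits[s] records which heap cells segment s references (1 bit per cell) */
uint64_t ref_bits[NSEG];

/* Mask of cells referenced by any segment other than `seg`. */
static uint64_t other_refs(int seg) {
    uint64_t m = 0;
    for (int s = 0; s < NSEG; s++)
        if (s != seg) m |= ref_bits[s];
    return m;
}

/* After re-scanning segment `seg` into `new_bits`, return the set of
 * cells that became garbage: bits that went 1 -> 0 in this segment
 * and are not referenced from any other segment. */
uint64_t collect_segment(int seg, uint64_t new_bits) {
    uint64_t dropped = ref_bits[seg] & ~new_bits;   /* 1 -> 0 */
    uint64_t freed   = dropped & ~other_refs(seg);  /* no outside refs */
    ref_bits[seg] = new_bits;
    return freed;
}
```

Note the "Cons" follow directly from the sketch: mutators must update `ref_bits` on every pointer store (the write barrier), and the bitmap storage grows with N.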
Name:
Anonymous2014-05-27 0:14
if your hand is larger than your face
Name:
Anonymous2014-05-27 0:18
Cons: - Resolving cross-segment circular references would require compacting/collapsing the cross-referenced data into a single segment (whichever one has more references into the other).
Pros: - The GC implementation will be forced to do compaction and memory defragmentation (especially useful with shared memory residing across several machines).
>>6 Shared-nothing greatly simplifies IPC for the user at least. It tends to be slower, but it's not that bad. You can use an Arc¹ for your shared-object needs. [1] http://doc.rust-lang.org/0.10/sync/struct.Arc.html
Maybe check out LVars. I don't know what a good collector would look like with LVars or if it's even helpful, but at least determinism would be enforced. I suppose the resulting lattice would be huge.
>>8 is being a shit, but Rust would allow you to avoid GC most of the time without abandoning it entirely. You also don't do manual memory management the way you normally would. The extent to which you have to write lifetime annotations is fairly small, and when you do they're checked for correctness.
>>10 In basically all of the languages with shared-nothing, serialization is automatic. In Rust the required trait can be derived by the compiler.
>>11 It's ugly, and will remain that way until/unless higher-kinded types arrive. That's probably not going to make it pretty for you, but it will for me. Go could be pretty as you like and it wouldn't help; it's not typesafe.
Rust isn't a C clone. It's more of a sweet spot between ML and C++.
Rust has to appeal to C++ programmers, hence the "less worse than C++" syntax. ML programmers don't need it, they can just use Haskell or whatever if they want the hottest shit.
>>14 Keep trying different combinations, eventually someone will get offended.
>>16 I've already been paid to write Rust. Only a few dozen people are being paid to write it today but it isn't even 1.0 yet. In 2-3 years we can talk about how well Rust is doing. It should see 1.0 later this year.
Name:
Anonymous2014-05-27 10:06
GC
Pros - It allows idiots and retards to do programming
Cons - It's shit - Idiots and retards can now be your coworkers
structured loops are too verbose. I'm always wrapping them inside of macros like

/* GENSYM is assumed to expand to a per-invocation identifier, e.g.:
 *   #define CAT(a,b) a##b
 *   #define GENSYM(x) CAT(x, __LINE__) */
#define map(X,Xs,Body) ({ \
    __typeof__(Xs) GENSYM(_map_Xs) = (Xs); \
    __typeof__(GENSYM(_map_Xs)[0]) X; \
    __typeof__(({Body;})) GENSYM(_map_Y); \
    QVector<__typeof__(GENSYM(_map_Y))> GENSYM(_map_Ys); \
    int GENSYM(_map_K); \
    for (GENSYM(_map_K) = 0; GENSYM(_map_K) < GENSYM(_map_Xs).size(); GENSYM(_map_K)++) { \
        X = GENSYM(_map_Xs)[GENSYM(_map_K)]; \
        GENSYM(_map_Ys).append(({Body;})); \
    } \
    GENSYM(_map_Ys); \
})
Name:
Anonymous2014-05-28 20:33
>>30 wtf this isn't assembly class go back to 1980 LOL we have real languages now, high level ones, not that you'd be able to recognize one when you see it you assembler robot
Name:
Anonymous2014-05-28 21:51
>>32 Assembly will be relevant as long as machine code exists.
tfw no pauseless GC
tfw GC defenders keep saying it can be done but refuse to tell you which GC algorithm works for this; they just try to intimidate you away from asking more by saying you have a lot to learn
GC fags are the worst: intellectually dishonest liars.
1. You can still have unmanaged objects. 2. The heap size can be limited to one that can be handled in real time. 3. By avoiding circular references, you make way for algorithms, like the one in the OP, which have guaranteed execution time.
Name:
Anonymous2014-05-29 9:31
>>41 How do circular references prevent having a guaranteed execution time?
Name:
Anonymous2014-05-29 14:01
Does the universe do garbage collection? Nope, it just lets the garbage collect until it either explodes or collapses. If it's good enough for the universe, it's good enough for me.
>>44 That, or I just allocate all available system memory at the start and write wherever I please.
I don't understand this stupid obsession with memory consumption. If you don't want the memory to be used, then feel free to open up your computer, remove some chips, and put them in a drawer somewhere. That memory is guaranteed to be free.
How do circular references prevent having a guaranteed execution time?
Circular references impede incremental approaches, like reference counting or segmenting the heap. I.e. you can't simply divide and conquer the problem; you have to do a full pass. Otherwise you could have just segmented the memory into N parts, collecting 1/N every M seconds, making GC behavior predictable.
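The classic failure case is easy to show with a toy refcounting scheme. Everything here (`Node`, `node_release`, etc.) is a hypothetical minimal sketch, not a real library: two nodes point at each other, both are dropped by the program, yet neither count ever reaches zero.

```c
#include <stdlib.h>

/* Toy refcounted node: one strong count, one outgoing reference. */
typedef struct Node {
    int refs;
    struct Node *next;
} Node;

Node *node_new(void) {
    Node *n = calloc(1, sizeof *n);
    n->refs = 1;
    return n;
}

void node_set_next(Node *n, Node *target) {
    if (target) target->refs++;
    n->next = target;
}

/* Drop one reference; returns 1 if the node was actually freed. */
int node_release(Node *n) {
    if (--n->refs > 0) return 0;  /* still "referenced": kept alive */
    /* a full implementation would release n->next here, too */
    free(n);
    return 1;
}
```

Link two nodes into a cycle, release both program references, and both nodes survive with `refs == 1` each: unreachable, but the incremental (local) counting can never prove it without a full-heap pass.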
Name:
Anonymous2014-05-30 5:30
>>49 Oh, so you're talking about GC. I thought we'd reached a general consensus that GC is shit. None of those problems would arise with "manual" memory management (which is actually quite automated yet allowing much more freedom and expressiveness to the programmer).
I thought we'd reached a general consensus that GC is shit.
Far from it.
None of those problems would arise with "manual" memory management (which is actually quite automated yet allowing much more freedom and expressiveness to the programmer).
I got one question for you, you highly persistent and annoying individual. If a programming language implementation tries really hard to resolve allocation lifetime extents, and then for whatever is left unresolved either uses GC or (depending on a switch) asks the programmer to specify extents manually (or restructure the code to make it more obvious (to the analyzer)), would you qualify that language as ``GC'' or ``"manual" memory management''?
Name:
Anonymous2014-05-30 10:10
>>51 If the programmer has the freedom to decide when any object in non-stack memory is allocated and freed (including the choice between pass-by-ref and pass-by-value), then there is no forced GC (i.e. the memory management is manual).
So yes, if the programmer is able to statically specify the lifetime extents whenever the compiler can't resolve them, that is manual memory management.
However, in practice there is another requirement: namely that there is no GC-related bookkeeping which hogs memory. E.g. if one allocates a linked list, there should be only one pointer that points to every node, not two or more.
Name:
Anonymous2014-05-30 10:17
why all the people here use words like stack and heap? these are just data structures
>>49 So GC is good but only until you start using doubly-linked lists, trees with parent and sibling pointers, and just about any data structure with circular links? Great, simply amazing. No wonder C and Sepples are still so popular.
If the programmer has the freedom to decide when any object in non-stack memory is allocated and freed (including the choice between pass-by-ref and pass-by-value), then there is no forced GC (i.e. the memory management is manual).
Then could you please update your personal catchphrase to ``forced GC is shit''?
However, in practice there is another requirement: namely that there is no GC-related bookkeeping which hogs memory. E.g. if one allocates a linked list, there should be only one pointer that points to every node, not two or more.
I'm not sure that's fair. Even malloc has a pointer-sized overhead per allocation.
Name:
Anonymous2014-05-30 11:32
>>58 So malloc is shit too? Almost as shit as forced GC. Why is the world shit? Why can't I have a perfect one pointer per node? Why is there so much memory wasted? Why do we needlessly flip bits and torture the computers? Why?
Ok can it at least be done in assembly? Allocate a singly-linked list with only one pointer per node? Then calculate its length and safely deallocate the whole thing? Is it possible?
Name:
Anonymous2014-05-30 14:56
>>63 Wow you're freakin' out. Yes. You can do it in C too. Just request memory from the OS, and do what you want with it.
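To make >>63's challenge concrete, here's a sketch in C rather than assembly (the idea is identical): carve all the nodes out of one contiguous arena, keep exactly one `next` pointer per node, and deallocate the whole thing with a single `free`. Names like `list_build` are made up for the example.

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct Cell { int val; struct Cell *next; } Cell;

/* Build an n-node singly-linked list inside one malloc'd arena:
 * one pointer per node, no per-node allocator bookkeeping. */
Cell *list_build(int n, void **arena_out) {
    Cell *arena = malloc(n * sizeof(Cell));
    for (int i = 0; i < n; i++) {
        arena[i].val  = i;
        arena[i].next = (i + 1 < n) ? &arena[i + 1] : NULL;
    }
    *arena_out = arena;
    return n > 0 ? &arena[0] : NULL;
}

int list_length(const Cell *c) {
    int len = 0;
    for (; c; c = c->next) len++;
    return len;
}
/* Deallocation of the entire list is one call: free(arena). */
```

This is also what >>66 is getting at: the safety of the single `free` depends on the list staying inside the one arena, so inserting nodes from elsewhere brings back per-node allocation and its bookkeeping.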
>>63 Only if you keep the list contiguous in memory, meaning you'll need to defragment every time you remove a list element. And in that case you might as well just use a standard array.
Name:
Anonymous2014-05-30 15:48
>>66 So the extra pointer is to tell the memory allocator which addresses are free and which aren't? Thus it's unavoidable and when the garbage collector keeps an extra pointer for every node, it actually isn't wasting memory needlessly?
Name:
Anonymous2014-05-30 16:09
>>67 If you use equal-sized memory regions and allocation bitmaps, you can reduce the overhead to 1 bit per memory block.
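The 1-bit-per-block scheme from >>69 fits in a few lines. A sketch with assumed sizes (64 equal-sized blocks, so the whole map is one word; `block_alloc`/`block_free` are names invented for the example):

```c
#include <stdint.h>

#define NBLOCKS 64  /* equal-sized memory blocks, one status bit each */

uint64_t alloc_map;  /* bit i set => block i is in use */

/* Find a free block, mark it used, return its index (-1 if full). */
int block_alloc(void) {
    for (int i = 0; i < NBLOCKS; i++)
        if (!(alloc_map & ((uint64_t)1 << i))) {
            alloc_map |= (uint64_t)1 << i;
            return i;
        }
    return -1;
}

void block_free(int i) {
    alloc_map &= ~((uint64_t)1 << i);
}
```

The overhead really is 1 bit per block, but only because every block is the same size; variable-sized allocations are exactly where the fragmentation of >>70 comes from.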
>>69 It's not a perfect solution, however. You can waste a lot of memory through fragmentation. In fact, as a worst-case scenario, you could end up with 99% of ``free'' memory yet be unable to allocate anything larger than a word. The solutions vary: - Compact the memory (mind you, this isn't GC, you're literally just moving blocks around without peeking inside them). -- You'll have to peek inside objects to update the pointers though, so it's not a great solution. Either that, or use a pointer/handle table (which might or might not flush the cache more often than you'd like). - Use regions. -- This doesn't really prevent fragmentation, it merely restricts it somewhat.
Actually we should make a new thread on combating fragmentation.
>>72 I was talking about what is possible, not about what is currently being done.
Name:
Anonymous2014-05-30 20:46
Get N+1 bit RAM and use the extra bit for allocation status.
Name:
Anonymous2014-05-30 21:25
Refined the idea for purely functional languages (copying won't work with closure mutation): - Break the heap into N equally sized segments; - When object A from segment S gets a reference to object B from segment T, just deep copy the content of object B into S; - When a segment grows past its threshold size (1/2 of max size), collect it; - If it cannot be collected, don't use it for allocation anymore (unlink it from the alloc ring).
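The deep-copy-on-cross-reference rule can be sketched like this, with segments as bump arenas. All names and the object layout are assumptions of the sketch, and it leans on the purely-functional premise: immutable data with no cycles, so the recursive copy terminates.

```c
#include <stddef.h>

#define SEG_SIZE 1024

/* A segment is a bump-allocated arena (hypothetical layout). */
typedef struct Segment { char mem[SEG_SIZE]; size_t used; } Segment;

typedef struct Obj { int val; struct Obj *child; } Obj;

Obj *seg_alloc(Segment *s) {
    if (s->used + sizeof(Obj) > SEG_SIZE) return NULL;
    Obj *o = (Obj *)(s->mem + s->used);
    s->used += sizeof(Obj);
    return o;
}

/* Deep-copy `src` (and everything it references) into segment `s`,
 * so no reference ever crosses a segment boundary. Safe only for
 * acyclic (purely functional) data. */
Obj *seg_deep_copy(Segment *s, const Obj *src) {
    if (!src) return NULL;
    Obj *dst = seg_alloc(s);
    dst->val   = src->val;
    dst->child = seg_deep_copy(s, src->child);
    return dst;
}

/* Installing a cross-segment reference = copy the target in first,
 * then point at the local copy. */
void obj_set_child(Segment *owner, Obj *a, const Obj *b) {
    a->child = seg_deep_copy(owner, b);
}
```

Since no reference ever leaves a segment, each segment can be collected (or abandoned) independently, which is exactly what makes the scheme incremental.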
Name:
Anonymous2014-05-30 23:23
More ideas...
Break the heap into master and slave segments. Then ensure the following laws hold: - a slave can have only one master, but can own its own slaves - an object in a master segment can reference any object of its slave segments - an object in a slave segment can't reference its master's objects
With all these invariants, we can efficiently collect any master segment, because its objects are guaranteed to be free from references from its slaves (no cycles), while the references from its master are part of the GC root set. The joy of slavery!
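The invariant has to be enforced at pointer-store time, i.e. with a write barrier. A minimal sketch, with an assumed layout (each object knows its segment, each slave segment knows its single master; the names are invented for the example):

```c
#include <stdbool.h>
#include <stddef.h>

/* Each slave segment records its one master; masters have master == NULL
 * unless they are themselves slaves of a higher segment. */
typedef struct Segment { struct Segment *master; } Segment;
typedef struct Obj { Segment *seg; struct Obj *ref; } Obj;

/* Is `child` the same segment as `anc`, or a (transitive) slave of it? */
static bool serves_under(const Segment *child, const Segment *anc) {
    for (const Segment *s = child; s; s = s->master)
        if (s == anc) return true;
    return false;
}

/* Write barrier: allow a -> b only when b lives in a's segment or one
 * of its slaves; reject references from a slave back up to a master. */
bool obj_set_ref(Obj *a, Obj *b) {
    if (!serves_under(b->seg, a->seg)) return false; /* breaks the invariant */
    a->ref = b;
    return true;
}
```

Because slave-to-master references are impossible, scanning a master segment's own roots (plus the references from its own master, if any) is enough to collect it without touching the slaves.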