
Got an idea for a GC. Please CC

Name: Anonymous 2014-05-27 0:07

Segmenting GC:
- Partition the heap into N segments;
- Give each segment a bitmap tracking which memory cells the segment references;
- After collecting a segment, compare its resulting bitmap to the original, then free the now-unused objects (bits that went from 1 to 0, and which are also 0 in every other segment).

Pros:
- Incremental;
- Parallelizable across N threads;
- Collected memory can be distributed across N machines (share your memory across the Internet);
- Each segment could be further partitioned into subsegments, which are collectible separately.

Cons:
- Write barrier required;
- Bitmap memory is proportional to N;
- Execution time is proportional to memory_size/N.

Name: Anonymous 2014-05-29 6:52

>>40

1. You can still have unmanaged objects.
2. Heap size can be limited to what can be handled in real time.
3. By avoiding circular references, you make way for algorithms, like the one in the OP, which have guaranteed execution time.

Name: Anonymous 2014-05-29 9:31

>>41
How do circular references prevent having a guaranteed execution time?

Name: Anonymous 2014-05-29 14:01

Does the universe do garbage collection? Nope, it just lets the garbage collect until it either explodes or collapses. If it's good enough for the universe, it's good enough for me.

Name: Anonymous 2014-05-29 14:03

>>43
So instead of free() you use exit()?

Name: Anonymous 2014-05-29 14:06

>>44
That, or I just allocate all available system memory at the start and write wherever I please.

I don't understand this stupid obsession with memory consumption. If you don't want the memory to be used, then feel free to open up your computer, remove some chips, and put them in a drawer somewhere. That memory is guaranteed to be free.

Name: Anonymous 2014-05-29 15:33

>>45
So what's it like using a single-tasked OS?

Name: Anonymous 2014-05-29 18:48

>>43
The garbage is self-collecting through gravity.

Name: Anonymous 2014-05-29 19:36

>>47

The gravity algorithm has O(n^2) time complexity.

Name: Anonymous 2014-05-29 19:42

>>42

>How do circular references prevent having a guaranteed execution time?
Circular references impede incremental approaches, like reference counting or segmenting the heap. I.e. you can't simply divide and conquer the problem; you have to do a full pass. Otherwise you could have just segmented the memory into N parts, collecting 1/N every M seconds, making GC behavior predictable.

Name: Anonymous 2014-05-30 5:30

>>49
Oh, so you're talking about GC. I thought we'd reached a general consensus that GC is shit. None of those problems would arise with "manual" memory management (which is actually quite automated, while allowing much more freedom and expressiveness to the programmer).

Name: Anonymous 2014-05-30 9:00

>>50
>I thought we'd reached a general consensus that GC is shit.
Far from it.

>None of those problems would arise with "manual" memory management (which is actually quite automated yet allowing much more freedom and expressiveness to the programmer).
I've got one question for you, you highly persistent and annoying individual. If a programming language implementation tries really hard to resolve allocation lifetimes statically, and then for whatever is left unresolved either uses GC or (depending on a switch) asks the programmer to specify the lifetimes manually (or to restructure the code to make them more obvious to the analyzer), would you qualify that language as ``GC'' or ``"manual" memory management''?

Name: Anonymous 2014-05-30 10:10

>>51
If the programmer has the freedom to decide when any object in non-stack memory is allocated and freed (including the choice between pass-by-ref and pass-by-value), then there is no forced GC (i.e. the memory management is manual).

So yes, if the programmer is able to statically specify the lifetime extents whenever the compiler can't resolve them, that is manual memory management.

However, in practice there is another requirement: namely, that there is no GC-related bookkeeping hogging memory. E.g. if one allocates a linked list, there should be only one pointer pointing at each node, not two or more.

Name: Anonymous 2014-05-30 10:17

why does everyone here use words like stack and heap? these are just data structures

Name: Anonymous 2014-05-30 10:26

>>52
so doubly-linked lists are out? STRANGE.

>>53
IABTWTPAJMA

Name: Anonymous 2014-05-30 10:27

>>54
I was talking about singly-linked lists, my bad.

Name: Anonymous 2014-05-30 10:28

>>55
I was just being a pedantic jerk about it

Name: Anonymous 2014-05-30 10:43

>>49
So GC is good but only until you start using doubly-linked lists, trees with parent and sibling pointers, and just about any data structure with circular links? Great, simply amazing. No wonder C and Sepples are still so popular.

Name: Anonymous 2014-05-30 10:49

>>52
>If the programmer has the freedom to decide when any object in non-stack memory is allocated and freed (including the choice between pass-by-ref and pass-by-value), then there is no forced GC (i.e. the memory management is manual).
Then could you please update your personal catchphrase to ``forced GC is shit''?

>However, in practice there is another requirement: namely that there is no GC-related bookkeeping which hogs memory. E.g. if one allocates a linked list, there should be only one pointer that points to every node, not two or more.
I'm not sure that's fair. Even malloc has a pointer-sized overhead per allocation.

Name: Anonymous 2014-05-30 11:32

>>58
So malloc is shit too? Almost as shit as forced GC. Why is the world shit? Why can't I have a perfect one pointer per node? Why is there so much memory wasted? Why do we needlessly flip bits and torture the computers? Why?

Name: Anonymous 2014-05-30 11:38

>>59
Dude take a chill pill.

Name: Anonymous 2014-05-30 11:39

>>59
I think we can get away from this problem by killing ourselves!!!!!!!!!!!!!!!!!!!!!!!

Name: Anonymous 2014-05-30 11:41

>>57
That only applies to some collectors.

>>59
Most of the waste comes from fragmentation.

Name: Anonymous 2014-05-30 13:27

Ok can it at least be done in assembly? Allocate a singly-linked list with only one pointer per node? Then calculate its length and safely deallocate the whole thing? Is it possible?

Name: Anonymous 2014-05-30 14:56

>>63
Wow you're freakin' out. Yes. You can do it in C too. Just request memory from the OS, and do what you want with it.

Name: Anonymous 2014-05-30 15:02

>>64
Thank you, stranger.

Name: Anonymous 2014-05-30 15:26

>>63
Only if you keep the list contiguous in memory, meaning you'll need to defragment every time you remove a list element. And in that case you might as well just use a standard array.

Name: Anonymous 2014-05-30 15:48

>>66
So the extra pointer is to tell the memory allocator which addresses are free and which aren't? Thus it's unavoidable and when the garbage collector keeps an extra pointer for every node, it actually isn't wasting memory needlessly?

Name: Anonymous 2014-05-30 16:09

>>67
If you use equal-sized memory regions and allocation bitmaps, you can reduce the overhead to 1 bit per memory block.

Name: Anonymous 2014-05-30 16:41

>>68
Okay thank you.
Also, arrays rule.

Name: Anonymous 2014-05-30 17:21

>>69
It's not a perfect solution, however. You can waste a lot of memory through fragmentation. In fact, as a worst-case scenario, you could end up with 99% of ``free'' memory yet be unable to allocate anything larger than a word. The solutions vary:
- Compact the memory (mind you, this isn't GC, you're literally just moving blocks around without peeking inside them).
-- You'll have to peek inside objects to update the pointers though, so it's not a great solution. Either that, or use a pointer/handle table (which might or might not flush the cache more often than you'd like).
- Use regions.
-- This doesn't really prevent fragmentation, it merely restricts it somewhat.

Actually we should make a new thread on combating fragmentation.

Name: Anonymous 2014-05-30 18:33

>>70
Two words: generational GC.

Name: Anonymous 2014-05-30 20:13

>>68

AFAIK, the malloc coming with GCC also prefixes each allocated region with its size. That is a huge overhead for small objects.

Name: Anonymous 2014-05-30 20:28

>>72
*glibc

Name: Anonymous 2014-05-30 20:29

>>72
I was talking about what is possible, not about what is currently being done.

Name: Anonymous 2014-05-30 20:46

Get N+1 bit RAM and use the extra bit for allocation status.

Name: Anonymous 2014-05-30 21:25

Refined the idea for purely functional languages (copying won't work with closure mutation):
- Break the heap into N equally sized segments;
- When object A from segment S gets a reference to object B from segment T, just deep-copy the contents of object B into S;
- When a segment grows past its threshold size (1/2 of max size), collect it;
- If it cannot be collected, don't use it for allocation anymore (unlink it from the alloc ring).

Name: Anonymous 2014-05-30 23:23

More ideas...

Break the heap into master and slave segments. Then ensure the following laws hold:
- a slave can have only one master, but can own its own slaves
- an object in a master segment can reference any object in its slave segments
- an object in a slave segment can't reference its master's objects

With all these invariants, we can efficiently collect any master segment, because its objects are guaranteed to be free from references from its slaves (no cycles), while the references from its own master are part of the GC root set. The joy of slavery!

