>>2 Studying ASM as a programmer for PCs is like a doctor studying quantum chemistry in an attempt to understand how to treat a snake bite.
Name:
Anonymous2016-08-24 18:22
C-dder is all talk and no action.
Name:
Anonymous2016-08-24 18:32
>>5 x86 hardware doesn't optimize for arrays (except for SIMD). It has what the FP crowd likes to call the ``von Neumann bottleneck''.
If you think that's fast, let's see the linked list compete against arrays on an array processor or GPU.
Name:
Anonymous2016-08-24 18:37
>>6 Nah, it's like studying the chemistry of toxins. Quantum chemistry would be more like λ-calculus.
Name:
Anonymous2016-08-24 18:52
>>9 λ-calculus has nothing to do with programming, computer science, mathematics, or computers.
Name:
Anonymous2016-08-24 19:16
For imperative dogs, perhaps, but you can almost always find a better structure for the task at hand and reduce the number of operations necessary in the first place, instead of just making traversal faster.
Name:
Anonymous2016-08-24 19:38
Lambda calculus is of limited relevance to practical programming, since it has no notion of I/O, nor does it address the physical realities of data storage (you might call a variable "A", but that variable is just an idea; the actual data might be in a CPU register at one point, in RAM at another, and on disk at yet another). Assembly can be useful to programmers, depending on their area of work. If you work exclusively in very high-level abstract languages, you might not have much use for it, but if you're doing reverse engineering or writing software for embedded systems, it's pretty important.
Name:
Anonymous2016-08-24 19:42
>>11 Arrays are always going to be the most efficient at the machine-code level, since storage is contiguous and ordered. To get to the next element in an array, just advance to the next memory address (assuming each element is one byte; larger elements just mean a larger stride - if your array has 5-byte elements, jump ahead 5 bytes to get the next element). With linked lists you have to explicitly follow a pointer to the next element (which may not be contiguous), and unless you have a doubly linked list you have no way to go backwards.
>>8 Caching means memory access is optimized for contiguous chunks of data, rather than things spread around the heap.
It's why I enjoy programming for old, uncached, simple CPUs like the 68000. I can write any damn algorithm for any damn data structure, and all I care about is the number of instructions executed.
Name:
Anonymous2016-08-24 20:25
>>14 Yes, that's an advantage of linked lists, but it makes traversing them quite awkward, unless you later reorder them into a proper array.
>>11,15 People are so brainwashed by Lisp schools that they are unable to understand that arrays are not lists meant to be traversed linearly in one direction.
Arrays are a random access data structure. You can access any part of an array without traversing the previous parts. You can't do that with a linked list.
Name:
Anonymous2016-08-24 23:46
>>18 you can keep pointers to handy locations in a list as well
Name:
Anonymous2016-08-25 0:32
>>18 Arrays are actually still pretty much the ideal structure for forward linear traversal, because having them be contiguous in memory simplifies the resulting machine code. The ONLY advantage of linked lists is that they allow insertion at any point, not just the end.
Name:
Anonymous2016-08-25 2:27
>>20 Yes, if you are loading data into memory, already know the size, and will only be reading from it or making changes that don't alter the size of any element, arrays are great. But almost anything else has a better data structure available. For example, if you will be doing lots of searching, you should use some sort of self-balancing tree. The O(lg(n)) worst-case time complexity will save far more time than is saved by having the data block in the cache.
And the whole thing falls apart when storing variable-length things like strings anyway, since they still have to be dereferenced. Speaking of strings, I don't see why you imperative types bitch about Lisp here; Haskell is the one that stores strings as linked lists.
Name:
ANDRU2016-08-25 2:50
>>21 A bit of a digression, but speaking of self-balancing trees, I really like skip lists: most of the benefit, but super easy to implement. Carry on, folks.
>>27 Arrays are the least scalable data structure. You have to destroy or mutate the entire thing to perform many simple processing steps. Arrays are contiguous, and the whole thing needs to fit in RAM. It's lots of fun when you get some massive file you need to process and array-based garbage starts swap thrashing or dies from OOM.
>>5,11,21,26,31 Linked lists are a Lisp scam to take the random access out of computing and force their garbage collection on the people. You don't need garbage collection to use arrays safely. Garbage collection is only necessary for linked lists.
Name:
Anonymous2016-11-21 17:44
>>36 Linked lists are preferable for mutable, ordered data structures, as middle insertion requires O(1) write operations for a linked list, but O(n) write operations for an array.
Name:
Anonymous2016-11-21 18:42
arrays are good, linked lists are good
linked lists of arrays are god tier
Name:
Anonymous2016-11-21 18:42
A two-dimensional array of linked lists is used in games.
Name:
Cudder!MhMRSATORI2016-11-21 18:49
>>36 I use linked lists everywhere in my pure assembly Windows 95 browser.
>>42 Link the array in any order you like using another array of indices. Bonus: you can control the bit size of the link indices. Only need 255 elements or less? Use 8-bit ints for your index array!
How big are pointers anyway? Is there a minimum chunk of memory that they refer to?
Name:
Anonymous2016-11-22 0:40
No one has built the holy-grail sort yet? It's supposed to be in-place O(n log n).
Quickersort() using array links?
Quicksort has a chunk-style item-reordering segment: (3n / 2) read-[compare]-write ops.
Quickersort with comparison trees? Possibly (2n / 2) read-compare-compare-write? Possible to change pivot points 0 to n-1 times through an n-iteration?