
Advanced Data Structures

Name: Anonymous 2015-05-04 23:22

How often is the answer to a tricky programming problem to use a fancy data structure?

Name: Anonymous 2015-05-04 23:26

About 1:5.

What is a fancy data structure to you?

Name: Anonymous 2015-05-04 23:38

>>2
>What is a fancy data structure to you?
e.g. those detailed in this course:
https://courses.csail.mit.edu/6.851/spring14/

Name: Anonymous 2015-05-05 0:42

>>2
Lists, queues, stacks, vectors, arrays, ....

Name: Anonymous 2015-05-05 2:04

>>2
Having to count using both my fingers and toes.

Name: Anonymous 2015-05-05 3:59

>>2
Red-black B AVL binary search stack hash lists.

Name: Cudder !cXCudderUE 2015-05-05 15:12

All you need are arrays and linked structures. Everything else can be built from those.

Name: Anonymous 2015-05-05 15:27

>>7
Excellent post. Would outsource/10

Name: Anonymous 2015-05-07 21:38

>>7
All you really need are arrays and integers. Linked structures can be built from those.
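
A minimal sketch of how (names made up): indices into a pool of parallel arrays play the role of pointers, with -1 standing in for NULL.

#include <stdio.h>

#define POOL_SIZE 1024
#define NIL (-1)

/* "Pointers" are just integer indices into these parallel arrays. */
static int value[POOL_SIZE];
static int next[POOL_SIZE];
static int top = 0;        /* first unused slot; freeing is left out */
static int head = NIL;

/* Prepend v to the list; returns 0 when the pool is exhausted. */
static int push(int v)
{
    if (top == POOL_SIZE)
        return 0;
    value[top] = v;
    next[top] = head;
    head = top++;
    return 1;
}

int main(void)
{
    for (int i = 1; i <= 5; i++)
        push(10 * i);
    for (int n = head; n != NIL; n = next[n])
        printf("%d ", value[n]);   /* prints 50 40 30 20 10 */
    return 0;
}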

Name: Anonymous 2015-05-08 17:57

>>9
But then whenever your arrays get full, you'll need to dynamically allocate larger arrays and copy the whole thing. This is shit.

Name: Anonymous 2015-05-08 23:25

>>10
Your computer is an array with integers. What happens when your array gets full?

Name: Anonymous 2015-05-09 0:49

>>11
Buy more.

Name: Anonymous 2015-05-09 12:09

>>11
That's irrelevant to the question. Allocating the whole RAM to just one tree would be even more inefficient than copy-when-full. Either way, arrays lose. They have very limited application; trees are much more usable (because usually you have to add and remove arbitrary numbers of elements at run-time), and simulating them with arrays is just weak and underperformant.

Name: Anonymous 2015-05-09 12:32

To a programming problem, rarely -- most of the time it's arrays, linked lists, ordered trees, hash tables, and so forth.

To an architectural design issue, usually.

(though one must always consider the application of the data structure in addition to its construction.)

Name: Anonymous 2015-05-09 14:46

>>13
Does your bloatware executable have a tree structure? Or are you missing the point?

Name: Anonymous 2015-05-09 14:55

>>15
I don't even think you have a point.

Name: Anonymous 2015-05-09 22:02

arrays are best.

Name: Anonymous 2015-05-10 2:22

>>16
I don't think you have a joint. Here, take this one.

Name: Anonymous 2015-05-10 19:40

>>18 rehhhh

Name: Anonymous 2015-05-11 23:49

>>7
All you need are electrons and lack of electrons, everything else can be built from those.

Your question lacks a problem definition.

Name: Anonymous 2015-05-12 15:08

>>20
What if I want to use photons instead?

Name: Anonymous 2015-05-12 18:08

>>13
So you're saying trees are underperformant because you're emulating them with your array machine?

Name: Anonymous 2015-05-12 18:20

>>1
All the time. Just yesterday my boss asked me to add a new input field to our internal PHP application. I implemented a red-black tree instead.

Name: Anonymous 2015-05-12 18:30

>>23
I hope you got a promotion!

Name: Anonymous 2015-05-12 21:34

>>23
Classic case of Mortgage-Driven Software Development:

https://www.youtube.com/watch?v=7RJmoCWx4cE

Name: Anonymous 2015-05-13 16:44

>>13
Arrays are always faster than lists, and in many cases, faster than trees.

Linked lists offer no benefits over arrays. Appending and popping is O(1) for both structures. Random access insertion and deletion are O(n) for both structures. Shifting or copying an array is still cheaper than traversing a list in most cases (see L1, L2, etc. cache), and even in the rare case that it's not, the tradeoff is O(n) for random access read and write.

You don't have to take my word for it: just try comparing them with an expensive algorithm and see for yourself. In my tests, a linked list has never beaten a dynamic array.

Trees are a little better, but read and write are still problematic (O(1) vs. O(log n) with cache misses). They're always better than linked lists though, except they're harder to write.
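
If you want to try the comparison yourself, here's a crude sketch (all numbers are machine- and allocator-dependent, and building the list in one tight loop is, if anything, generous to its locality):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000

struct node { int v; struct node *next; };

int main(void)
{
    /* Same N ints, stored contiguously and as individually malloc'd nodes. */
    int *arr = malloc(N * sizeof *arr);
    struct node *head = NULL;
    for (int i = 0; i < N; i++) {
        arr[i] = i;
        struct node *n = malloc(sizeof *n);
        n->v = i;
        n->next = head;
        head = n;
    }

    clock_t t0 = clock();
    long long asum = 0;
    for (int i = 0; i < N; i++)
        asum += arr[i];
    clock_t t1 = clock();
    long long lsum = 0;
    for (struct node *n = head; n != NULL; n = n->next)
        lsum += n->v;
    clock_t t2 = clock();

    printf("array sum %lld: %.3f s\n", asum, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("list  sum %lld: %.3f s\n", lsum, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}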

Name: Anonymous 2015-05-13 17:43

>>26
>Shifting or copying an array is still cheaper than traversing a list in most cases (see L1, L2, etc. cache), and even in the rare case that it's not, the tradeoff is O(n) for random access read and write.

Poor locality is not as big a problem if you are being smart about the allocation of the list nodes. A pool of elements with embedded nodes is a hell of a lot better than a heap of elements and external nodes.
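
Roughly the difference, with made-up types. With the link embedded and the elements coming from one pool, the next node is usually close by in memory:

#include <stdio.h>

/* Embedded (intrusive) node: the link lives inside the element and the
   elements come out of one contiguous pool. The alternative, a separately
   malloc'd node holding a pointer to the element, costs two dependent
   loads per hop and scatters the nodes across the heap. */
struct element {
    int key;
    struct element *next;   /* the embedded link */
};

static struct element pool[4096];

int main(void)
{
    struct element *head = NULL;
    for (int i = 0; i < 8; i++) {   /* "allocate" by bumping an index */
        pool[i].key = i;
        pool[i].next = head;
        head = &pool[i];
    }
    for (struct element *e = head; e != NULL; e = e->next)
        printf("%d ", e->key);
    return 0;
}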

Name: Anonymous 2015-05-14 0:34

>>26
Linked lists can be used with heterogeneous types, and linked structures in general are better for implementing multi-threaded data structures. For example, skip lists.
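
For the heterogeneous part, a minimal sketch of how it looks in C with a tagged union per node (a Lisp cons gets this for free, since every object carries its own type tag):

#include <stdio.h>
#include <stdlib.h>

enum tag { TAG_INT, TAG_STR };

/* One list, two element types, discriminated by a tag. */
struct node {
    enum tag tag;
    union { int i; const char *s; } as;
    struct node *next;
};

static struct node *cons_int(int i, struct node *next)
{
    struct node *n = malloc(sizeof *n);
    n->tag = TAG_INT;
    n->as.i = i;
    n->next = next;
    return n;
}

static struct node *cons_str(const char *s, struct node *next)
{
    struct node *n = malloc(sizeof *n);
    n->tag = TAG_STR;
    n->as.s = s;
    n->next = next;
    return n;
}

int main(void)
{
    struct node *list = cons_int(42, cons_str("hello", NULL));
    for (struct node *n = list; n != NULL; n = n->next) {
        if (n->tag == TAG_INT)
            printf("%d\n", n->as.i);
        else
            printf("%s\n", n->as.s);
    }
    return 0;
}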

Name: Anonymous 2015-05-16 10:59

>>26
>Appending and popping is O(1) for both structures
You need to take a CS 101 course or whatever. Appending and popping for an array aren't O(1).

Name: Anonymous 2015-05-16 21:16

>>29
amortized*

Name: Anonymous 2015-05-16 23:27

>>30
what?

Name: Anonymous 2015-05-17 3:47

Name: Anonymous 2015-05-17 3:49

Name: Anonymous 2015-05-17 14:54

>>29
struct array {
    int *objs;
    int size;
    int capacity;
};


Append:

array.objs[array.size++] = value;

Pop:

array.size--;

So append is actually O(n) when size equals capacity, but still amortized Θ(1). Pop is O(1).
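
Spelled out, the growth path might look like this, using the struct above (the name and the doubling policy are just one common choice):

#include <stdlib.h>

/* Append with reallocation: the rare O(n) copy is what pays for all the
   O(1) appends in between. Returns 0 on allocation failure. */
int array_append(struct array *a, int v)
{
    if (a->size == a->capacity) {              /* full: the O(n) case */
        int newcap = a->capacity ? 2 * a->capacity : 1;
        int *p = realloc(a->objs, newcap * sizeof *a->objs);
        if (p == NULL)
            return 0;
        a->objs = p;
        a->capacity = newcap;
    }
    a->objs[a->size++] = v;                    /* the usual O(1) case */
    return 1;
}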

Name: Anonymous 2015-05-17 15:32

>So append is actually O(n)
Good thing you've finally noticed.

>but still amortized Θ(1)
No, it's still Θ(n). You have to allocate a new, larger array, then copy all n items from the old array to the new, then free the memory taken up by the old array.

Name: Anonymous 2015-05-17 16:29

>>35
>No, it's still Θ(n). You have to allocate a new, larger array, then copy all n items from the old array to the new, then free the memory taken up by the old array.

Only when the allocated array is full. Note the size and capacity vars: size is the number of elements in the array, capacity is the number of slots allocated for it, and size never exceeds capacity. e.g. initialize with size = 0 and capacity = 1024. That gives you 1024 O(1) append operations before capacity needs to be increased and objs needs to be reallocated. It's amortized Θ(1), trust me.
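
To spell out the arithmetic (assuming capacity doubles whenever it runs out): growing to hold n elements copies 1 + 2 + 4 + ... + n/2 < n elements in total across all the grows, so n appends cost fewer than 2n element writes, i.e. fewer than 2 per append on average. That's all >>30's amortized meant.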

Name: Anonymous 2015-05-17 16:38

>>35
>No, it's still Θ(n). You have to allocate a new, larger array, then copy all n items from the old array to the new, then free the memory taken up by the old array.
Someone did not read the post he is replying to.

Name: Anonymous 2015-05-17 17:26

>>36
If that is true, then why are linked lists still the default in e.g. Lisp and Haskell?

Name: Anonymous 2015-05-17 17:26

Asymptotic complexity is shit for fags. Learn analytic combinatorics instead of this non-constructive kike bullshit.

Name: Anonymous 2015-05-17 17:51

>>38
I don't know about Haskell's implementation but there are plenty of Lisps that don't use linked lists.

Name: Anonymous 2015-05-17 17:58

Name: Anonymous 2015-05-17 18:06

>>41
If it came out in 2009, why is it on a web page from 1997?

Name: Anonymous 2015-05-17 18:17

>>40
name one?

Name: Anonymous 2015-05-17 18:27

>>43
You know. The one with the parentheses.

Name: Anonymous 2015-05-17 18:31

>>41
A book with many pretty pictures. This one likes books with pretty pictures.
