
Have you read your PFDS today?

Name: Anonymous 2015-11-13 21:39

Purely Functional Data Structures
http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf
When a C programmer needs an efficient data structure for a particular problem, he or she can often simply look one up in any of a number of good textbooks or handbooks. Unfortunately, programmers in functional languages such as Standard ML or Haskell do not have this luxury. Although some data structures designed for imperative languages such as C can be quite easily adapted to a functional setting, most cannot, usually because they depend in crucial ways on assignments, which are disallowed, or at least discouraged, in functional languages. To address this imbalance, we describe several techniques for designing functional data structures, and numerous original data structures based on these techniques, including multiple variations of lists, queues, double-ended queues, and heaps, many supporting more exotic features such as random access or efficient catenation.

In addition, we expose the fundamental role of lazy evaluation in amortized functional data structures. Traditional methods of amortization break down when old versions of a data structure, not just the most recent, are available for further processing. This property is known as persistence, and is taken for granted in functional languages. On the surface, persistence and amortization appear to be incompatible, but we show how lazy evaluation can be used to resolve this conflict, yielding amortized data structures that are efficient even when used persistently. Turning this relationship between lazy evaluation and amortization around, the notion of amortization also provides the first practical techniques for analyzing the time requirements of non-trivial lazy programs.
 
Finally, our data structures offer numerous hints to programming language designers, illustrating the utility of combining strict and lazy evaluation in a single language, and providing non-trivial examples using polymorphic recursion and higher-order, recursive modules.
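The persistence the abstract takes for granted is easy to see even in C: if nodes are never mutated, "updating" a structure builds new nodes that share the untouched parts, and every old version remains valid. A minimal sketch with a cons list (the function name `cons` and the `Node` layout are illustrative, not from the thesis; reclamation is deliberately ignored here):

```c
#include <stdlib.h>

/* A persistent singly linked list: cells are never mutated after
 * construction, so older versions of a list stay usable forever. */
typedef struct Node {
    int head;
    const struct Node *tail;
} Node;

/* Build a new cell in front of an existing (shared) tail. */
const Node *cons(int head, const Node *tail) {
    Node *n = malloc(sizeof *n);
    n->head = head;
    n->tail = tail;
    return n;
}

/* Usage:
 *   const Node *xs = cons(1, cons(2, NULL));  // xs = [1,2]
 *   const Node *ys = cons(0, xs);             // ys = [0,1,2], shares xs
 * xs is still [1,2] afterwards: that is persistence, bought with
 * structural sharing. Freeing shared cells safely is exactly the
 * problem discussed further down the thread. */
```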

Name: Anonymous 2015-11-22 10:58

>>23
Overhead is always a problem.
Do you live in a world where computer programs do not interact with humans in any way? Who gives a shit if the results come through 8µs faster, if it took an extra month of development time to remove the "overhead"? Those are the kinds of numbers we're talking about here.
Or are you facetiously agreeing, and saying that there is always overhead, including in things like clock speed and memory round-trip time, and that in the slightly bigger picture, GC overhead is negligible compared to disk and network delays?
Unless your domain is pure calculations on an embedded, OS-less device, you will have overhead.

Here we go again. GC is shit.
You have immutable trees in C. You are updating a 1MB tree. You end up with two 1MB trees because of your naive implementation of immutable trees. You decide to remove the memory overhead by sharing data between the old and new trees. However, you now need to keep track of which parts are OK to free because they belong solely to the old tree, and which parts are still referenced by the new tree. You decide to keep a tag on each tree node saying how many references there are to it. Congratulations, you have reinvented GC.
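The scheme that post describes, spelled out as a sketch (all the names here, `tree_retain`, `tree_release`, `tree_insert`, are illustrative): each node carries a count of its owners, "update" copies only the search path and shares every untouched subtree, and a node is freed when its count drops to zero, which is reference-counting GC by hand.

```c
#include <stdlib.h>

/* Reference-counted immutable binary tree with structural sharing. */
typedef struct Tree {
    int key;
    int refs;                   /* how many owners point at this node */
    struct Tree *left, *right;
} Tree;

Tree *tree_retain(Tree *t) {
    if (t) t->refs++;
    return t;
}

void tree_release(Tree *t) {
    if (t && --t->refs == 0) {  /* last owner gone: free this node */
        tree_release(t->left);  /* and drop its claim on the children */
        tree_release(t->right);
        free(t);
    }
}

/* Fresh node owning one reference to each child. */
Tree *tree_node(int key, Tree *left, Tree *right) {
    Tree *t = malloc(sizeof *t);
    t->key = key;
    t->refs = 1;
    t->left = tree_retain(left);
    t->right = tree_retain(right);
    return t;
}

/* Path copying: only the nodes on the search path are copied;
 * the old and new trees share everything else. Returns a tree
 * owned by the caller (release it when done). */
Tree *tree_insert(Tree *t, int key) {
    if (!t) return tree_node(key, NULL, NULL);
    if (key == t->key) return tree_retain(t);   /* no change */
    if (key < t->key) {
        Tree *l = tree_insert(t->left, key);
        Tree *r = tree_node(t->key, l, t->right);
        tree_release(l);        /* tree_node took its own reference */
        return r;
    } else {
        Tree *rsub = tree_insert(t->right, key);
        Tree *r = tree_node(t->key, t->left, rsub);
        tree_release(rsub);
        return r;
    }
}
```

After `t2 = tree_insert(t1, k)`, any subtree not on the path to `k` is shared, its `refs` bumped to 2, and releasing `t1` frees only the nodes `t2` doesn't reach. That bookkeeping is precisely what a garbage collector (or here, a hand-rolled refcounter with all its cycle and threading caveats) does for you.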
