
Have you read your PFDS today?

Name: Anonymous 2015-11-13 21:39

Purely Functional Data Structures
http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf
When a C programmer needs an efficient data structure for a particular problem, he or she can often simply look one up in any of a number of good textbooks or handbooks. Unfortunately, programmers in functional languages such as Standard ML or Haskell do not have this luxury. Although some data structures designed for imperative languages such as C can be quite easily adapted to a functional setting, most cannot, usually because they depend in crucial ways on assignments, which are disallowed, or at least discouraged, in functional languages. To address this imbalance, we describe several techniques for designing functional data structures, and numerous original data structures based on these techniques, including multiple variations of lists, queues, double-ended queues, and heaps, many supporting more exotic features such as random access or efficient catenation.

In addition, we expose the fundamental role of lazy evaluation in amortized functional data structures. Traditional methods of amortization break down when old versions of a data structure, not just the most recent, are available for further processing. This property is known as persistence, and is taken for granted in functional languages. On the surface, persistence and amortization appear to be incompatible, but we show how lazy evaluation can be used to resolve this conflict, yielding amortized data structures that are efficient even when used persistently. Turning this relationship between lazy evaluation and amortization around, the notion of amortization also provides the first practical techniques for analyzing the time requirements of non-trivial lazy programs.
 
Finally, our data structures offer numerous hints to programming language designers, illustrating the utility of combining strict and lazy evaluation in a single language, and providing non-trivial examples using polymorphic recursion and higher-order, recursive modules.
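
For the unconvinced, here is a rough sketch of the kind of structure the thesis is about: a banker's queue, where the lazily suspended reverse of the rear list is exactly what lets the amortized O(1) bounds survive persistent use. Illustrative Haskell, not code lifted from the thesis:

-- Banker's queue: amortized O(1) snoc/head/tail, even when old versions
-- of the queue are kept around and reused (persistence).
-- Haskell lists are lazy, so the (f ++ reverse r) below is a suspension
-- whose cost is shared by the operations that eventually force it.
data Queue a = Queue Int [a] Int [a]   -- front length, front, rear length, rear

empty :: Queue a
empty = Queue 0 [] 0 []

-- Invariant: the rear is never longer than the front.
check :: Queue a -> Queue a
check q@(Queue lf f lr r)
  | lr <= lf  = q
  | otherwise = Queue (lf + lr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lf f lr r) x = check (Queue lf f (lr + 1) (x : r))

headQ :: Queue a -> a
headQ (Queue _ (x:_) _ _) = x
headQ _                   = error "empty queue"

tailQ :: Queue a -> Queue a
tailQ (Queue lf (_:f) lr r) = check (Queue (lf - 1) f lr r)
tailQ _                     = error "empty queue"

In a strict language you have to introduce the suspensions explicitly; in Haskell the lazy lists hand them to you, which is the thesis's point about combining strict and lazy evaluation in one language.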

Name: Anonymous 2015-11-13 21:56

uh no im not really into pokemon

Name: Anonymous 2015-11-14 7:21

Name: Anonymous 2015-11-15 12:30

Still have to pay that logarithmic penalty, which can mean a double or triple slowdown.

Name: Anonymous 2015-11-15 13:57

The logarithmic slowdown is not a "penalty"; it's a crown of thorns that keeps one vigilant. Notice how mutists are always slovenly and unkempt?

Name: Anonymous 2015-11-15 22:46

>>4
Still faster than maintaining undo/redo lists or serializing snapshots, as imperative coders try to do.
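
For what it's worth, the undo list falls out of persistence for free: keeping old versions is just keeping old references, and the unchanged parts are shared rather than copied. A tiny Haskell sketch (the History type and its field names are made up for illustration):

import qualified Data.Map as Map

-- With a persistent map, "undo" is just holding on to the previous version.
data History k v = History { current :: Map.Map k v, past :: [Map.Map k v] }

set :: Ord k => k -> v -> History k v -> History k v
set k v (History cur old) = History (Map.insert k v cur) (cur : old)

undo :: History k v -> History k v
undo h@(History _ [])        = h
undo (History _ (prev:rest)) = History prev rest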

Name: Anonymous 2015-11-16 5:31

>>6
Except they almost never need to do that, and when they do, they can just adopt a functional style within their multiparadigm language of choice.

Name: Anonymous 2015-11-16 19:27

>>7
Sure, if they want to reimplement everything that uses the update-in-place data structure that is at the core of their program (if it wasn't core, they wouldn't need undo/redo).

Alternatively they try to make their program multithreaded and end up with a blob of shit resembling an update-in-place data structure heavily guarded by mutex locks.

IHBT

Name: Anonymous 2015-11-17 0:12

>>8
One can allocate new nodes for a data structure in any language.

Name: Anonymous 2015-11-17 3:04

>>7
They also have to rewrite the memory management system if they hope to reach the performance of an FP solution. C isn't built for this.

Name: Anonymous 2015-11-17 18:29

Write me a purely functional, recursively defined, lazily evaluated red black tree, /prog/.
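
One possible answer, roughly following Okasaki's well-known formulation (insertion only; deletion is famously several times longer). In Haskell the lazy evaluation comes with the language, so there is nothing extra to write for it. A sketch, not production code:

data Color = R | B
data RBTree a = E | T Color (RBTree a) a (RBTree a)

member :: Ord a => a -> RBTree a -> Bool
member _ E = False
member x (T _ l y r)
  | x < y     = member x l
  | x > y     = member x r
  | otherwise = True

insert :: Ord a => a -> RBTree a -> RBTree a
insert x t = blacken (ins t)
  where
    ins E = T R E x E
    ins s@(T c l y r)
      | x < y     = balance c (ins l) y r
      | x > y     = balance c l y (ins r)
      | otherwise = s
    blacken (T _ l y r) = T B l y r
    blacken E           = E

-- The four rotations that restore the red-black invariants after insertion.
balance :: Color -> RBTree a -> a -> RBTree a -> RBTree a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance c l x r                     = T c l x r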

Name: Anonymous 2015-11-17 19:29

>>9
Everything using the data structure, dumbass.
Every insert(tree, k, v) now must become tree = insert(tree, k, v) (along with all the manual memory management involved, if applicable), and then the same code transformation must happen at every one of those call sites to avoid exactly the same problems at a slightly different level.

Name: Anonymous 2015-11-18 19:49

>>10
>hope to reach the performance of an FP solution

Ahahahaha.

Name: Anonymous 2015-11-18 23:58

>>13
In my day, people used to put sleep in their code for that.

Name: Anonymous 2015-11-19 5:56

>>13
I will only say: attempt it and see for yourself. C easily turns to molasses once you start pushing complexity, unless you rewrite from scratch everything you'd get for free in other languages, including memory management.

Name: Anonymous 2015-11-19 6:01

>>15
But I like programming in portable assembly code! Only lamerz don't invent here!

Name: Anonymous 2015-11-19 8:24

>>15
C encourages the programmer to stay within reasonable constraints for performance. Haskell encourages the programmer to avoid thinking about performance, and forces a model that incurs overhead, while the programmer hopes the compiler is clever enough to optimize away the overhead, and at the end of the day it turns out it isn't.

If you want a thread-safe malloc you can write your own. You can use pure functional data structures in C by just allocating things and not modifying them. And in parts of the program where a different model is more useful, you can do that instead. It's not a big deal to go between the two. They can, and should, coexist.

Name: Anonymous 2015-11-19 9:00

>>17
malloc/free is dog slow for functional use compared to garbage collectors. But again, your response as a C programmer is "Write everything you need from scratch in low-level code" instead of "Reuse tools that are appropriate and high-performance for your purpose".

Sorry, but I'm an adult who programs to get things done, not a kid who's just tinkering around for its own sake. These are solved problems.

Name: Anonymous 2015-11-19 19:42

>>17
>the programmer hopes the compiler is clever enough to optimize away the overhead, and at the end of the day it turns out it isn't.
99% of the time the overhead doesn't matter, because on a computer capable of running an OS, you don't need to care. (Embedded applications, by your logic, will do better to be entirely in ASM rather than C, because of performance.) If the overhead does become a problem, you can tune the fuck out of it[1].

>If you want a thread-safe malloc you can write your own.
Sure, just bang one out over the weekend. It's not like there are many malloc implementations already.

>You can use pure functional data structures in C by just allocating things and not modifying them.
That's immutable, not functional. And while you're doing that, you can enjoy writing your own garbage collector to go with it, and for every new allocation you can convince yourself you're doing it for performance.

Sometimes, I don't believe that people can believe what they are saying. This is one of those times, and you are one of those people.

[1] http://book.realworldhaskell.org/read/profiling-and-optimization.html
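
For the skeptics, the kind of rewrite [1] walks through boils down to things like the following (a toy version of my own, not the chapter's exact code): the naive mean traverses the list twice, so the whole thing is retained in memory and chews through thunks, while the strict single-pass fold runs in constant space and, with -O2, compiles to a tight loop.

{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- Naive: two traversals, whole list retained, thunks pile up.
meanNaive :: [Double] -> Double
meanNaive xs = sum xs / fromIntegral (length xs)

-- Tuned: one strict pass; GHC with -O2 unboxes the accumulators.
mean :: [Double] -> Double
mean xs = s / fromIntegral n
  where
    (s, n)              = foldl' step (0, 0 :: Int) xs
    step (!acc, !len) x = (acc + x, len + 1)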

Name: Anonymous 2015-11-20 18:37

>>17,19
>the programmer hopes the compiler is clever enough to optimize away the overhead, and at the end of the day it turns out it isn't.
You can get mad dosh for making problems from thin air. Read Machiavelli some time.

Look at all the cash generated by null-terminated strings. They're slow, hard to use safely, need escaping when including binary data, make algorithms that need a length require an extra scan, and scanning for the length is not only O(n), but thrashes your cache. People get grants and sell hardware for making it faster to scan for a 0 instead of, you know, including the length.

Name: Anonymous 2015-11-20 18:47

Assembly language is my mother tongue. It was trivial to write an ASM
bootloader when I had the ATA/ATAPI algorithm.

What you don't understand is that assembly language is my mother tongue.

You guys are young babies. I was your age when I was your age and now I have
advanced really really far and have divine intellect.

Name: Anonymous 2015-11-21 3:02

The only use for Assembly is to write Scheme interpreters.

Name: Anonymous 2015-11-21 23:52

>>18
>malloc/free is dog slow for functional use compared to garbage collectors.
GC is shit. Therefore functional is shit if you insist it depends on it.

>Sorry, but I'm an adult who programs to get things done, not a kid who's just tinkering around for its own sake.
So the reason you incur overhead that won't be optimized away is because you have to get things done.

>These are solved problems.
I'm not impressed by the solutions you are presenting me with.

>>19
>If the overhead does become a problem
Overhead is always a problem. Once you've really realized the potential of a machine, it's hard to have patience for overhead, especially when the benefit is just adhering to someone's preferred style.

>And while you're doing that, you can enjoy writing your own garbage collector to go with it,
Here we go again. GC is shit. If you can't optimize it away completely at compile time, your program is a product of bad design.

>>20
Nothing is stopping you from writing a string library that uses lengths.

Name: Anonymous 2015-11-22 9:17

>>23
>I'm not impressed by the solutions you are presenting me with.
Because you're a fucking tool who doesn't grasp the benefits of actual advancement in both productivity & runtime speed, since you only play with toy problems.

Name: Anonymous 2015-11-22 10:58

>>23
>Overhead is always a problem.
Do you live in a world where computer programs do not interact with humans in any way? Who gives a shit if the results come through 8µs faster, if it took an extra month of development time to remove the ``overhead''? Those are the kinds of numbers we're talking about here.
Or are you facetiously agreeing, and saying that there is always overhead, including in things like clock speed and memory round-trip time, and that in the slightly bigger picture, GC overhead is negligible compared to disk and network delays?
Unless your domain is pure calculations on an embedded, OS-less device, you will have overhead.

>Here we go again. GC is shit.
You have immutable trees in C. You are updating a 1MB tree. You end up with two 1MB trees because of your naive implementation of your immutable trees. You decide to remove the memory overhead, by sharing data between trees old and new. However, you now need to keep track of which parts are OK to free because they belong solely to the old tree, and which parts are referenced by the new tree. You decide to keep a tag on each tree node saying how many references there are to it. Congratulations, you have reinvented GC.
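
Sketched from the functional-language side, where the runtime's GC does the "which nodes are still referenced" bookkeeping for you: an update copies only the path from the root to the changed node, and everything off that path is physically the same heap object in both versions. Illustrative Haskell, assuming a plain unbalanced search tree:

data Tree k v = Leaf | Node (Tree k v) k v (Tree k v)

-- Path copying: only the nodes on the way down are rebuilt; the
-- untouched subtrees are shared between the old and new versions.
insert :: Ord k => k -> v -> Tree k v -> Tree k v
insert k v Leaf = Node Leaf k v Leaf
insert k v (Node l k' v' r)
  | k < k'    = Node (insert k v l) k' v' r
  | k > k'    = Node l k' v' (insert k v r)
  | otherwise = Node l k' v r

-- Both versions stay usable; nodes reachable only from 'old' are freed
-- by the GC when 'old' itself becomes unreachable, with no refcount tags.
example :: (Tree Int String, Tree Int String)
example = (old, new)
  where
    old = foldr (\i t -> insert i (show i) t) Leaf [1 .. 1000]
    new = insert 42 "updated" old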

Name: Anonymous 2015-11-22 11:11

>>25
>Congratulations, you have reinvented GC.
And the worst and slowest variant of GC at that.

Name: Anonymous 2015-11-22 11:13

>>1
Weird, I read the first chapter of that book yesterday, and now I see a thread on /prog/ about it. What does it all mean?

Name: Anonymous 2015-11-22 17:15

>>27
This thread is a week old though.

Name: Anonymous 2015-11-23 8:39

>>24
I get what I need to get done and I get better results than I ever would using your mountain of crap that's written in C anyways. Haskell is not advanced. Don't kid yourself.

>>25
Haskell in its current form will incur overhead, and sometimes this overhead, whether it be time or memory, is enough to make your program useless for its purpose. And even if you have a fast computer with lots of memory and you are able to do what is required with a language that introduces overhead, aren't you curious to see what you could do if it could work 100 times faster and operate in virtual memory on datasets that were 100 times larger? It increases your capabilities which then inspire developments you may have never considered had you stayed on a slow platform that simulated using computers from 10 years ago.

>Congratulations, you have reinvented GC.
You have a very narrow idea in your mind of a shared data structure. I bet you assume I use malloc and free as well. And you draw data structures with nodes and arrows like you're in elementary school.

Name: Anonymous 2015-11-23 11:41

>>29
>I get what I need to get done

As I said, toys. Any purely fixed-function pipeline where the human can simply optimize a single static chain of execution is not an interesting or difficult problem. And even static pipelines can have different optimal performance characteristics depending on the runtime nature of the data being processed.

But even in these simple cases, it's stupid that you spend your time wrangling bytes instead of teaching the machine how to do it for you. Have fun reinventing the wheel every single time.

Name: Anonymous 2015-11-23 19:42

>>29
>aren't you curious to see what you could do if it could work 100 times faster and operate in virtual memory on datasets that were 100 times larger?
I might be curious if I had a dataset 100 times larger. As they say, YAGNI.
Read to the bottom of my link in >>19 where the author rewrote a chunk of Haskell until the generated ASM was as good as hand-written. The original code can't have even taken 2 minutes to write.
When it doesn't matter, ``overhead'' from not using C is negligible. When it matters, you can make the overhead disappear.

>You have a very narrow idea in your mind of a shared data structure. I bet you assume I use malloc and free as well. And you draw data structures with nodes and arrows like you're in elementary school.
This is an amusingly substanceless rebuttal. If you are so enlightened, would you care to sketch out an immutable, shared tree, with minimal memory overhead, and without nodes and branches? Or are you saving that for your master's thesis? Or HIBT?

Name: Anonymous 2015-11-25 9:42

>>30
>But even in these simple cases, it's stupid that you spend your time wrangling bytes instead of teaching the machine how to do it for you. Have fun reinventing the wheel every single time.
You assume too much about me. 60% of my time is devoted to metalinguistic abstraction. The difference between me and you is you depend on prepackaged abstractions that fit one and only one paradigm and introduce overhead, while I have the freedom to engineer my abstractions and how they target the machine I'm using. You force yourself to choose from a limited set of abstractions while I'm free to invent my own.

>>31
>When it matters, you can make the overhead disappear.
In theory yes, but in practice no.

>This is an amusingly substanceless rebuttal. If you are so enlightened, would you care to sketch out an immutable, shared tree, with minimal memory overhead, and without nodes and branches? Or are you saving that for your master's thesis? Or HIBT?
Using garbage collected nodes allows you to use a shared data structure with the most generic user interface. All your program has to worry about is managing references, which would be a concern in C using a gc library or reference counting, and built into the language in others. If you restrict the user interface, you can do better. If you really study how your program is utilizing a shared data structure, you'll likely find the interface is overkill, and you can manage with a substitute in which lifetimes of objects can be anticipated. You could say I use these concepts but then optimize their use away at compile time.

Name: Anonymous 2015-11-25 11:46

>dubs

Name: Anonymous 2015-11-25 17:00

>>32
You know nothing, Jon Snow.

Working in Lisp lets you work with true abstraction at the [meta-]expression/declaration level all the way down to assembly code, and all the way up to however high you care to get, without writing fucking inline asm. It's a far more integrated system from top to bottom than any of the C tripe you promote. There is no single fixed abstraction, there is just the wild ability to create abstraction.

>You force yourself to choose from a limited set of abstractions while I'm free to invent my own.

lol, again, have fun playing with bytes.

>If you really study how your program is utilizing a shared data structure, you'll likely find the interface is overkill...
holy shit, you're insane. Are you the TempleOS guy?

Name: Anonymous 2015-11-26 9:02

>>34
>Working in Lisp lets you work with true abstraction at the [meta-]expression/declaration level all the way down to assembly code, and all the way up to however high you care to get, without writing fucking inline asm.
Sounds like the opposite of Lisp.

Name: Anonymous 2015-11-26 10:20

The true power of lisp is that whenever you need to call a function, you can just return() to it

Name: Anonymous 2015-11-26 12:58

>>33
Who are you quoting?

Name: Anonymous 2015-11-26 13:16

>>37
He's quoting >>32. Are you blind?

Name: Anonymous 2015-11-26 14:06

>>37
le who are you epic quoting xD

Name: Anonymous 2015-11-26 15:52

>>39
Satire is the lowest form of comedy and the lowest form of wit.

Name: Anonymous 2015-11-26 17:14

>>35
Lisp is the #1=(programmable . #1#) programming language. All is open for plasticity.

Name: Anonymous 2015-11-27 9:10

Have you read your PDFs today?

Name: Anonymous 2015-11-27 14:23

>>42
oh my gods, le who are le you le quoting??? XD????

Name: Anonymous 2015-11-27 15:33

>>43
>gods
What the shit? Get out of here you heathen scum.

Name: Anonymous 2015-11-27 16:01

>>43
Satire is the lowest form of comedy and the lowest form of wit.

Name: Anonymous 2015-11-28 13:52

>>44
Christianity is polytheistic too. They have 3 gods + lesser gods which are called "saints".

Name: Anonymous 2015-11-28 14:04

>>32
So what you've boiled it down to is "In my specific case, I invent limitations for myself so I can do more work in lieu of having the computer do it for me (as GC). Thus, universally, GC is shit."

Thanks for wasting my time.

Name: Anonymous 2015-11-28 15:41

>>46
Every Christian is a "saint", at least according to their actual book. I don't think they count as lesser gods, though.

Name: Anonymous 2015-11-28 15:48

>>48
Of course not. According to their actual book, they are all sinners, because all people came short of God's standard.

Name: Anonymous 2015-11-28 16:32

>>48
Saints count as lesser gods not by the book, but in Christian practice. E.g. Santa Claus is Saint Nicholas and is regarded as a lesser god who brings presents to children every New Year. In Catalonia, they celebrate Saint George's day on 23rd April because St. George is supposedly the patron god of Catalonia. In Scandinavia, they celebrate Saint Lucia's day on the 13th of December, asking that lesser goddess to give light in the coming winter days. Etc etc, there are many lesser gods disguised as saints in the Christian tradition, read here:

http://www.americancatholic.org/Features/Saints/patrons.aspx

Name: Anonymous 2015-11-28 16:50

Also count the pictures and statues of dudes in any church. Funny for a religion that says "don't worship idols".

Name: Anonymous 2015-11-28 18:07

catholics are pagans, dolt

Name: Anonymous 2015-11-28 18:18

niggers

Name: Anonymous 2015-11-28 19:11

>>50
s/Christian/Catholic/, confessionfag.

Name: Anonymous 2015-11-28 19:15

The real question is, was the roman catholic church merely a tool of empires, or were the empires merely tools of that church? </tinfoilhat>

Name: Anonymous 2015-11-28 19:45

>>54
Even non-Catholics have 3 gods: the Father God, the Son God, and the Holy Spirit God. That makes them pagans, too, or at least polytheists.

Name: Anonymous 2015-11-28 20:04

Is God immutable?

Name: Anonymous 2015-11-28 20:20

>>57
Why don't you ask Him? Oh, I forgot, he no longer talks to anyone, despite all the magic books being filled with claims of God talking to people and giving them visions and messages.

Name: Anonymous 2015-11-28 20:42

>>58
All Abrahamic religions are really supremacist and hate-inciting. They are based on a notion of oneness and absolutism:

1) there is only one holy land (Palestine), all other lands are inferior and God doesn't like them

2) there is only one holy tribe (Jews in Judaism and Christianity, Mohammed's tribe in Islam), all other people are inferior and God doesn't ever talk to them

3) there was only one period of time when God was giving His law to man (Judaists believe it's the time of Moses, Christians - the time of Christ, Muslims - the time of Mohammed). Never before and never after did or will God ever impart His holy law to mankind

4) even when God did give His law unto man, he only gave it to his select holy tribe in His select holy land. The other 90% of humanity is just too inferior to receive God's message: all the Eskimos and Aztecs and Incas and Sami and Zulu and Maori and Japanese and the rest just weren't entitled to a holy prophet or at least a holy scripture.

5) there is only one God and anyone who didn't know about Him (due e.g. to living on a remote island around 50 CE and not meeting any missionaries) will go to hell with no chances of salvation; wrong place and time to be born, suckers!

Name: Anonymous 2015-11-28 20:48

>1) there is only one holy land (Palestine)
U MENA ISRAEL

Name: Anonymous 2015-11-29 21:40

>>34
>Working in Lisp lets you work with true abstraction at the [meta-]expression/declaration level all the way down to assembly code, and all the way up to however high you care to get, without writing fucking inline asm.

If you don't deal with inline asm, you are trusting someone else's code to generate asm and you have no ability to improve their generators or add your own abstractions that translate to asm.

>lol, again, have fun playing with bytes.
I write programs to play with bytes for me.

>>47
It's actually the opposite. You've imposed the artificial constraint that your program must use GC, when it can get by without it. Adding that constraint leads to suboptimal programs. I still have the computer do work for me, but most of that is done at compile time.

Name: Anonymous 2015-11-29 22:19

>>61
>you are trusting someone else's code to generate asm
You are also trusting someone else's hardware implementation to execute your asm (see HCF, FDIV), so where do you draw the line?

>you have no ability to improve their generators or add your own abstractions that translate to asm.
One word: Lisp macros. Thread over. Hell, even LLVM and its plugin architecture fit your requirements.

>It's actually the opposite.
Don't you "no u" me. I never said it must use GC, I said it's much easier and takes a huge design load off, to which you replied "as long as I write my programs in this severely constrained way, everything will be fine".

Name: Anonymous 2015-11-29 23:14

>>62
>You are also trusting someone else's hardware implementation to execute your asm (see HCF, FDIV), so where do you draw the line?
If I could fabricate my own hardware, I'd consider reviewing and modifying it. The difference is that all it takes for me to generate asm is to learn about it and write programs for it.

>One word: Lisp macros
You are correct. I'm also a lispppeerrr. However, since I'm not completely satisfied with any lisp implementation, I've taken my own route.

>everything will be fine
Everything could be better. Rather than choosing a simple inefficient design, I write programs that calculate complicated efficient designs starting from simpler ideas.

Name: Anonymous 2015-11-30 6:19

>>61
>If you don't deal with inline asm, you are trusting someone else's code to generate asm and you have no ability to improve their generators or add your own abstractions that translate to asm.

Yeah, you don't Lisp at all, bro. Lisp is one of the only languages where you can modify the actual fucking compiler at either compile time or runtime. Piss off with your shit C assumptions, and quit fucking telling me that my superior language has the shortcomings of yours. Wallow in your garbage, fuck off, and die. You have no fucking clue what you're talking about and are only making an embarrassment of yourself.

Name: Anonymous 2015-11-30 16:53

>>64
On some hardware Lisp machines, you could even reprogram the CPU's microcode at runtime, all in Lisp.

Name: Anonymous 2015-11-30 17:20

>>65
Check this video if you wanna date gay Jews!
https://youtu.be/DXYaIxxx9KY

Name: Anonymous 2015-12-01 1:23

>>64
Your hostility is misplaced. But if you were really a lispperrr, you would know nothing I said could have implied lisp had shortcomings, since you can inline asm in lisp.

http://www.pvk.ca/Blog/2014/08/16/how-to-define-new-intrinsics-in-sbcl/

I only avoid lisp because of the runtime.

Name: Anonymous 2015-12-01 13:44

>>67
This le quote here:

>If you don't deal with inline asm, you are trusting someone else's code to generate asm and you have no ability to improve their generators or add your own abstractions that translate to asm.

That's a shortcoming, and one that Lisp doesn't have. You aren't bound to what the compiler does out of the box, you can make it do whatever you want, at any level.

You don't have to bypass the compiler by generating your own assembly. That's the only tool you keep bringing up, and it's a C-focused shit view. The Lisp compiler is in Lisp in every reasonable implementation, and is fully modifiable. Compiler macros are standard (no, these aren't regular macros). The MOP lets you change the entire OO system. And that's without even getting into the implementation's compiler itself.

Name: Anonymous 2015-12-01 19:35

>>63
>since I'm not completely satisfied with any lisp implementation, I've taken my own route.
This is exactly what lisp excels at. At the level you appear to be working, there is no reason to use anything other than lisp, as >>34 says. It's a traceable tower of abstraction from macros down to asm if you write it to be -- and yes, you can write the whole thing yourself without relying on someone else's code generator. It has been done, and done to death.

>I write programs that calculate complicated efficient designs starting from simpler ideas.
Is that your job then? Or a hobby? As we discussed in >>31,32 - is the reason you are programming to implement the ideas you are talking about? Because if so, again, you are completely generalising to "GC is shit" and "if you are using GC, you aren't OMG OPTIMISING enough" because that is the domain you are working in. Everybody else is working on something else, and GC or otherwise is just a detail of implementation of the underlying system. You may well have only convinced yourself that bare-metal asm generation is the only way to go because of your current goals.

Of course, if that is not the case, feel free to elaborate on how GC and no GC are relevant to metalinguistic abstraction for anything other than a compiler.

Name: Anonymous 2015-12-03 7:03

>>68
Stop talking to me like I'm a C stack boy. You have no idea who you're talking to, kid. I've been matching parens since before you were born.

But inlining asm fits the spirit of lisp. Why shouldn't I be able to write a loop macro that can translate a certain class of expressions into SIMD instructions, and use lisp itself to do the translation? It's perfectly built for it.

>MOP
Overengineered crap. Thanks for reminding me why I've abandoned all known lisp implementations.

>>69
>At the level you appear to be working, there is no reason to use anything other than lisp, as >>34 says. It's a traceable tower of abstraction from macros down to asm if you write it to be -- and yes, you can write the whole thing yourself without relying on someone else's code generator.
Lisp implementations today don't produce programs that can be separated from their expensive runtime environments.

>and GC or otherwise is just a detail of implementation of the underlying system.
An underlying detail that makes it run slower than it needs to be. Using such a flexible interface at runtime is convenient but it's never optimal. With compile-time abstractions you can get optimality while still maintaining a simple model of how the program works in source form. GC is the lazy way out and its results show.

Name: Anonymous 2015-12-03 8:32

>>70
GC isn't slow. And lol at "optimality" as there is no such thing when generating code, because it depends on runtime circumstance.

Name: Anonymous 2015-12-03 9:22

>>71
There is something eerily familiar about that statement....

Name: Anonymous 2015-12-03 20:54

>>70
>Lisp implementations today don't produce programs that can be separated from their expensive runtime environments.
I don't know what part you aren't getting.
Your program. Write it in lisp.
Your program will mostly use code-generating functions you defined yourself.
You will define functions and macros using these code-generating functions. They could take the names of standard library functions and macros, if you like.
You will write your lisp program as anyone else does: using a bunch of abstractions. It will read like any other lisp program. It will run like any other lisp program.
Your program will output another program. You can run that other program.
The program produced won't rely on a lisp runtime or GC or anything, because you wrote the entire stack.
Lisp implementations produce whatever the hell you tell them to.

Keep re-reading >>71 as well.

Name: Anonymous 2015-12-05 8:34

>>73
Not many people are working on that stack, but that is what I'm getting at and what I'm looking for. No, >>71 is shit.

>>71
Just because there is noise at runtime doesn't mean you can't produce programs that will, on average, greatly outperform other programs. The statement GC is or is not slow depends greatly on how you are using it and what the alternatives are. But if you can avoid it altogether, that's one less thing your program needs to spend time doing, and if you take more control over how things are allocated in memory you can arrange objects compactly to save memory and adjacently to improve cache performance.

Name: Anonymous 2015-12-05 8:35

Wait, who the fuck am I to know who is and isn't working on a lisp stack. Nevermind.

Name: Anonymous 2015-12-05 11:54

>>75
It's going to be either Lisp, Haskell or an ML in that realm. There's also C++ if you are lucky enough to be sponsored by Apple (LLVM).

Name: Anonymous 2015-12-05 12:03

>>74
>The statement GC is or is not slow depends greatly on how you are using it and what the alternatives are.

In other words, the statement "GC is slow" (or "makes it run slower than it needs to" and all its variations) is incorrect.

>But if you can avoid it altogether, that's one less thing your program needs to spend time doing

It's not something the programmer spends time doing, so it's moot. And because it can be as fast or faster at runtime than manual management, and is definitely faster for development time, you're a fucking pathetic idiot for discarding it.

>and if you take more control over how things are allocated in memory you can arrange objects compactly to save memory and adjacently to improve cache performance.

You're wasting your goddamn time. Your life is useless, because you throw it at shit like this. People who are way smarter than you have already solved these problems, but you'll dick around with slower shit, spending months fiddling with bytes and address lines, instead of taking advantage of stuff that's better.

Again, you're a child playing with toys. Fuck off, because you have nothing mature to say.

Name: Anonymous 2015-12-05 12:30

This is why I hate the computing industry. Nothing has changed since the 1950s, when there was nothing but bits, and it was the human's job to manually route them around. People like >>74-chan are actually happy with that. That's either flagrant masochism, or ignorance about the potential of computing. Probably both.

There are the occasional people here & there who actually do perceive how much things suck with computing. However, they either don't know any better and can do nothing but just reinvent the wheel; or they get so lost up their own meta-rectum that they never actually get around to anything useful.

Computing remains at a stasis where humans serve the details of the machine, instead of the centuries-old vision of machines taking care of the details of humans.

Precisely and absolutely because of people like >>74 and Cudder.

Name: Anonymous 2015-12-05 12:47

The anti-GC bait is so strong... it's because there are actually a lot of stupid programmers who believe this.

Name: Anonymous 2015-12-05 13:48

>>25
>Do you live in a world where computer programs do not interact with humans in any way? Who gives a shit if the results come through 8µs faster
I live in a world where people increasingly (in fact, the majority now) use battery-powered devices - extra overhead or bad performance directly translates to less battery time.

Name: Anonymous 2015-12-05 14:07

>>79
This is not an imageboard, please take your imageboard catchwords somewhere else. If you don't know what I'm talking about, please reassess the size and quality of your vocabulary and try not to sound like a meme-spewing teenager next time.

Name: Anonymous 2015-12-05 14:18

>>81
what the fuck are you babbling about you insane retard

Name: Anonymous 2015-12-05 14:18

>>81
Fuck all the way off.

Name: Anonymous 2015-12-05 21:03

>>80
You live in a world where 99.99% of battery-powered devices are executing Objective-C, Java, or Javascript.
I am excluding handheld game consoles because you are clearly not developing for any.

Name: Anonymous 2015-12-05 21:28

>>84
>You live in a world where 99.99% of battery-powered devices are executing Objective-C, Java, or Javascript.
Which is obviously a bad thing.

Name: Anonymous 2015-12-05 21:41

>>84
You live in a world with niggers and sandniggers. Doesn't mean I'll hug niggers for a living.

Name: Anonymous 2015-12-05 22:21

>>82,83
LLLLLLLEEEEEEEEEEEELLLLLLLLLL E/G/IN /B/AIT MEME /G/RO, or at least that's what I think >>81-kun (aka lel-kunt) is trying to say.

Name: Anonymous 2015-12-05 22:35

>>85
Yes, but that also means the battery savings from such a minor optimisation are negligible compared to the rest of the shit being run.

Name: Anonymous 2015-12-05 23:38

I fucking mean that those environments are already optimized for battery use and anything you fucklenuggets try to hack doesn't make a pissing shit of a difference.

The major power drains are covered by the OS/driver/JIT.

Name: Anonymous 2015-12-05 23:40

(and I'm drunk, so spelling/grammar nazifags can fuck right the fuck off. all the way)

Name: Anonymous 2015-12-05 23:55

Don't justify your idiocy with that LOL I'M SO WASTEEED LMAOOO XD I'M SUCH A COOL BOY XDD act. These devices are severely constrained in what they are able to do and careless apping is why a 1GHz dual-core ARM CPU is barely able to put up with displaying pictures of your balding face without lagging like the piece of shit it is. It's the same reason your battery rarely lasts more than 30 hours.

Apping like a currynigger because of ``time constraints'' is just as bad as Cudder's bitfucking.

Name: Anonymous 2015-12-06 0:01

>>91
I'm just talking about spelling, not content, you piece of dick.

Sure, you can fuck your battery life. But any decisions about the algorithmic workload of your APP, like how many passes you run over your data or whatever, are best handled in a fucking high fucking level fucking language, not by dicking around with bytes and SEPPLES.

Name: Anonymous 2015-12-06 7:31

>>77

>In other words, the statement "GC is slow" (or "makes it run slower than it needs to" and all its variations) is incorrect.

Only when there are no faster alternatives, which is never the case unless you are unwilling to consider faster alternatives.

>It's not something the programmer spends time doing, so it's moot.
It leads to a suboptimal program. It's crap.

>And because it can be as fast or faster at runtime than manual management,
There are more ways to manage memory than malloc and free, like memory pools. You can allocate and free memory in constant time, in a few assembly instructions, given some additional assumptions.

>and is definitely faster for development time,
Not if you have meta-linguistic abstraction to do this work for you at compile time.

>you're a fucking pathetic idiot for discarding it.
Nope, a rational being.

>You're wasting your goddamn time.
No, I write programs to do this for me. The thing is I try to get things right at compile time rather than use some resource-intensive overkill monstrosity that figures obvious shit out at runtime.

>Your life is useless, because you throw it at shit like this.
The value of my life is independent of these pursuits, and your judgement of it has no bearing for me.

>People who are way smarter than you have already solved these problems, but you'll dick around with slower shit, spending months fiddling with bytes and address lines, instead of taking advantage of stuff that's better.
Their solutions are utter shit. And trying to optimize their shit is too fucking hard. I'm starting with bit fiddling and finding the right set of abstractions that will trivially lead to optimal machine code.

>Again, you're a child playing with toys. Fuck off, because you have nothing mature to say.
No, the toys are Python, Ruby, JavaScript, Java, and Haskell. And I refuse to waste any more time playing with them.

Name: Anonymous 2015-12-06 8:31

GC may be faster than manual management, but it never consumes less memory, and in fact needs 4-5 times as much memory just to achieve the same speed.

Name: Anonymous 2015-12-06 11:08

>>93 translated:

I am cudder

Lol, look at me move bits!

No, j00r progarm is slower becuz hooman do better than masheen!

I am cudder!

Name: Anonymous 2015-12-06 11:57

>>93
>I write programs to do this for me.
While the rest of us use the programs that already exist to do this for us, and get on with our lives.

Name: Cudder !cXCudderUE 2015-12-06 13:57

>>78
>or ignorance about the potential of computing
or ignorance about the desire of HW manufacturers to sell product
FTFY

This "GC isn't slow" bullshit really feels like a conspiracy. It's not slow to you because you don't know any better and are accustomed to programs taking more resources than they should. You just haven't seen the true small and fast.

>>94
That's just a roundabout way of saying (like most other pro-GC articles) that the less you have to GC, the faster it is. Manual management is fast if you never have to free() either. Duh.

Many years ago a friend went to one of the many Java advocacy events Sun (this was really long ago) ran. I remember this clearly because he told me one of the funniest things was seeing, in one of the presentations about how GC is "low overhead", the extreme lag of the presenter's laptop --- which was running their latest idea at the time, a full Java-based OS. No wonder that didn't catch on...

Name: Anonymous 2015-12-06 14:19

>>97
fuck off namefag

Name: Anonymous 2015-12-06 21:24

>>97
I'm >>78, and I've done everything from bit-banging, cycle-counting, hex-entered machine code 35 years ago to distributed inference compilers today.

I have seen true small and fast. I have written true small and fast.

The circumstances of small and fast do not hold the same on modern CPU architectures, with modern connectivity, with support for modern hardware compatibility, with support for the variety of modern file formats & encodings, and with modern flexible behavioral expectations.

>That's just a roundabout way of saying (like most other pro-GC articles) that the less you have to GC, the faster it is.
I think you need to read >>94 again. It has nothing to do with GC frequency.

>Many years ago, Java...

Yeah, it was pretty bad back then. However, "then" is also not "now," not by a long shot.

Compiler & runtime environment tech has come a long way in recent years, especially due to the fact that regular machines now have the elbow room to analyze the code environment FAR better. The space and speed afforded to us in modern machines are well-used in making the resulting code faster and smaller. That is appropriate use of resources.

Name: Anonymous 2015-12-06 22:23

>>97
This "GC isn't slow" bullshit really feels like a conspiracy.
It's not slow when all you use is turdware that spends more time swapping than running.
A bigger problem is programs using so much memory in the first place that GC vs manual or sometimes even compiled vs interpreted is dwarfed by swap disk speed.

Name: Anonymous 2015-12-06 22:28

>>98
Fuck off back to /g/ where ``namefags'' are an actual problem.

Name: Anonymous 2015-12-07 3:36

>>97
An optimal GC will be as fast as manual memory management. Also, free() is basically free; it's just dropping an entry in a couple of tables.

Name: Cudder !cXCudderUE 2015-12-07 6:47

>An optimal GC will be as fast as manual memory management.
The problem is that "optimal GC" doesn't exist, just like the optimal cache replacement algorithm, or the optimally optimising compiler, or world peace for that matter...

Look at it another way: if optimal GC did exist, it would have to know enough about the program and its inputs and outputs that it would probably have enough intelligence to question its existence and the meaning of life.

Name: Anonymous 2015-12-07 8:43

>>96
I'm actually trying to advance the human race by making something better here, as opposed to building on top of a foundation of crap.

>>99
>The circumstances of small and fast do not hold the same on modern CPU architectures, with modern connectivity, with support for modern hardware compatibility, with support for the variety of modern file formats & encodings, and with modern flexible behavioral expectations.
That was ENTERPRISE Quality!

>Compiler & runtime environment tech has come a long way in recent years, especially due to the fact that regular machines now have the elbow room to analyze the code environment FAR better. The space and speed afforded to us in modern machines are well-used in making the resulting code faster and smaller. That is appropriate use of resources.
Something is wrong if it takes a supercomputer to do code analysis. Design the language so code analysis is easy. If certain methods of evaluation can be done using special-purpose instructions, let the programmer emphasize this so the computer isn't responsible for coming up with it on its own.

Name: Anonymous 2015-12-07 8:45

>>103
Optimal compilers work best for high level languages like Haskell.

Name: Anonymous 2015-12-07 13:55

>>104
So you don't want high-functioning software, and you don't want the computer to be smart enough to manage its own affairs.

Everything you say always points back to the fact that you want to be a slave to the machine, enjoying your waste of time in manually tracking bits and cycles.

You're nuts, and you're a blight on computing.

Name: Anonymous 2015-12-07 21:45

>>104
This motherfucker thinks the entirety of the past 50 years in computer science is ``a foundation of crap'' and that he can do better!!!

True FrozenVoid ``infinite compression'' levels of delusion.

Name: Anonymous 2015-12-07 23:21

>>106
>So you don't want high-functioning software, and you don't want the computer to be smart enough to manage its own affairs.
I want a better compiler, and that starts with a better design for a language. A design that can itself evolve to your needs. I want more control over the process because I know I can do better than some shitty Haskell implementation.

>Everything you say always points back to the fact that you want to be a slave to the machine, enjoying your waste of time in manually tracking bits and cycles.
You are enslaved to shit software. You have no ability to improve the software you use. Go write another raycaster in Python. Change the world with your program that depends on an interpreter that looks like a sophomore's programming project from 1980.

>>107
It's sad but it's true. And it's because of people like you, worshipping a mountain of crap and getting stuck in the mud, or should I say crap, instead of forging a new trail.

Name: Anonymous 2015-12-07 23:45

>>107
XML, Java, x86, Windows, JavaScript, PHP, Ruby, Lua, MongoDB, HTTP, 4chan and reddit were created in the past 50 years. Just because something is popular and has lots of momentum doesn't mean it's good.

Name: Anonymous 2015-12-08 4:31

>>107
Why do you believe that languages are exclusively "compiled or interpreted"? This feature is not inherent to a language. It is a property of the implementation.

Name: Anonymous 2015-12-08 6:07

>>110
While that may be true, it just happens that all implementations for the Python language are shit toy interpreters that could have been written in 1980, and shit C translators that generate source files so large they may as well be interpreters. There will never be a more advanced ``compiler'' for that language because people are satisfied with every implementation being slow, overhead-inducing shit. While a better compiler is possible, the people talented enough to produce one know their time is better spent on other languages and other problems.

Name: Cudder !cXCudderUE 2015-12-08 10:27

>>106
>and you don't want the computer to be smart enough to manage its own affairs.
This "smart" shit has to stop. We'll just lose control of the things we build if we make them any more intelligent. "Smart computers, stupid humans." It's already happening with these ultra-bloated ultra-abstracted systems that are so complex no single person really understands what they do. And the corporations are leveraging this immense complexity to spy on and track us, while most people remain unaware because it's all been hidden under the guise of "smart".

>that you want to be a slave to the machine
Ironic. If you make devices so "smart" they can manage themselves, they're going to start managing YOU. This is not a lame Soviet Russia joke. It's reality. How many people know about all the network traffic their smartphone/tablet/{insert your choice of locked-down user-hostile device} is making? Or what all the 100+ processes do? All they see is a shiny opaque box. They don't know what it's doing. Manufacturers don't want users to know. When it does something users don't want, they mostly shrug it off and keep going. That's the scary part. These (l)users are letting their devices control them, not the other way around.

How to fix this? Never let it do what you don't want. Make it do what you want. Don't passively let it do things that you should have control over. It's YOUR computer, YOUR choice, YOUR life.

Name: Anonymous 2015-12-08 12:38

>>112
Will to Power.

Name: Anonymous 2015-12-08 16:58

>>112
I hardly know what all the processes on a regular Linux box do. Everything is either very complex or insufficiently developed. I know that it can be a lot simpler. But we are living in a world where worst-is-best won. It's depressing.

Name: Anonymous 2015-12-08 20:00

>>112
What's worse is that these idiots are actually driving cars that are connected to the Internet and have been explicitly proved to be hackable. Deaths don't get much stupider than getting smashed into a tree because some 13-year-old Chinese kid got control of your car just to test his haxxor skills. Smart things indeed.

Name: Anonymous 2015-12-08 20:19

>>108
OK sure FV.

>>109
None of those are CS.

>>110
Whome are you quoting?
