We have all learned functional programming in Haskell, but there are more functional languages like Lisp, Scheme, ML, and Clean.
Why should we even bother to look further than Haskell?
- You want your programs to run faster.
- Monads drive you mad (what are they anyway? warm fuzzy things?).
- You need objects.
- You sometimes need a more powerful module system.
>>44 Please stop with this minimalist dubs bullshit. There used to be a time when dubs were eloquent works of art. Nowadays, a single "no" passes for dubs. It's deplorable, really.
>>36 They are recursive, and they can be tested for equivalence. Your unification algorithm just needs to support it, and Haskell is too lazy to do it.
Name:
Anonymous 2015-02-21 21:06
>>47 1. Demonstrate a type system with such a feature. 2. Haskell is too lazy to do it precisely because it's useless in practice.
Name:
Anonymous 2015-02-21 21:21
>>48 1. OCaml with -rectypes. 2. Nope. They just have a limited understanding of type theory and think what they've implemented is all there is.
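To make the exchange concrete, here is a minimal sketch in plain OCaml (no flags): recursive types are already expressible through explicit declarations, and the -rectypes flag additionally lets the checker accept equirecursive types that would otherwise be rejected, such as `fun x -> x x : ('a -> 'b as 'a) -> 'b`.

```ocaml
(* A recursive type declared explicitly: compiles with the stock
   compiler, no flags needed. With -rectypes, OCaml would also infer
   equirecursive types directly, e.g. fun x -> x x gets the type
   ('a -> 'b as 'a) -> 'b. *)
type 'a stream = Cons of 'a * (unit -> 'a stream)

let rec from n = Cons (n, fun () -> from (n + 1))

let head (Cons (x, _)) = x
let tail (Cons (_, t)) = t ()

(* The third element of the stream 0, 1, 2, ... is 2. *)
let () = assert (head (tail (tail (from 0))) = 2)
```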
On 64-bit systems, the size of OCaml arrays from the Array module is limited to 2^54 - 1 and on 32-bit systems the limit is 4,194,303. For arrays of float, the limit is 2 times smaller. In both cases the index is easily represented as an int, so there's no advantage in using int64 as an index.
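Those limits can be queried at runtime rather than memorized; a small sketch (the values printed depend on the platform's word size):

```ocaml
(* Print the platform's array limits; on a 64-bit system
   Sys.max_array_length is 2^54 - 1. *)
let () =
  Printf.printf "max_array_length      = %d\n" Sys.max_array_length;
  Printf.printf "max_floatarray_length = %d\n" Sys.max_floatarray_length
```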
Name:
Anonymous 2015-02-22 10:20
I have created huge systems in C++ using a combination of Generic Programming and Functional Programming meta-models. I am nowadays more of a Clojure + Scala + Elixir developer, but I welcome the (proposed) C++14 changes, which enable polymorphism even in the anonymous realm of functoids. C++ actually has a lot of expressivity — no, I didn't even mention performance — that most FP languages lack, thanks to its (admittedly ad hoc) notion of templates. So there are reasons other than performance, and even ABI compatibility, to use C++ before other (FP) languages.
C++ is indeed re-converging, from a hairy mishmash to a more streamlined FP+GP experience. Perhaps a bit too late? Well, I welcome the changes, and the quite quick uptake of the new standard features in the compilers (MSVC, GCC and Clang, specifically…)
I am at a loss when people claim that using functional techniques — such as HOFs and functional combinators — is simply an exercise and cannot be used for “real programming.” One should not reject that style of programming in C++ merely due to one's own shortcomings. I can assure you that functional programming in C++ is used in highly real systems. Or perhaps I have just been dreaming the two million lines of C++ code I have written since 1989, with heavy use of functional combinators and streams (such as Boost's iterator transformers, yielding streams or Haskell/Clojure-like lazy sequences).
Yes, you can create crappy, procedural monsters with C++. But you can write quite succinct FP+GP programs as well. And even more so with C++11, and with C++14.
I will now go back to my Scala hacking…
Name:
Anonymous 2015-02-22 10:42
Functional programming
After considering that most languages are in fact not at all functional, some language designers decided to find out what programming in a really functional language would be like. As you would expect, programming functional languages is mostly annoying and impractical. Almost no real-world problems can be usefully expressed as static, well-defined mappings from input values to output values. That being said, functional programming is not without merit and many languages have been designed to take advantage of a functional style of programming. What that means is finding a convenient way of isolating the functional parts of a program from the (actually interesting) non-functional parts. Languages like Haskell and Ocaml use this isolation as a means of making aggressive optimisation assumptions.
But this is lisp. We're very non-functional and very proud of it. To the extent that this isolation of side-effects is useful, lisp programmers can and do implement it with macros. The real purpose behind functional programming is to separate the functional description of what should happen from the mechanics of how it actually does happen. Lisp is definitely not functional, but, because of macros, there is no better platform or material for implementing functional languages than lisp.
You know a language is good when you can't have an array with 5 million elements in it.
Name:
Anonymous 2015-02-22 16:31
>>59 That would be a limitation of the runtime, not the language. If you want a big array, there's always, um, the Bigarray module.
Name:
Anonymous 2015-02-22 17:27
>>60 Is there another runtime for that language? If not, then it's effectively a limitation of the language.
And having separate modules for big arrays is really retarded and ad hoc, it's like some primitive negroid tribe that has numbers "one", "two" and "many".
Name:
Anonymous 2015-02-22 18:05
>>61 The BigArray module is part of OCaml's standard library, like the Array module. What more do you want? If you don't like the name, just write module Array = Bigarray at the top somewhere.
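For what it's worth, a sketch of the five-million-element case through Bigarray (bundled with the compiler; since OCaml 4.07 it is part of the standard library proper, before that it had to be linked as a separate library):

```ocaml
(* Bigarray data is allocated outside the regular OCaml heap, so it
   is not bounded by Sys.max_array_length and its payload is not
   scanned by the GC. *)
let big =
  Bigarray.Array1.create Bigarray.float64 Bigarray.c_layout 5_000_000

let () =
  Bigarray.Array1.set big 4_999_999 3.14;
  assert (Bigarray.Array1.get big 4_999_999 = 3.14)
```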
Name:
Anonymous 2015-02-22 18:26
>>61 Yes. Seeing as you're being obtuse, I'll let you find them yourself.
separate modules for big arrays is really retarded
Well, it's called Bigarray, but if you took half a second to google "ocaml bigarray" and read the documentation, you'd find there are many differences, not limited to the size of the array. If you're serious that you need garbage collection on arrays of 5 million elements, then my condolences go out to your colleagues.
Name:
Anonymous 2015-02-22 19:33
>>54 OCaml may be shit, but that doesn't change that it is a perfectly valid example of a type system that supports recursive types. Why do you feel like you have to find a way to shoot down the example?
Note that the variable str is immutable in the above; it's the string that it's bound to that's mutable! Mutable strings matter for the efficiency of so many programs that an OCaml string is a flat, mutable byte array: each character can be read and written in place.
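A sketch of that in-place mutation. (In OCaml ≥ 4.06, `string` is immutable by default and `bytes` is the mutable counterpart, so the modern spelling goes through the Bytes module; at the time of this thread `String.set` worked directly.)

```ocaml
(* The binding b is immutable; the underlying byte array is
   mutated in place. *)
let b = Bytes.of_string "hello"

let () =
  Bytes.set b 0 'H';
  assert (Bytes.to_string b = "Hello")
```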
Why Not Haskell

Haskell:
- lazy by default → difficult to reason about performance/execution
- purity does not bring much to safety: people use unsafePerformIO anyway, + it's less “hackable”
- type classes in practice: used 95% of the time to make the code as unreadable as possible
- more unreadability: you need to know every possible GHC extension (there is even dynamic typing in there!?)
- indentation-based grammar makes code hard to read (blocks > 7 lines)
- does not have (AFAIK) any good subtyping mechanism
>>73
Lazy by default means Haskell doesn't do any extraneous work which implies its work is as efficient as possible.
Purity means safety, as you don't have to reason about the external environment outside of the function. In fact, it is possible to mathematically prove that pure code is correct for what it does. This is not possible for impure code.
Whenever we use unsafePerformIO, we know where the source of impurity is, which tells us where to focus our effort whenever we encounter bugs in our program. We use it when it's needed, and we don't need it everywhere.
That's just your opinion, man. That's just your opinion, man. That's just your opinion, man. That's just your opinion, man.
Lazy by default means Haskell doesn't do any extraneous work which implies its work is as efficient as possible.
Except that thunks are massively inefficient compared to their eager pals
Name:
Anonymous 2015-02-27 18:48
>>75 No, they aren't. They use more memory but save a lot of time (fewer evaluations, because some stuff is memoized while some stuff is never even evaluated). Besides, no language can be purely strict or purely non-strict; it's just a question of choosing the default, and lazy evaluation is definitely not the worst default.
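The memoization point can be made concrete in OCaml, where laziness is explicit: a `lazy` value is a thunk that is forced at most once, however many times it is demanded (a sketch; Haskell's thunks behave the same way, just by default).

```ocaml
(* The suspended computation runs on the first force only; later
   forces return the memoized result. *)
let count = ref 0

let expensive = lazy (incr count; 21 * 2)

let () =
  assert (Lazy.force expensive = 42);
  assert (Lazy.force expensive = 42);
  assert (!count = 1)  (* the body was evaluated exactly once *)
```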
Name:
Anonymous 2015-02-27 18:49
>>76 OK I was just stirring really. I have nothing to respond with. I might pick up Haskell in a couple of months.
Name:
Anonymous 2015-02-27 18:57
>>73 lazy by default → built-in modularity. Abstractions are not only possible to a much greater extent than in the imperative shit-slums, they also combine very easily and efficiently.
Purity brings a lot for safety because it means lack of global state and guarantees (with Safe Haskell) that a large portion of code can never do anything besides returning a value. This can be extended to create secure systems, see for example https://hackage.haskell.org/package/lio
unsafePerformIO is used by dumbass newfags mostly, except in the core libraries of course, but those have pure interfaces.
Haskell's code is short and denotation-like, it's a lot more readable than the {var a; a = b; b = c; c = c - 1; b = a - b*c; return a; } diarrhea.
You do not need to know every possible GHC extension. For example, Oleg Kiselyov uses only GADTs and the *Instances extensions.
No, "dynamic typing" is a library, not an extension.
Indentation is not forced. You can use your semicolons and curly braces all you want. Simon Peyton Jones does, for example.
There are no good subtyping mechanisms. Subtyping is shit that wrecks type inference, type safety, and developer time. Subclassing is much better and Haskell has it (no GHC extensions needed).
Any more dumbass misinformed non-reasons to hate Haskell (which is shit, by the way, but for a whole different set of reasons)?
Name:
Anonymous 2015-02-27 19:00
>>78 What are the shit things then? What's a good setup? GHC is an even bigger install than LaTeX.
Name:
Anonymous 2015-02-27 19:30
>>79 Off the top of my head:
- Weak module system (compared to SML).
- No real packages (cannot disambiguate same-name modules).
- Shitty record types (a community-acknowledged thorn in the ass).
- String is a linked list of chars.
- The numeric tower in the Prelude is dumb-ass (though still better than any of your Scheme or OCaml shit).
- Stream fusion is still not well-integrated (though most languages don't even have it).
GHC is big only because it contains 4 versions of every library (and 5 copies of the compiler itself).