
Floating-point numbers are NOT real numbers!

Name: Anonymous 2018-07-14 14:39

In every introduction to programming, one has to see the author spout this nonsense. Never mind that a format representing every real number through a finite bit pattern is a simple mathematical impossibility[1], and doing it in a bit pattern of fixed length even more so; never mind that this supposedly white lie breeds nothing but confusion when the newly-made programmer tries to do ``real'' arithmetic, encounters seemingly random rounding errors, and concludes that floating-point arithmetic is black magic that corrupts your digital fluids; never mind the staggering number of subtly wrong programs that handle fixed-point quantities like money through floating-point arithmetic when there was never any reason to let rounding errors enter the problem or the solution.

Floating-point arithmetic is not difficult. It works like you'd do arithmetic on paper with a fixed number of digits, except that your digits are binary, so the set of rationals with finitely many digits is a bit smaller. There exist good texts on floating-point arithmetic, understandable even for the kind of mental midget that studies CS to write DISRUPTIVE ENTERPRISE NANOSERVICES nowadays. [2] was written in 1991 -- what's your excuse? If you want to understand the rounding behaviour better, every decent book on numerical analysis contains a section on it, and books like Wilkinson's Rounding Errors in Algebraic Processes (1963) go into even more detail. Not that you need much beyond the trivially derived (even if coarse) bounds in anything that isn't a serious numerical problem; and if it is one, why the fuck did you not learn numerical analysis before tackling it?

Where does this fetishism for ignorance in programming stem from? Nowhere else have I seen a supposedly technical field that happily ignores its own history, takes pride in using something without having read as much as the goddamn manual (let alone a book) and outright teaches this ignorance.

[1] Yes, yes, the computable real numbers are by definition representable and a triple Shalom! to Cantor and the powerset axiom. While we wait for the system that represents every number as an algorithm producing its digits, let's keep things classical.
[2] https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf

Name: Anonymous 2018-07-15 0:24

>>4
Integers can be represented through bit patterns such that every number has a finite pattern. For real numbers this is impossible: there are only countably many finite bit patterns but uncountably many reals -- something the people who supposedly passed a class on calculus should be able to prove by themselves.

However, it's true that you cannot have correct arithmetic on a computer without handling either overflow or OOM conditions, and this is a big problem that most programming languages ``solve'' by simply ignoring it. Many lack even the most basic tools to deal with this in an ordered way: How many languages have only fixnums and no way to detect overflow unless you recompute the result yourself, even though basically all extant hardware detects it automatically? And yet C is ``close to the hardware'' according to some idiots. How many languages cannot react to OOM conditions, e.g. to allow you to drop caches? As far as I know not even Common Lisp can do that portably, though I might be wrong there. OOM handling in general is a place where the ``state of the art'' is abominably primitive even though it is a fundamental problem.
