
C is a big 0

Name: Anonymous 2017-08-04 4:47

https://softwareengineering.stackexchange.com/questions/110804/why-are-zero-based-arrays-the-norm

1-based counting was ``the norm'' for thousands of years. Years begin at 1. The Bible counts chapters and verses from 1. Programming languages before C started arrays from 1 or a user-defined index.

Only a few answers mention that some programming languages let you start from 1. The page should be full of answers saying ``1-based arrays used to be the norm, and C hackers came along and forced everyone to use the C way because they get confused if you let them choose the index.'' Stupid questions do not bother me, but wrong answers do. Stack Overflow and Stack Exchange are spreading wrongness into the world. They are reducing the amount of truth on the Internet.

They all say that arrays count from 0 and nobody can change it because it would ``confuse'' people. This is the C mentality. They want to force their 0 on thousands of years of human history and on every non-C-based programming language. They want everyone else to cater to them because they are too dumb. Pascal programmers can learn and use C. They don't like it, but they can get used to it. C programmers don't want to use Pascal because it's not C.

Stop catering to the idiots. They are not good programmers if they get confused by simple concepts like array base. Kids using QBasic and Pascal can understand it, but these C ``expert hackers'' can't. We should stop dumbing down our languages and debasing important computer science concepts because some people are stupid.

Name: Anonymous 2017-08-11 12:05

>>40
"Mental Midgets" created assembler which counts from 0.
Assembler predates all other languages and their 1-based counting.

Name: Anonymous 2017-08-11 12:28

The Ivory Tower autists ITT completely lost it. 0-based counting wasn't some C terrorist forcing the industry to forget math on the spot; it comes directly from the hardware and assembler.

Name: Anonymous 2017-08-11 18:24

>>41,42
Hardware counts from whatever number you choose, and C isn't 0-based counting either. C array dimensions and data sizes count from 1 like almost everything else. int a[10] declares 10 items (counted from 1) but indexed 0 to 9. sizeof(char) is 1, not 0. C makes it confusing because it's unnatural, and this is why there are so many off-by-one errors. If you need more proof that it's unnatural, C requires you to be able to form a pointer to the element just past the end of the array.
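
To make that concrete, here is a minimal C sketch (my own code, not from the standard): forming the pointer to a[10] is required to work, even though a[10] itself does not exist.

#include <stdio.h>

int main(void)
{
    int a[10];                        /* 10 elements, indexed 0 to 9 */
    int *end = &a[10];                /* legal: one past the last element */

    for (int *p = a; p != end; p++)   /* the idiom this rule exists for */
        *p = 0;

    printf("%zu\n", sizeof a / sizeof a[0]);   /* prints 10: sizes count from 1 */
    return 0;
}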

0-based arrays and 1-based arrays are both special cases of what an array is, sort of like a chicken is a special case of an animal. If your idea of ``animal'' is a chicken, you might think elephants and crocodiles aren't animals, but they are. That's essentially what is going on with CS education. They focus on very limited special cases that dumb you down.

Arrays are a computer science concept that existed before there were high-level languages. Recursion was a computer science concept before programming languages supported it. Traditional programming languages start counting from the lower bound because that is part of the high-level concept of arrays. The lower bound is usually 1 because the number of elements, the last indexable element, and the number you stop counting at are all the same. dim a(0 to 9) counts from 0 to 9 giving 10 elements, real 0-based counting. In Pascal, the indexes don't even have to be numbers. You can count from Monday to Friday or from Red to Violet. All of these concepts are part of what an array is, but not all of what an array is.
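
If you want the general concept in C terms, here is a minimal sketch (the struct and names are mine, not from any library). A 0-based array is just the special case lo = 0:

#include <assert.h>

struct bounded { int lo, hi; int *data; };   /* closed interval [lo, hi] */

int get(struct bounded a, int i)
{
    assert(a.lo <= i && i <= a.hi);   /* trap out-of-range indexes immediately */
    return a.data[i - a.lo];          /* the storage underneath stays 0-based */
}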

Name: Anonymous 2017-08-11 19:11

仕方が無い (it can't be helped)

Name: VIPPER 2017-08-11 21:28

>>43
Pretty sure whenever anyone says something like ``counting from 0'', they mean indexing. It's like when people talk about static/dynamic typing and strong/weak typing,

but you still make good points.

Name: Anonymous 2017-08-12 5:16

>Hardware counts from whatever number you choose
Hardware pointers begin at 0, end of story.

Name: Anonymous 2017-08-12 5:31

Name: Anonymous 2017-08-12 11:35

>>46
Counting and arrays aren't pointers. Pointers don't have to begin at 0 either and they usually don't. People can count by 2 from 100 to 400 or count backwards from 500. There are no pointers there. The C ``for'' loop is so horrible that it makes the simple concept of counting confusing. Normal programming languages have a loop for counting (or iterating over a sequence), but C doesn't. The ``hackers'' also get confused by counting and say <= is more ``bloated'' than < because you have to write one more character (in ASCII because ≤ is one character too). That's how stupid they are. They think using an operator that's one character longer makes their code more ``bloated''.
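
To be concrete, here is counting by 2 from 100 to 400 inclusive, as a minimal C sketch; the closed interval has to be assembled by hand from three clauses, which is my whole point:

#include <stdio.h>

int main(void)
{
    for (int i = 100; i <= 400; i += 2)   /* nothing stops <=; you just have to remember it */
        printf("%d\n", i);
    return 0;
}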

>>47
>and I ended up mixing 0 indexed and 1 indexed arrays throughout my program.
It's good that Haskell has this feature. If you don't need a specific lower bound, start from 1 because Haskell is a high-level language and doesn't treat arrays like pointers. If you do that, you never get confused and never have off-by-one errors. You should also write functions to accept arrays with any index. The index is a semantic part of the array. Pascal and QBasic programmers never got confused, but these ex-C Haskell programmers do because they have been poisoned by C.

Name: Anonymous 2017-08-12 12:08

>>48
Arrays are used like:
mov al,[bx]     ; element 0 (offset 0)
mov al,[bx+1]   ; element 1 (offset 1)
mov al,[bx+2]   ; element 2 (offset 2)


Not adding anything to the first element is equivalent to adding zero.

Name: Anonymous 2017-08-12 15:42

0 makes sense as the very start of the array because it is the additive identity. Twice as far along the array is still the very start, and it makes sense to reflect this in the number you choose to index it.

When converting columns/rows to a single index in the array, it makes more sense to use y*stride+x or (y-1)*stride+x-1, than it does to use (y-1)*stride+x or y*stride+x+1.
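
As a sketch (the function names are mine), the 0-based mapping needs no correction terms and the 1-based one needs exactly one:

#include <stddef.h>

/* row y, column x, both 0-based, into a 0-based array */
size_t idx0(size_t y, size_t x, size_t stride) { return y * stride + x; }

/* row y, column x, both 1-based, into a 1-based array */
size_t idx1(size_t y, size_t x, size_t stride) { return (y - 1) * stride + x; }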

In 0-based arrays, an index is before the start of the array if and only if it is negative. In 1-based arrays, 0 is before the start too.

Many people have made the pointer argument so I will omit that.

0-based arrays allow for interesting conveniences, e.g. the cardinality of an array is also the index you write to when adding another element to it (in resizeable array abstractions).
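
For example (a minimal sketch; bounds checking omitted):

#include <stddef.h>

/* with 0-based indexing, the current length is also the append position */
void push(int *buf, size_t *len, int x)
{
    buf[*len] = x;    /* elements occupy 0 .. len-1, so slot len is free */
    (*len)++;
}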

I don't care what you Pascal loonies think is "the right way", but the 0 way is far more convenient in every conceivable respect.

I suspect that the reason non-mathematicians and other lay people chose 1 to represent the first year in the calendar, the first day in the month, etc., is because these were invented before zero was invented. The Gregorian calendar is a tiny adjustment to the Julian calendar, which appeared 43 years before the concept of zero was first recorded. (Recall also that there is no Roman numeral to represent zero). The rest is down to tradition, which is hard to break.

Name: Anonymous 2017-08-12 19:58

>>48
>The ``hackers'' also get confused by counting and say <= is more ``bloated'' than < because you have to write one more character (in ASCII because ≤ is one character too). That's how stupid they are. They think using an operator that's one character longer makes their code more ``bloated''.
No, it's ``bloated'' because not all architectures have jump if greater-than/jump if less-than instructions. Which means a < only requires one comparison instruction, while a <= may require two.

Name: Anonymous 2017-08-12 19:58

>>51
I mean jump if greater-than-or-equal-to and less-than-or-equal-to, obviously.

Name: Anonymous 2017-08-12 20:34

>>50
You're basically reiterating the Stack Overflow answers. The thread is about how C is dumbing people down and that Stack Overflow is full of these dumbed down people who never used arrays that don't start at 0.

I believe you should be able to choose the lower bound for your array and that 1 makes more sense as a default because the number of elements and last index are the same (unlike C's ``one byte past the end'' rule that standardizes off-by-one counting and prevents trapping errors as soon as possible). It is only C and C-based programmers who say choice is bad because it confuses them. To see why it's confusing, compare the definition of arrays in ISO C and ISO Pascal. Compare the whole languages while you're at it.

>I suspect that the reason non-mathematicians and other lay people chose 1 to represent the first year in the calendar,
Mathematicians counted from 1. FORTRAN counted from 1 because it was made for mathematicians and scientists (now it lets you choose the index). Computer scientists counted from 1. You will see one-based counting and one-based indexing in any CS paper from the 60s and 70s. This was when all of the innovation happened. They did so many things that we couldn't or don't want to do today because the C and UNIX semantics and way of thinking make it too complicated, if not impossible.

Einstein once said ``We cannot solve our problems with the same level of thinking that created them.'' but the tragedy is that a lot of problems were already solved before C came around to create them. Universities are ignoring the solutions because they're older than the problems. The solutions are also very simple and people who like complex solutions don't like hearing that changing the loop and array to make it more like Fortran, Algol, Pascal, and BASIC (all languages older than C) can eliminate a lot of problems instantly.

Since all of these smart people chose to count from 1 with closed intervals, and the ``same level of thinking'' that brought us C, C++, Java, and these other garbage languages was based on 0 and half-open intervals, I think counting from 1 is superior. Call that an ``appeal to authority'' but I think the way they think about computer science concepts has a lot to do with why they made such better programming languages.

>the first day in the month, etc., is because these were invented before zero was invented.
They chose 1 because they are counting days from the beginning of the month. The first day is 1, the second day is 2, etc.

>(Recall also that there is no Roman numeral to represent zero)
There is. It's simply an empty field. Roman numerals come from counting objects, and when there are no objects, there are no marks.

>The rest is down to tradition, which is hard to break.
C programmers have no respect for tradition unless it is Bell Labs hacker tradition. They don't even respect counting. The choice of 1-based arrays and the traditional (counting with a step) loops were based on thousands of years of mathematical and human tradition.

Name: Anonymous 2017-08-12 21:08

>>51
Transforming a var ≤ constant expression into var < constant + 1 is trivial (as long as constant + 1 doesn't overflow the type).

Name: Anonymous 2017-08-12 21:23

>>51
>may require two
What shitty well-used architecture doesn't have this?
Almost no one needs to write code that runs on every architecture.

Name: Anonymous 2017-08-13 3:19

>>53
>C programmers have no respect for tradition unless it is Bell Labs hacker tradition. They don't even respect counting. The choice of 1-based arrays and the traditional (counting with a step) loops were based on thousands of years of mathematical and human tradition.
But this isn't about the attitude of C programmers themselves towards tradition. It's about the people who were raised in the post-C world, who see C-like languages as ``how programming is supposed to be''.

Name: Anonymous 2017-08-13 3:51

>>53
A lot of your post is approached from the point of view that we would all see how 1-based arrays are better if we only took a good look at them. Since that epiphany obviously hasn't happened, please elaborate on what you think makes them specifically better during programming, on a practical level.

Relating to what you do have in your post, the "standardized" off-by-one error with zero-indexing is just an incorrect expectation on behalf of the (presumably inexperienced) programmer. It is not an argument in favour of either method. In fact, indexing the array with its length (despite being a confusion of the concepts of ordinal and cardinal numbers) has a different meaning in 0-based and 1-based arrays, and both meanings are useful for different purposes.

>They chose 1 because they are counting days from the beginning of the month. The first day is 1, the second day is 2, etc.
This is an unsatisfying explanation, and could easily have been rewritten with every number decreased by 1 without contradicting itself.

>There is. It's simply an empty field.
This is more a special case than an actual numeral. Even so, an empty square on a calendar signifying the first day would not inspire confidence in the reader.

Name: Anonymous 2017-08-13 15:15

>>57
>A lot of your post is approached from the point of view that we would all see how 1-based arrays are better if we only took a good look at them. Since that epiphany obviously hasn't happened, please elaborate on what you think makes them specifically better during programming, on a practical level.
They are easier to understand and open your mind to thinking about more powerful array concepts, which you can use even if a programming language doesn't support them. Multi-dimensional arrays, indexing by enumerations, slicing, strides, transposing, array expressions, and all of these other great array concepts originated with the traditional computer science definition and closed intervals because it is a much more powerful way of thinking about arrays. 0-based arrays come from ``hackers'' whose philosophy is ``here's a pointer, you don't even get a length'' and who created gets(). I don't think those are the people we should be copying.

We have good arrays but none of these C-based languages do because the C way is too hard to understand. The adoption of this C way of thinking really destroyed the quality of arrays in new programming languages. These C hackers tell you that good arrays are useless and not worth learning. They tell you to use 0 and manually map from the indexes. That's ``Worse Is All You Get Forever'' because the C way poisons your mind and makes learning good arrays impossible. You can download a compiler for Pascal, Fortran, or Ada for free and have some of these features. They do not require expensive compilers or obscure languages nobody ever heard of.

If you want to understand why it is better, compare the ISO standards for Pascal and C. When you index the elements instead of the ``first byte'' with ``offsets'' and pointer arithmetic, you are thinking about tables/tuples of data instead of pointers and linear memory. Arrays are a very powerful and elegant data structure, and the C way does them a disservice. None of these languages force you to start from 1; they just use closed intervals, and 1 is the default because it's the most natural and most convenient. 1:10 has 10 elements, 1:1 has 1 element, and 1:0 is the empty array. If you start from 0, it would be 0:9, 0:0, and 0:-1 (which is also fine in these languages, but less elegant for a human).
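
The counting rule fits in one line of C (a sketch; the function name is mine):

#include <stddef.h>

/* closed interval lo:hi has hi - lo + 1 elements; hi = lo - 1 is the empty array */
size_t elements(long lo, long hi) { return (hi < lo) ? 0 : (size_t)(hi - lo + 1); }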

Why do you want closed intervals instead of the C way? Besides being easier to understand and matching thousands of years of human conventions (like dates, alphabetized names, numbered ranges, etc.), they make it possible to use enumerations and finite ranged integers. You can have January..December in Pascal, but with half-open intervals, you can't even write it down.
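
The closest C gets is an enum used as an array index, a rough sketch (the names are mine); you get no bounds checking and slot 0 is wasted:

enum month { January = 1, February, March, April, May, June,
             July, August, September, October, November, December };

int days_in[December + 1];   /* usable indexes January..December */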

The C-based programmers have trouble even starting with 1 because they have this incredibly bad distortion of arrays in their heads. They can't think about multi-dimensional arrays without thinking about pointers to pointers or pointers and multiplication. If you have a table of data on a piece of paper, that's a 2D array, but there are no pointers or index multiplication.

I don't have any trouble understanding C arrays or their ``first byte'' philosophy, I just think they're more complicated and less powerful. The C way (and the whole C language, really) is more confusing, harder to learn, harder to understand, limits your implementation and optimization, and you get less out of it. Why would I want to use something like that?

>Relating to what you do have in your post, the "standardized" off-by-one error with zero-indexing is just an incorrect expectation on behalf of the (presumably inexperienced) programmer.
If you declare int a[10], the C standard requires you to be able to form a pointer to a[10] even though no such element exists. That is an off-by-one error and would not be allowed in any sane language. Some languages don't require bounds checks, but this is not the same thing because it is still considered an error and you are getting into implementation-defined/undefined territory. The C standard does not consider it an error and actually mandates this special case of out-of-bounds pointers. If you raise a run-time error, you are violating the C standard.

>This is an unsatisfying explanation, and could easily have been rewritten with every number decreased by 1 without contradicting itself.
Or increased by 1 or decreased by 2 or multiplied by 2. We choose 1 because day 1 is the first day in the month. When we count objects, we say 1, 2, 3, not 0, 1, 2.

Name: Anonymous 2017-08-13 17:35

I know, guys, what if we store something at array[0] (like its size) and start storing the array data from position 1? Both the hardware and the programmers will be happy!

EDIT: Thanks for the gold kind stranger!

Name: Anonymous 2017-08-13 17:58

>>59
>Both the hardware and the programmers will be happy!
the hardware would be happy with anything, but the Appers will see it as starting from 1, so they won't be happy.

Name: VIPPER 2017-08-13 18:01

>1-based counting was ``the norm'' for thousands of years. Years begin at 1. The Bible counts chapters and verses from 1. Programming languages before C started arrays from 1 or a user-defined index.
Programming is applied mathematics, and in math you start from 0.

Name: Anonymous 2017-08-13 20:20

>>61
You count the first object as 1 in math. 0 is what you have before you count anything. Maybe that's what you mean, but C counts differently.

When you count three apples, do you say 0, 1, 2 or do you say 1, 2, 3? I say 1, 2, 3 because I am counting the apples themselves, not pointer offsets.
https://en.wikipedia.org/wiki/History_of_ancient_numeral_systems
>Tallies made by carving notches in wood, bone, and stone were used for at least forty thousand years.
C has no respect for at least 40,000 years of human history of counting.

>To create a record that represented "two sheep", they selected two round clay tokens each having a + sign baked into it. Each token represented one sheep. Representing a hundred sheep with a hundred tokens would be impractical, so they invented different clay tokens to represent different numbers of each specific commodity, and by 4000 BC strung the tokens like beads on a string.[7] There was a token for one sheep, a different token for ten sheep, a different token for ten goats, etc. Thirty-two sheep would be represented by three ten-sheep tokens followed on the string by two one-sheep tokens.
Counting used to be really simple before C made it complicated. People 40,000 years ago were able to count without having off-by-one errors, and Sumerian sheep herders never had any problems, but C programmers have problems today. C is setting humanity back more than 40,000 years.

People and normal programming languages count when they point out the apple, but C does not. You go through the loop without counting, and then suddenly you add 1 at the end because you remembered that there's an apple. That's why the C standard requires you to be able to form a pointer to the element just past the end of the array. That's an off-by-one error mandated by an ISO standard, because too many C loops would break if they did the Right Thing and made it an error.

Here's some pseudocode.
counter = 0
while uncounted apples exist:
    apple = next uncounted apple
    apple.counted = True
    counter = counter + 1

If there are no apples, the counter stays at 0 because the loop never runs.

data Nat = Zero | Succ Nat
count [] = Zero
count (apple : apples) = Succ (count apples)

Zero is the count for the empty list, when you have no apples.

As for why C is so popular, I think it's because the natural way of counting is so easy that babies and prehistoric man from 40,000 years ago can understand it, but the ``hackers'' don't like making things easy for people. This ``hacker counting'' is a new kind of ``math'' they can teach to people that has very little value but seems important because it's hard to learn and not like normal math. It makes it seem like they learned some ``secret'' that's only for programmers, but programmers using non-C-based languages count like everyone else.

Loops in BASIC, Pascal, and Fortran are very easy because they're based on counting, something you already know. C makes counting complicated so it becomes a huge problem these ``hackers'' can pretend to solve because they're too dumb to solve real problems. Counting is not a problem. C does counting wrong and that's the problem. The C hackers forget that all these programming books and compilers still exist and simply switching to a different language would solve all of these problems immediately.

Name: Anonymous 2017-08-14 5:34

>This ``hacker counting'' is a new kind of ``math'' they can teach to people that has very little value but seems important because it's hard to learn and not like normal math
HACKER MATH
DON'T DO IT, OR YOUR KID WILL BECOME A HACKER
DANGEROUS C CODE ``HACKER COUNTING'' FROM ZERO

Name: Anonymous 2017-08-14 7:56

>>63
``Hacker'' as in ``UNIX hacker'' and ``C hacker'', someone who hates doing the Right Thing and doesn't care about what they're doing. ``Worse Is All You Get Forever'' is not just for arrays and counting, but everywhere.

It is unfortunate that we call both ``security crackers'' and ``idiot C programmers'' ``hackers'', but these UNIX hackers are more ``dangerous'' than malicious ``security crackers'' because they think poor quality software and bad programming languages are good. They think C, UNIX, and Plan 9 are good. They do not think ``we will fix them later'', they think they are already good and don't need to be fixed. The UNIX/C hackers enable the other kind of hacker by creating garbage and dumbing down programmers.

Name: Anonymous 2017-08-14 14:05

Zero-based counting also makes sense when you're doing graphics, Euclidean geometry, physics, or anything else where real values are quantized to integral approximations. As a simple example, say you're mapping an angle in radians to quadrants of the Cartesian plane. If you index the quadrants starting from 0, then you simply multiply the angle in radians by 2 / pi and cast to int to get the quadrant. Indexing from 1, you'd have to add 1 at the end of your calculation. A similar analogy can be made for just about everything that deals with quantized lengths like this. 0-based indexing is just simpler because distances start from 0, not 1.
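
A minimal sketch (assumes theta is in [0, 2*pi), and M_PI, which is POSIX rather than strict ISO C):

#include <math.h>

int quadrant0(double theta) { return (int)(theta * 2.0 / M_PI); }        /* 0, 1, 2, 3 */
int quadrant1(double theta) { return (int)(theta * 2.0 / M_PI) + 1; }    /* 1, 2, 3, 4 */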

Name: Anonymous 2017-08-14 14:54

>>65
>Zero-based counting also makes sense when you're doing graphics
That is not how C counts. C counts at the end of loops instead of the beginning, so it appears the same if the loop completes, but if you exit early, it is off by one. I say ``appears'' because the C way, when counting with pointers, can actually point outside the array instead of pointing to the last element.

However, this thread is not about what's better because all of the languages I prefer let you start from 0, 1, -10 or whatever is most convenient. It's about Stack Overflow being so dumbed down that they have never seen arrays that didn't start at 0, and computer science classes teaching inferior distortions of concepts in a way that is both harder to understand and less powerful.

Name: Anonymous 2017-08-14 16:41

>>66
>It's about Stack Overflow being so dumbed down
SO's quality, just like Wikipedia's, is determined by its community members.
In SO's case, it's mostly code monkeys and mental midgets who use it.

Name: Anonymous 2017-08-14 20:56

>>67
Yes, but unfortunately that is the level of education now.

Name: Anonymous 2017-08-15 12:24

>>67
Not like Wikipedia is any better. I'm aware you didn't claim otherwise, just wanted to put it on the record.

Name: Anonymous 2017-08-15 22:23

>>69
yeah, they both are terrible for the same reason.
