C is a big 0

Name: Anonymous 2017-08-04 4:47

https://softwareengineering.stackexchange.com/questions/110804/why-are-zero-based-arrays-the-norm

1-based counting was ``the norm'' for thousands of years. Years begin at 1. The Bible counts chapters and verses from 1. Programming languages before C started arrays from 1 or a user-defined index.

Only a few answers mention that some programming languages let you start from 1. This should be filled with answers saying ``1-based arrays used to be the norm and C hackers came along and forced everyone to use the C way because they get confused if you let them choose the index.'' Stupid questions do not bother me, but wrong answers do. Stack Overflow and Stack Exchange are spreading wrongness into the world. They are reducing the amount of truth on the Internet.

They all say that arrays count from 0 and nobody can change it because it would ``confuse'' people. This is the C mentality. They want to force their 0 on thousands of years of human history and on every non-C-based programming language. They want everyone else to cater to them because they are too dumb. Pascal programmers can learn and use C. They don't like it, but they can get used to it. C programmers don't want to use Pascal because it's not C.

Stop catering to the idiots. They are not good programmers if they get confused by simple concepts like array base. Kids using QBasic and Pascal can understand it, but these C ``expert hackers'' can't. We should stop dumbing down our languages and debasing important computer science concepts because some people are stupid.

Name: Anonymous 2017-08-04 4:53

C arrays start from zero, because indexing is syntax sugar for pointer arithmetic:
X[i] means *(X + i), so X[0] is the first element in the array.
You can also refer to elements before the array/pointer:
X[-1] refers to the element one before X[0]
X[-2] refers to the element two before X[0]
C translates these to *(X - 1) and *(X - 2)

Name: Anonymous 2017-08-04 5:48

Zero-based arrays are the only option that makes sense within the context of C. One of the main principles of C is that array element access is syntactic sugar for pointer arithmetic. This is the real reason why arrays in C are so frequently ``unsafe" and why it isn't plausible to implement array bounds checking: there are many data items that are not declared as arrays but still have reason to be accessed as if they were, and providing bounds checking only for data items declared as arrays would likely lead to more errors when dealing with ``array-like objects", because programmers wouldn't learn to be cautious when dealing with indexing.

I do actually agree with the premise that 1-based arrays (or even better, custom array indexing) can be preferable for application programming. That's why so many earlier languages did use 1-indexed arrays. But to express incredulity at the prevalence of zero-based array indexing only demonstrates historical ignorance. Zero-based indexing may not be ideal, but there is a coherent reason why it's so widely used.

C was a big deal. It both offered performance sufficient for writing system modules and utilities without resorting to ASM, and was highly portable. That was one of the keys to the success of Unix: it didn't have to be completely rewritten when porting to a new platform (and unlike today, there were a lot of platforms to consider in those days; nowadays, for ``general purpose" software, pretty much only x86 and ARM need to be considered). And nobody wanted to add or replace core Unix features in anything other than C, because it would reduce the portability of the operating system as a whole. This ultimately led to C and Unix becoming synonymous, and resulted in many people having some exposure to C; as a consequence of Unix proliferation, C became popular as an applications language as well as a systems language, even though it wasn't ideal for the former in any aspect other than portability.

C was so popular and well-known that many later languages (Java, JavaScript, CSS, PHP, and Perl, to name a few) used a syntax very much like C, including zero-based arrays, even though the main justification for zero-based arrays (pointer arithmetic) was not present in many of those C-derived languages. But it's what programmers are familiar with.

I do agree that C (and modern computing in general) is in some ways a step backwards. However, StackOverflow is a site intended for programmers to seek advice from other programmers. And most programmers are paid for being able to write working code in the languages that exist today, not for knowing the history of how those languages came to be. That information can be useful, certainly, and its absence is a serious flaw in programming education (for a specific example, my education was so focused on C-style syntax and OOP that it took me several years to fully appreciate the significance of structured programming; programming courses largely overlook that there ever was such a thing as non-structured programming).

But it's a mistake to expect programmers as a group to be familiar with the history and philosophy of programming. That's simply not the culture we live in. People who began programming after the Unix/C revolution simply aren't aware of what life was like before. And they're not trained to think ``is this really the most logical way to do things?"; rather, they're trained to write code in the languages that are used, not the ones that they wish existed and were used.

Name: Anonymous 2017-08-04 6:27

Name: Anonymous 2017-08-04 7:38

>>4
0-based arrays leverage core assembler paradigms and world-class optimizations to provide our clients worldwide with robust, scalable, modern turnkey implementations of flexible, personalized, cutting-edge network-enabled e-business application product suite batch solution architectures that accelerate response to customer and real-world market demands and reliably adapt to evolving technology needs, seamlessly and efficiently integrating and synchronizing with our existing legacy infrastructure, enhancing the computing capabilities of our production environments across the enterprise while giving us a critical competitive advantage and taking programming to the next level.

Name: Anonymous 2017-08-04 7:41

Ashley Yakeley
Posted November 21, 2013 at 3:15 am | Permalink

Clearly bytes should be one-based: they have 256 possible values, and they are most easily counted 1,2,3…256. I wonder who first came up with the idea of zero-based bytes?

Name: Anonymous 2017-08-04 10:28

Mayans wasn't it?

Name: Anonymous 2017-08-04 10:30

>>6
1..256 can be the range of values of one byte, as can 1000..1255 and -99999..-99744. Any range n..n+255 can be stored in a byte. I don't know why universities don't teach this anymore either.

Name: Anonymous 2017-08-04 11:00

>>4
That article doesn't tell the whole story because languages designed after BCPL and C also start arrays at 1 or an arbitrary index.

These ``Worse Is All You Get Forever'' people say it's ``confusing'' to be able to pick the lower bound. I was never confused when I used QBasic on DOS. I was never confused when I used Pascal. The multi-dimensional, arbitrarily indexed array is easy to learn in these languages, but all you can learn in C is the 0-based one-dimensional ``decayed to pointer'' array, because C arrays are harder for people to understand. The C way is confusing because it is inferior, not because it is superior. C-based languages have worse arrays because the C definition of array makes anything more powerful seem impossible.

The C way is dumbing down programmers and ruining languages.

Name: Anonymous 2017-08-05 1:49

C arrays are not ``hard to understand". C for loops are ``hard to understand", making one of the most common array tasks (iterating over an array) confusing and leading to off-by-one errors and buffer overflows. The intent of the C for loop syntax is to make it ``versatile" by allowing the initialization, test, and increment expressions to be arbitrary, or omitted entirely. However, it makes the most common loop format (iterating by 1 over an integer range) needlessly complex for humans to parse, and while it does allow less common loop structures, it's more readable to just use explicit ifs and gotos for those. The Pascal and BASIC loop syntaxes are more intuitive, and goto can handle the less typical cases.

Name: Anonymous 2017-08-05 2:56

As somewon whomst learned x86 asm before C, zero-based arrays maid perfect sense upon introduction.

Name: Anonymous 2017-08-05 3:36

Are babies born 1 year old or 0 years old?
https://en.wikipedia.org/wiki/East_Asian_age_reckoning

Name: Anonymous 2017-08-05 4:36

>>10
C arrays are not ``hard to understand".
The C array is harder to understand than the Pascal or BASIC array even though it's less powerful. We have better ways to do arrays that used to be taught in computer science classes and used by ordinary programmers.

The C books teach something about pointers to the first byte of an element, which is an implementation detail of one possible implementation. They also ``flatten'' variable-length multidimensional arrays into one dimension so you have to use multiplication to get the address, another implementation detail. Arrays are a simple and beautiful computer science concept, and all of these complicated implementation details are ugly and get in the way of understanding them. These CS classes are dumbing down programmers by using inferior definitions and teaching methods that make things harder to understand. C is dumbing down people and programming languages.

The pointer arithmetic makes arrays starting from a number besides 0 too hard to understand and C programmers want you to fake multi-dimensional arrays with multiplication, which makes multi-dimensional arrays too hard to understand and use, so these new language creators don't include them in their languages.

C for loops are ``hard to understand", making one of the most common array tasks (iterating over an array) confusing and leading to off-by-one errors and buffer overflows.
Yes they are, and this is another problem. The C for loop is also full of implementation details, like whether the value is read before or after the 1 is added, even when the result is thrown away. There are at least 4 different ways to add 1 to the index (i++, ++i, i += 1, i = i + 1). The array and loop together make C even worse.

Name: Anonymous 2017-08-05 7:13

This is like claiming that we should use tau instead of pi. The exact numbers we use for the indices of an array are superficial at best and don't affect the implementation or understanding of any algorithm. Asking for them to directly represent data from your domain in your code speaks of an inability to distinguish between the two and to map even less trivial differing concepts from one to the other.

Name: Anonymous 2017-08-05 8:41

>>11
Of course they make sense from a low-level background, because C style arrays are designed to correspond with common machine instructions.

>>13
The C books teach something about pointers to the first byte of an element, which is an implementation detail of one possible implementation.
It's my understanding that that is not merely an implementation detail as far as C is concerned; it's a defined part of the language semantics, or at least arrays must behave as if their elements are stored contiguously in memory. C does place fairly strict requirements on how features are implemented, because C is by design a minimalist language, and unless its basic constructs are implemented consistently, more complex features built on top of those constructs won't be reliable. It's not the only way to make a language, but it is in line with the C philosophy. The mistake is in thinking that the qualities that make a good low-level systems language will also make a good apping lang or a good scripting lang.

They also ``flatten'' variable-length multidimensional arrays into one dimension and you have to use multiplication to get the address, another implementation detail.
Because that is pretty much a requirement when it comes to treating arrays of arbitrary dimensions (or any object, really) as a block of contiguous bytes. It's an inherent part of how the whole C environment was designed. Not ideal for languages built with other models in mind, but it's perfectly reasonable within the context of C. The problem isn't C, C is perfectly good for its niche, the problem is that so many other languages end up being "C with design patterns", "C for the web", "C for dummies", "C with significant whitespace", "C with lazy evaluation", "C as a markup language" etc, etc.

Name: Anonymous 2017-08-05 21:10

>>14
C arrays force you to think about implementation details and actually prevent you from understanding CS concepts. If you want to use a Pi analogy, the C way would be like measuring the diameter and circumference of a specific circle that someone drew in a notebook and declaring that as the ``Pi circle'' because any other way of calculating Pi is ``confusing'' for ``notebook hackers''. These ``notebook hackers'' also claim the mathematical properties of Pi are ``outdated'' and ``abstract''.

>>15
It's my understanding that that is not merely an implementation detail as far as C is concerned, it's a defined part of the language semantics
This makes it hard to understand what an array is. It might be how an array is implemented, but it's not what an array is. An array drawn on paper with a pencil doesn't have anything to do with pointers or contiguous byte blocks. Computer science classes used to teach that you should never confuse a concept with its implementation.

C does place fairly strict requirements on how features are implemented, because C is by design a minimalist language and unless its basic constructs are implemented consistently
Making the semantics more complicated by forcing you to implement features a certain way is the exact opposite of minimalist. A minimalist design places minimal requirements on the implementation. Compare the definition of arrays in ISO Pascal and ISO C.

The problem isn't C, C is perfectly good for its niche, the problem is that so many other languages end up being "C with design patterns", "C for the web", "C for dummies", "C with significant whitespace", "C with lazy evaluation", "C as a markup language" etc, etc.
It's about to get even worse because WASM is forcing the ``block of contiguous bytes'' model even more than C does.

Name: Anonymous 2017-08-07 5:57

This has completely drifted from the original topic of the thread, which is about Stack Overflow being wrong and upvoting wrong answers, and harming your understanding. Stack Overflow makes you know less after reading it than before. It replaces truth that you are unsure about with false information. 1-based arrays are the norm for programming languages that aren't based on C. This is because of what an array is and what a high-level language is. Fortran keeps adding more and more array features because arrays as a concept are more powerful than their implementation in any language. The Fortran people know what arrays are and aren't dumbed down by misunderstandings and Stack Overflow answers.

Universities are also part of the problem, because it seems like nobody knows anything anymore except how to use Google and Stack Overflow. Anyone who took a CS class in the 90s and learned Pascal would know right away that these answers are wrong. They are dumbing down their students by teaching inferior and harder-to-understand C arrays. This has even dumbed down languages like Java. Wrongdoers say things about ``an inability to distinguish between the two and map even less trivial differing concepts from one to the other'', which is mumbo jumbo for ``people have made compilers to do it since the 1950s, but we're too ignorant of computer science concepts to know it is even possible''. They are saying that ``Worse is Better'' in a very literal sense. Having a worse compiler is not better. Having a worse understanding of computer science is not better. Because the students do not know anything about computer science, ``Worse Is All You Get Forever''.

Name: Anonymous 2017-08-07 15:38

Counting != indexing.

Name: Anonymous 2017-08-07 16:00

>>18
Are C arrays counted from 1 or 0?

Name: Anonymous 2017-08-07 20:59

>>19
Usually

Name: Anonymous 2017-08-08 1:37

>>19
From 1 because int a[20] declares an array with 20 elements indexed from 0 to 19.

Name: Anonymous 2017-08-08 1:45

>>21
lol

Name: Anonymous 2017-08-08 2:04

>>22
It's true.

Name: Anonymous 2017-08-08 4:56

C doesn't actually have "Arrays"; they exist only as a compile-time abstraction for sizeof()
http://codepad.org/wlzc7Iwu

Name: Anonymous 2017-08-08 5:04

>>24
I have no idea what I'm looking at.

Name: Anonymous 2017-08-08 5:24

>>25
C "Arrays" exist only at the compile-time stage.
At runtime, they become pointers.
The for loop iterates over the "Array" outside its limits.

Name: Anonymous 2017-08-08 6:53

Structs will not protect you either
http://codepad.org/U38pndJN

Name: Anonymous 2017-08-08 9:29

Maybe you should ask yourself why memory addresses start at 0 instead of 1 and you'll realize why 0-based indexing is the preferable choice.

Name: Anonymous 2017-08-08 9:59

>>28
but 0 is the null pointer address...

Name: Anonymous 2017-08-08 10:01

>>29
...which exists only in software abstraction.
In hardware, 0 is a valid memory address.

Name: Anonymous 2017-08-09 1:46

>>30
how would you access it in software abstraction then?
arrays are also software abstraction.

Name: Anonymous 2017-08-09 4:32

>>31
In kernel mode you can access any piece of physical memory.

Name: Anonymous 2017-08-09 17:31

>>32
Oh yeah? well prove it

Name: Anonymous 2017-08-09 22:03

>>33
First I need a formal specification written in Coq.

Name: Anonymous 2017-08-10 1:09

>>31
arrays are also software abstraction
Unless you're referring specifically to ``high level" arrays that have ``nice" features like bounds checking, arrays are usually implemented in hardware. Basically, any architecture with load/store instructions taking register or memory operands, and which uses a pointer representation similar to integers (which, by the way, is not required by the C Standard) has array support at the hardware level. Only very primitive (i.e. pre-1950s) computers lacked those features, requiring that array operations be implemented using self-modifying code, or writing a massive switch statement for every single array. Even then, the address space was just a giant array in terms of how the computer accessed it, but without indirect addressing you basically had to treat it as a bunch of slow registers.

>>32
Only if your hardware actually allows 0 to map to somewhere in physical memory, rather than triggering an internal hardware exception whenever you try to access something at address 0.

Name: Anonymous 2017-08-10 2:29

1-based counting would reduce the amount of off-by-one errors

Name: Anonymous 2017-08-11 0:02

>>36
Fewer mental midgets would reduce the amount of off-by-one errors.

Name: Anonymous 2017-08-11 2:35

>>34
I'll get back to you on that

Name: Anonymous 2017-08-11 3:23

>>37
It's easier to switch to one-based counting than to get rid of mental midgets.

Name: Anonymous 2017-08-11 11:44

>>37,39
Mental midgets created 0-based counting. Real computer scientists count from 1.

Name: Anonymous 2017-08-11 12:05

>>40
"Mental Midgets" created assembler which counts from 0.
Assembler predates all other languages and their 1-based counting.

Name: Anonymous 2017-08-11 12:28

The Ivory Tower autists ITT completely lost it. 0-based counting wasn't some C terrorist forcing the industry to forget math on the spot; it comes directly from the hardware and assembler.

Name: Anonymous 2017-08-11 18:24

>>41,42
Hardware counts from whatever number you choose, and C isn't 0-based counting either. C array dimensions and data sizes count from 1 like almost everything else. int a[10] declares 10 items (counting from 1) but indexed 0 to 9. sizeof(char) is 1, not 0. C makes it confusing because it's unnatural, and this is why there are so many off-by-one errors. If you need more proof that it's unnatural, C requires you to be able to form a pointer to the element after the end of the array.

0-based arrays and 1-based arrays are both special cases of what an array is, sort of like a chicken is a special case of an animal. If your idea of ``animal'' is a chicken, you might think elephants and crocodiles aren't animals, but they are. That's essentially what is going on with CS education. They focus on very limited special cases that dumb you down.

Arrays are a computer science concept that existed before there were high-level languages, just as recursion was a computer science concept before programming languages supported it. Traditional programming languages start counting from the lower bound because that is part of the high-level concept of arrays. The lower bound is usually 1 because the number of elements, the last indexable element, and the number you stop counting at are then all the same. dim a(0 to 9) counts from 0 to 9, giving 10 elements: real 0-based counting. In Pascal, the bounds don't even have to be numbers. You can count from Monday to Friday or Red to Violet. All of these concepts are part of what an array is, but not all of what an array is.

Name: Anonymous 2017-08-11 19:11

仕方が無い

Name: VIPPER 2017-08-11 21:28

>>43
Pretty sure whenever anyone says something similar to 0 count, they mean indexing. It's like when people talk about static/dynamic typing and strong/weak typing

but you still make good points.

Name: Anonymous 2017-08-12 5:16

>Hardware counts from whatever number you choose
Hardware pointers begin at 0, end of story.

Name: Anonymous 2017-08-12 5:31

Name: Anonymous 2017-08-12 11:35

>>46
Counting and arrays aren't pointers. Pointers don't have to begin at 0 either and they usually don't. People can count by 2 from 100 to 400 or count backwards from 500. There are no pointers there. The C ``for'' loop is so horrible that it makes the simple concept of counting confusing. Normal programming languages have a loop for counting (or iterating over a sequence), but C doesn't. The ``hackers'' also get confused by counting and say <= is more ``bloated'' than < because you have to write one more character (in ASCII because ≤ is one character too). That's how stupid they are. They think using an operator that's one character longer makes their code more ``bloated''.

>>47
and I ended up mixing 0 indexed and 1 indexed arrays throughout my program.
It's good that Haskell has this feature. If you don't need a specific lower bound, start from 1 because Haskell is a high-level language and doesn't treat arrays like pointers. If you do that, you never get confused and never have off-by-one errors. You should also write functions to accept arrays with any index. The index is a semantic part of the array. Pascal and QBasic programmers never got confused, but these ex-C Haskell programmers do because they have been poisoned by C.

Name: Anonymous 2017-08-12 12:08

>>48
Arrays are used like:
mov al,[bx]
mov al,[bx+1]
mov al,[bx+2]

(al because a byte array has 1-byte elements; with 16-bit elements you'd load ax from [bx], [bx+2], [bx+4].)

Not adding anything to the first element is equivalent to adding zero.

Name: Anonymous 2017-08-12 15:42

0 makes sense as the very start of the array because it is the additive identity. Twice as far along the array is still the very start, and it makes sense to reflect this in the number you choose to index it.

When converting columns/rows to a single index in the array, it makes more sense to use y*stride+x or (y-1)*stride+x-1, than it does to use (y-1)*stride+x or y*stride+x+1.

In 0-based arrays, an index is before the start of the array if and only if it is negative. In 1-based arrays, 0 is before the start too.

Many people have made the pointer argument so I will omit that.

0-based arrays allow for interesting conveniences, e.g. the cardinality of an array is also the index you write to when adding another element to it (in resizeable array abstractions).

I don't care what you Pascal loonies think is "the right way", but the 0 way is far more convenient in every conceivable respect.

I suspect that the reason non-mathematicians and other lay people chose 1 to represent the first year in the calendar, the first day in the month, etc., is because these were invented before zero was invented. The Gregorian calendar is a tiny adjustment to the Julian calendar which appeared 43 years before the concept of zero was first recorded. (Recall also that there is no Roman numeral to represent zero). The rest is down to tradition, which is hard to break.

Name: Anonymous 2017-08-12 19:58

>>48
The ``hackers'' also get confused by counting and say <= is more ``bloated'' than < because you have to write one more character (in ASCII because ≤ is one character too). That's how stupid they are. They think using an operator that's one character longer makes their code more ``bloated''.
No, it's ``bloated" because not all architectures have jump if greater-than/jump if less-than instructions. Which means a < only requires one comparison instruction, while a <= may require two.

Name: Anonymous 2017-08-12 19:58

>>51
I mean jump if greater-than-or-equal-to and less-than-or-equal-to, obviously.

Name: Anonymous 2017-08-12 20:34

>>50
You're basically reiterating the Stack Overflow answers. The thread is about how C is dumbing people down and that Stack Overflow is full of these dumbed down people who never used arrays that don't start at 0.

I believe you should be able to choose the lower bound for your array and that 1 makes more sense as a default because the number of elements and last index are the same (unlike C's ``one byte past the end'' rule that standardizes off-by-one counting and prevents trapping errors as soon as possible). It is only C and C-based programmers who say choice is bad because it confuses them. To see why it's confusing, compare the definition of arrays in ISO C and ISO Pascal. Compare the whole languages while you're at it.

I suspect that the reason non-mathematicians and other lay people chose 1 to represent the first year in the calendar,
Mathematicians counted from 1. FORTRAN counted from 1 because it was made for mathematicians and scientists (now it lets you choose the index). Computer scientists counted from 1. You will see one-based counting and one-based indexing in any CS paper from the 60s and 70s. This was when all of the innovation happened. They did so many things that we couldn't or don't want to do today because the C and UNIX semantics and way of thinking make it too complicated, if not impossible.

Einstein once said ``We cannot solve our problems with the same level of thinking that created them.'' but the tragedy is that a lot of problems were already solved before C came around to create them. Universities are ignoring the solutions because they're older than the problems. The solutions are also very simple and people who like complex solutions don't like hearing that changing the loop and array to make it more like Fortran, Algol, Pascal, and BASIC (all languages older than C) can eliminate a lot of problems instantly.

Since all of these smart people chose to count from 1 with closed-intervals and the ``same level of thinking'' that brought us C, C++, Java, and these other garbage languages was based on 0 and half-open intervals, I think counting from 1 is superior. Call that an ``appeal to authority'' but I think the way they think about computer science concepts has a lot to do with why they made such better programming languages.

the first day in the month, etc., is because these were invented before zero was invented.
They chose 1 because they are counting days from the beginning of the month. The first day is 1, the second day is 2, etc.

(Recall also that there is no Roman numeral to represent zero)
There is. It's simply an empty field. Roman numerals come from counting objects, and when there are no objects, there are no marks.

The rest is down to tradition, which is hard to break.
C programmers have no respect for tradition unless it is Bell Labs hacker tradition. They don't even respect counting. The choice of 1-based arrays and the traditional (counting with a step) loops were based on thousands of years of mathematical and human tradition.

Name: Anonymous 2017-08-12 21:08

>>51
Transforming a var ≤ constant expression into var < constant + 1 is trivial (as long as the constant isn't already the maximum value of its type).

Name: Anonymous 2017-08-12 21:23

>>51
may require two
What shitty well-used architecture doesn't have this?
Almost no one needs to write code that runs on every architecture.

Name: Anonymous 2017-08-13 3:19

>>53
C programmers have no respect for tradition unless it is Bell Labs hacker tradition. They don't even respect counting. The choice of 1-based arrays and the traditional (counting with a step) loops were based on thousands of years of mathematical and human tradition.
But this isn't about the attitude of C programmers themselves towards tradition. It's about the people who were raised in the post-C world, who see C-like languages as ``how programming is supposed to be."

Name: Anonymous 2017-08-13 3:51

>>53
A lot of your post is approached from the point of view that we would all see how 1-based arrays are better if we only took a good look at them. Since that epiphany obviously hasn't happened, please elaborate on what you think makes them specifically better during programming, on a practical level.

Relating to what you do have in your post, the "standardized" off-by-one error with zero-indexing is just an incorrect expectation on the part of the (presumably inexperienced) programmer. It is not an argument in favour of either method. In fact, indexing the array with its length (despite being a confusion of the concepts of ordinal and cardinal numbers) has a different meaning in 0-based and 1-based arrays, and both meanings are useful for different purposes.

They chose 1 because they are counting days from the beginning of the month. The first day is 1, the second day is 2, etc.
This is an unsatisfying explanation, and could easily have been rewritten with every number decreased by 1 without contradicting itself.

There is. It's simply an empty field.
This is more a special case than an actual numeral. Even so, an empty square on a calendar signifying the first day would not inspire confidence in the reader.

Name: Anonymous 2017-08-13 15:15

>>57
A lot of your post is approached from the point of view that we would all see how 1-based arrays are better if we only took a good look at them. Since that epiphany obviously hasn't happened, please elaborate on what you think makes them specifically better during programming, on a practical level.
They are easier to understand and open your mind to thinking about more powerful array concepts. You can use these concepts even if a programming language doesn't support them. Multi-dimensional arrays, indexing by enumerations, slicing, strides, transposing, array expressions, and all of these other great array concepts originated with the traditional computer science definition and closed intervals, because it is a much more powerful way of thinking about arrays.

0-based arrays come from ``hackers'' who have the philosophy ``here's a pointer, you don't even get a length'' and created gets(). I don't think those are the people we should be copying. We have good arrays, but none of these C-based languages do, because the C way is too hard to understand. The adoption of this C way of thinking really destroyed the quality of arrays in new programming languages. These C hackers tell you that good arrays are useless and not worth learning. They tell you to use 0 and manually map from the indexes. That's ``Worse Is All You Get Forever'' because the C way poisons your mind and makes learning good arrays impossible.

You can download a compiler for Pascal, Fortran, or Ada for free and have some of these features. They do not require expensive compilers or obscure languages nobody ever heard of.

If you want to understand why it is better, compare the ISO standards for Pascal and C. When you index the elements instead of the ``first byte'' with ``offsets'' and pointer arithmetic, you are thinking about tables/tuples of data instead of pointers and linear memory. Arrays are a very powerful and elegant data structure and the C way does a disservice to them. None of these languages force you to start from 1, they just use closed intervals and 1 is the default because it's the most natural and most convenient. 1:10 has 10 elements, 1:1 has 1 element, and 1:0 is the empty array. If you start from 0, it would be 0:9, 0:0, and 0:-1 (which is also fine in these languages, but less elegant for a human).

Why do you want closed intervals instead of the C way? Besides being easier to understand and matching thousands of years of human conventions (like dates, alphabetized names, numbered ranges, etc.), they make it possible to use enumerations and finite ranged integers. You can have January..December in Pascal, but with half-open intervals, you can't even write it down.

The C-based programmers have trouble even starting with 1 because they have this incredibly bad distortion of arrays in their heads. They can't think about multi-dimensional arrays without thinking about pointers to pointers or pointers and multiplication. If you have a table of data on a piece of paper, that's a 2D array, but there are no pointers or index multiplication.

I don't have any trouble understanding C arrays or their ``first byte'' philosophy, I just think they're more complicated and less powerful. The C way (and the whole C language, really) is more confusing, harder to learn, harder to understand, limits your implementation and optimization, and you get less out of it. Why would I want to use something like that?

Relating to what you do have in your post, the "standardized" off-by-one error with zero-indexing is just an incorrect expectation on behalf of the (presumably inexperienced) programmer.
If you declare int a[10], the C standard requires you to be able to form the pointer a + 10, one past the end, even though no element a[10] exists. That is an off-by-one error and would not be allowed in any sane language. Some languages don't require bounds checks, but that is not the same thing, because an out-of-range index is still considered an error and you are in implementation-defined/undefined territory. The C standard does not consider it an error and actually mandates this special case of out-of-bounds pointers. If you raise a run-time error, you are violating the C standard.

This is an unsatisfying explanation, and could easily have been rewritten with every number decreased by 1 without contradicting itself.
Or increased by 1 or decreased by 2 or multiplied by 2. We choose 1 because day 1 is the first day in the month. When we count objects, we say 1, 2, 3, not 0, 1, 2.

Name: Anonymous 2017-08-13 17:35

I know, guys, what if we store something at
array[0]
(like its size) and start storing the array data from position 1? Both the hardware and the programmers will be happy!

EDIT: Thanks for the gold kind stranger!

Name: Anonymous 2017-08-13 17:58

>>59
Both the hardware and the programmers will be happy!
the hardware would be happy with anything, but the Appers will see it as starting from 1, so they won't be happy.

Name: VIPPER 2017-08-13 18:01

1-based counting was ``the norm'' for thousands of years. Years begin at 1. The Bible counts chapters and verses from 1. Programming languages before C started arrays from 1 or a user-defined index.
Programming is applied mathematics, and in math you start from 0

Name: Anonymous 2017-08-13 20:20

>>61
You count the first object as 1 in math. 0 is what you have before you count anything. Maybe that's what you mean, but C counts differently.

When you count three apples, do you say 0, 1, 2 or do you say 1, 2, 3? I say 1, 2, 3 because I am counting the apples themselves, not pointer offsets.
https://en.wikipedia.org/wiki/History_of_ancient_numeral_systems
Tallies made by carving notches in wood, bone, and stone were used for at least forty thousand years.
C has no respect for at least 40,000 years of human history of counting.

To create a record that represented "two sheep", they selected two round clay tokens each having a + sign baked into it. Each token represented one sheep. Representing a hundred sheep with a hundred tokens would be impractical, so they invented different clay tokens to represent different numbers of each specific commodity, and by 4000 BC strung the tokens like beads on a string.[7] There was a token for one sheep, a different token for ten sheep, a different token for ten goats, etc. Thirty-two sheep would be represented by three ten-sheep tokens followed on the string by two one-sheep tokens.
Counting used to be really simple before C made it complicated. People 40,000 years ago were able to count without having off-by-one errors, and Sumerian sheep herders never had any problems, but C programmers have problems today. C is setting humanity back more than 40,000 years.

People and normal programming languages count when they point out the apple, but C does not. You go through the loop without counting, and then you suddenly add 1 at the end because you remembered that there's an apple. That's why the C standard requires you to be able to form a pointer to the byte outside the array. That's an off-by-one error mandated by an ISO standard because too many C loops would break if they did the Right Thing and made it an error.

Here's some pseudocode.
counter = 0
while uncounted apples exist:
    apple = next uncounted apple
    apple.counted = True
    counter = counter + 1

If there are no apples, the counter stays at 0 because the loop never runs.

count (apple : apples) = Succ (count apples)
count [] = Zero

0 is the empty list, when you have no apples.

As for why C is so popular, I think it's because the natural way of counting is so easy that babies and prehistoric man from 40,000 years ago can understand it, but the ``hackers'' don't like making things easy for people. This ``hacker counting'' is a new kind of ``math'' they can teach to people that has very little value but seems important because it's hard to learn and not like normal math.

It makes it seem like they learned some ``secret'' that's only for programmers, but programmers using non-C-based languages count like everyone else. Loops in BASIC, Pascal, and Fortran are very easy because they're based on counting, something you already know.

C makes counting complicated so it becomes a huge problem these ``hackers'' can pretend to solve because they're too dumb to solve real problems. Counting is not a problem. C does counting wrong and that's the problem. The C hackers forget that all these programming books and compilers still exist and simply switching to a different language would solve all of these problems immediately.

Name: Anonymous 2017-08-14 5:34

This ``hacker counting'' is a new kind of ``math'' they can teach to people that has very little value but seems important because it's hard to learn and not like normal math
HACKER MATH
DON'T DO IT, OR YOUR KID WILL BECOME A HACKER
DANGEROUS C CODE ``HACKER COUNTING'' FROM ZERO

Name: Anonymous 2017-08-14 7:56

>>63
``Hacker'' as in ``UNIX hacker'' and ``C hacker'', someone who hates doing the Right Thing and doesn't care about what they're doing. ``Worse Is All You Get Forever'' is not just for arrays and counting, but everywhere.

It is unfortunate that we call both ``security crackers'' and ``idiot C programmers'' ``hackers'', but these UNIX hackers are more ``dangerous'' than malicious ``security crackers'' because they think poor quality software and bad programming languages are good. They think C, UNIX, and Plan 9 are good. They do not think ``we will fix them later'', they think they are already good and don't need to be fixed. The UNIX/C hackers enable the other kind of hacker by creating garbage and dumbing down programmers.

Name: Anonymous 2017-08-14 14:05

Zero-based counting also makes sense when you're doing graphics, Euclidean geometry, physics, or anything else where real values are quantized to integral approximations. As a simple example, say you're mapping an angle in radians to quadrants of the Cartesian plane. If you index the quadrants starting from 0, then you simply multiply the angle in radians by 2 / pi and cast to int to get the quadrant. Indexing from 1, you'd have to add 1 at the end of your calculation. A similar analogy can be made for just about everything that deals with quantized lengths like this. 0-based indexing is just simpler because distances start from 0, not 1.

Name: Anonymous 2017-08-14 14:54

>>65
Zero-based counting also makes sense when you're doing graphics
That is not how C counts. C counts at the end of loops instead of the beginning, so it appears the same if the loop completes, but if you exit early, it is off by one. I say ``appears'' because the C way, when counting with pointers, can actually point outside the array instead of pointing to the last element.

However, this thread is not about what's better because all of the languages I prefer let you start from 0, 1, -10 or whatever is most convenient. It's about Stack Overflow being so dumbed down that they have never seen arrays that didn't start at 0, and computer science classes teaching inferior distortions of concepts in a way that is both harder to understand and less powerful.

Name: Anonymous 2017-08-14 16:41

>>66
It's about Stack Overflow being so dumbed down
SO's quality, just like Wikipedia, is affected by its community members.
In SO's case, mostly code monkeys and mental midgets use it.

Name: Anonymous 2017-08-14 20:56

>>67
Yes, but unfortunately that is the level of education now.

Name: Anonymous 2017-08-15 12:24

>>67
Not like Wikipedia is any better. I'm aware you didn't claim otherwise, just wanted to put it on the record.

Name: Anonymous 2017-08-15 22:23

>>69
yeah, they both are terrible for the same reason.
