
Python is the most popular language

Name: Anonymous 2015-03-09 15:50

Name: Anonymous 2015-03-09 15:52

It's the most popular coding language, not the most popular programming language.

Name: Anonymous 2015-03-09 15:56

Name: Anonymous 2015-03-09 16:02

>>3
Bullshit what, you idiot? According to that ranking, the winners are still Java and Javashit.

Name: Anonymous 2015-03-09 16:09

Billions of flies cannot be wrong - feces is the best food. [/thread]

Name: Anonymous 2015-03-09 16:19

Wait-wait-wait, where are Common Lisp and Scheme in there?

Name: Anonymous 2015-03-09 16:20

>>4
It's neither the most popular coding language nor the most popular programming language.

Name: Anonymous 2015-03-09 16:22

>>6
see >>3

Name: Anonymous 2015-03-09 17:41

You're all wrong, Mandarin is the most popular language.

Name: Anonymous 2015-03-09 17:50

>>9
Mandarin is like Assembler with a really extreme CISC architecture with tons of prefixes and suffixes.

Name: Anonymous 2015-03-09 17:55

>>10
It's also encrypted. Also, check `em.

Name: Anonymous 2015-03-09 18:44

>>2
Sure, it's the most popular coding language. Javashit is the most popular apping language.

Name: Anonymous 2015-03-09 19:33

>>2,12
Actually, I'd say that Java is the most popular coding language. At the very least, Python doesn't go out of its way to prevent you from doing anything neat with it. Java was designed with one idea in mind: there is no problem that cannot be solved by adding more classes. And it has been enormously successful and popular at that.

Name: Anonymous 2015-03-09 23:49

CSV, XML, and JSON are the most popular coding languages.

Name: Anonymous 2015-03-10 6:43

>>9
If Chinese didn't exist (along with kanji/hanja), Unicode could be much simpler and fit in a fixed 2 bytes. Instead we have a chthonic monstrosity of variable-width encodings, megabyte fonts, and 4-byte chars.

Name: Anonymous 2015-03-10 6:49

>>15
I intentionally never add any Unicode support; it simplifies tons of stuff. Either use English or stop using computers.

Name: Anonymous 2015-03-10 10:50

>>15
If we used fixed 2-byte encodings almost all text would still take up more space than it does currently.
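>>17's claim is easy to check empirically. A quick sketch in Python, comparing UTF-8 against a fixed 2-byte encoding (UTF-16-LE stands in for one here; the sample strings are arbitrary examples of my own):

```python
# Compare storage cost of UTF-8 vs a fixed 2-byte-per-char encoding
# for mostly-ASCII text versus CJK text.
ascii_text = "The quick brown fox jumps over the lazy dog."
cjk_text = "日本語のテキスト"

for label, s in [("ASCII", ascii_text), ("CJK", cjk_text)]:
    utf8 = len(s.encode("utf-8"))
    utf16 = len(s.encode("utf-16-le"))  # fixed 2 bytes per BMP char
    print(f"{label}: utf-8={utf8} bytes, fixed 2-byte={utf16} bytes")
```

For the ASCII sentence the fixed encoding doubles the size (88 vs 44 bytes); only for the CJK string does it win (16 vs 24 bytes), which is >>17's point: most text in the wild is ASCII-heavy markup and source code.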

Name: Anonymous 2015-03-10 14:50

>>16
Unless you are writing fonts, that doesn't make sense, and even then only barely. You can handle most of UTF-8 with very simple algorithms.
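To illustrate the "very simple algorithms" claim: a minimal UTF-8 decoder really is just a handful of bit masks. A sketch (no validation of malformed sequences, overlong forms, or surrogates — a real decoder needs those checks):

```python
# Minimal UTF-8 decoder sketch: the core algorithm is a few bit masks.
def utf8_decode(data: bytes) -> list[int]:
    """Decode a UTF-8 byte string into a list of code points."""
    points, i = [], 0
    while i < len(data):
        b = data[i]
        if b < 0x80:                  # 0xxxxxxx: 1-byte (ASCII)
            cp, extra = b, 0
        elif b >> 5 == 0b110:         # 110xxxxx: 2-byte sequence
            cp, extra = b & 0x1F, 1
        elif b >> 4 == 0b1110:        # 1110xxxx: 3-byte sequence
            cp, extra = b & 0x0F, 2
        else:                         # 11110xxx: 4-byte sequence
            cp, extra = b & 0x07, 3
        for j in range(1, extra + 1):
            # each continuation byte (10xxxxxx) contributes 6 bits
            cp = (cp << 6) | (data[i + j] & 0x3F)
        points.append(cp)
        i += extra + 1
    return points
```

For example, `utf8_decode("€".encode("utf-8"))` returns `[0x20AC]`.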

Name: Anonymous 2015-03-10 14:55

18 get

Name: Anonymous 2015-03-10 15:48

>>18
Chars are bytes, and operations on them are much faster and simpler (plus they take 2-6x less space). Library abstractions hide Unicode complexity and cruft behind simple interfaces.

Name: Anonymous 2015-03-10 15:57

>>18
never benchmarked UTF-8 functions for memory or speed
thinks a "simple" interface means it's fast and simple inside
thinks Unicode == font graphics

Name: Anonymous 2015-03-10 16:12

>>18
You can handle most of UTF-8 with very simple algorithms.
libiconv-1.14.tar.gz 4,984,397 bytes

Name: Anonymous 2015-03-10 16:21

>>22
libiconv handles more than UTF-8; it has full Unicode support and many local encodings.

Name: Anonymous 2015-03-10 16:30

>>23
That's the point: all this translation/conversion/rendering cruft exists because it's non-English text. By "not supporting Unicode" I don't support bloat and inefficient abstractions (such as UTF-8's variable-width byte encoding), which are always inferior in speed and memory to plain ASCII.

Name: Anonymous 2015-03-10 16:43

>>24
© Ben Garrison Software Foundation

Name: Anonymous 2015-03-10 17:51

>>25
Whom are you quoting?

Name: Anonymous 2015-03-10 18:48

>>26
Ben "if it ain't byte, it ain't right" Garrison

Name: Anonymous 2015-03-10 18:53

I absolutely adore the multitude of mutually incompatible byte encodings that exist for various versions of Cyrillic, Greek, Turkish, and even Western European. Not to mention our precious Japanese, which is either UTF-8 or Shift_JIS, both of which are multibyte.

Name: Anonymous 2015-03-10 18:54

>>24
UTF-8 is not an abstraction. Unicode is an abstraction. UTF-8 is not. Therefore you are retarded.

Name: Anonymous 2015-03-10 18:57

>>28
our precious Japanese

Speak for yourself. Japanese is shit. Not even intelligent enough to use an alphabet like the Koreans.

Name: Anonymous 2015-03-10 19:06

>>30 The fag doesn't even know about kana, lol.

Name: Anonymous 2015-03-10 19:09

>>31
Oh look at me, I'm writing in the smilie-face language!

Name: Dubsmon No.1 Fan 2015-03-10 19:14

get

Name: Anonymous 2015-03-10 19:18

>>29
Abstraction is abstraction.
ASCII is an abstraction of Latin character ranges; it's just a really thin one that maps to a single byte. UTF-8 (and its 7-bit cousin UTF-7) represents abstract Unicode code point ranges as a variable-width byte stream with complex rules.
http://en.wikipedia.org/wiki/UTF-8#Examples
Consider the encoding of the Euro sign, €.

The Unicode code point for "€" is U+20AC.
According to the scheme table above, this will take three bytes to encode, since it is between U+0800 and U+FFFF.
Hexadecimal 20AC is binary 0010 0000 1010 1100. The two leading zeros are added because, as the scheme table shows, a three-byte encoding needs exactly sixteen bits from the code point.
Because the encoding will be three bytes long, its leading byte starts with three 1s, then a 0 (1110...)
The remaining 4 bits of this byte are taken from the start of the code point (1110 0010), leaving 12 bits of the code point yet to be encoded (...0000 1010 1100).
The remaining 12 bits are cut in half, and 10 is added to the start of each of the 6-bit blocks to make two 8-bit bytes. (so 1000 0010, then 1010 1100).
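The quoted walkthrough can be checked mechanically. A sketch in Python that follows the same bit layout step by step:

```python
# Reproduce the walkthrough for "€" (U+20AC), which needs 3 bytes
# because it falls between U+0800 and U+FFFF.
cp = 0x20AC
byte1 = 0b11100000 | (cp >> 12)          # leading byte: 1110 + top 4 bits
byte2 = 0b10000000 | ((cp >> 6) & 0x3F)  # continuation: 10 + next 6 bits
byte3 = 0b10000000 | (cp & 0x3F)         # continuation: 10 + last 6 bits
encoded = bytes([byte1, byte2, byte3])
print(encoded.hex())                     # e282ac
print(encoded == "€".encode("utf-8"))    # True
```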

Name: Anonymous 2015-03-10 19:32

>>34
That's the exact opposite of an abstraction. IHBT

Name: Anonymous 2015-03-10 19:54

>>35 http://en.wikipedia.org/wiki/Abstraction_%28computer_science%29

In computer science, abstraction is a technique for managing complexity of computer systems. It works by establishing a level of complexity on which a person interacts with the system, suppressing the more complex details below the current level.
The programmer works with an idealized interface (usually well defined) and can add additional levels of functionality that would otherwise be too complex to handle. For example, a programmer writing code that involves numerical operations may not be interested in the way numbers are represented in the underlying hardware (e.g. whether they're 16-bit or 32-bit integers), and where those details have been suppressed it can be said that they were abstracted away, leaving simply numbers with which the programmer can work.

What is UTF-8 doing?
1. Manages complexity of Unicode representation
2. Hides complex parts of Unicode processing
3. Provides an interface from Unicode to strings
4. "Programmers" see string functions and text
5. Replace "numeric"/"numbers" in the wiki paragraph above with "text characters" if you still don't get it
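The layering in that list is easy to see in practice: the programmer works with code points while the encoding layer works with bytes. A sketch (the sample string is an arbitrary example of my own):

```python
# "Programmers see strings, the library sees bytes" in miniature.
text = "naïve café €5"
raw = text.encode("utf-8")          # the library hides the variable-width details

assert len(text) == 13              # code points the programmer works with
assert len(raw) == 17               # bytes the encoding layer actually stores
assert raw.decode("utf-8") == text  # round-trip through the abstraction
```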

Name: Anonymous 2015-03-10 20:54

I don't even know why I'm responding to an obvious idiot/troll.
Your list applies to an encoding library, not to UTF-8. UTF-8 itself does not do anything you have listed.

Name: Anonymous 2015-03-11 4:24

>>37
Your list applies to an encoding programming library
UTF-8 itself
So you read/write UTF-8 without the library, manually by performing mental calculations?
Oh wait, that would also create a UTF-8 library in your brain.
Is there even a world where a perfect platonic form of UTF-8 exists and is usable without code, bytes, and physical computers?

Name: Anonymous 2015-03-11 8:48

Abstraction exists only in the mind as an (empty) placeholder template for a real object: a skeleton form (structure) filled by data (information). Without concrete objects and data it's an idea (fantasy) of what one wants to accomplish (desire). We transform the idea into pseudocode and specifications, guided by the abstractions towards implementations of concrete programs and libraries, operating on the assumption that the abstraction is a perfect form (it's not; it's just the creator's ideas turned into http://en.wikipedia.org/wiki/Reification_%28fallacy%29 ). Bugs and quirks are blamed on the implementation rather than the "properly implemented abstraction", like the "sufficiently smart compiler" and "reasonable normal people" writing "pseudocode that maps to reality". So the abstraction layer pileup starts.
0. Unicode is treated as a concrete static authority holding the correct placement of languages and symbols, which isn't reality.
1. UTF-8 encoding as an idea: "efficient storage of Unicode characters".
2. UTF-8 as standard convention: UTF-8 is defined as a "variable-width stream of bytes convertible to Unicode code points".
3. UTF-8 interface: a UTF-8 library interface (typically handling all types of Unicode).
4. UTF-8 code: concrete code which converts, parses, or manipulates UTF-8 strings.
5. UTF-8 program: builds on #4 to process UTF-8 encoded text.
6. UTF-8 programmer: writes programs blissfully unaware of the abstractions below.
7. Until the program gets a dose of reality: text reversed in random places because of RTL direction, weird punctuation stacked above letters (à la Zalgo text), letters bunched up into clusters, and string functions suddenly corrupting text.
8. The "programmer" is forced to look inside the abstraction and discover that its "perfect form" is quite unlike the idea.
9. The inevitable conclusion is that all abstraction is merely illusion, and hiding information is security by obscurity.

Name: Anonymous 2015-03-11 15:29

>>38
I can read some limited subsets of Unicode encoded as UTF-8 and displayed as hex, what's the problem? Your notion that an encoding standard AND EACH AND EVERY LIBRARY THAT READS AND/OR WRITES IT ARE THE FUCKING VERY SAME THING is fucking WRONG. THEY ARE NOT. Fucking retard.
