
Linux is Defective by Design

Name: Anonymous 2019-02-20 15:14

A translation of an old but still accurate Russian article about Linux: https://www.ylsoftware.com/news/116

Linux is better than Windows!
- Better?
- Than Windows!
-- An old joke

To avoid misunderstandings and attempts to paint me as a provocateur, I want to say up front that I am personally far from delighted with M$ products; but the same goes for GNU/Linux distributions. That is why I decided to write this article.

The title states the topic, but it is worth spelling out exactly what is being discussed.

By POSIX I mean the standard that unifies *nix, down to every function in it. The result is a completely monolithic system API in which nothing can be changed and to which nothing can be added.

When this article talks about NT, it means NT, not win32 and MFC. NTOS was one of the first attempts to build a kernel on OOP principles. In the NT kernel an attempt was made to represent all resources as objects; this was its novelty. Moreover, the object ideology of the NT kernel was kept isolated from the application-level technologies (COM/DCOM/COM+/.NET).

The NT kernel's OO approach is far from perfect and in places clumsily implemented, but the kernel is not such a solid monolith. Say the developers decide tomorrow to build some new finite subsystem into the NT kernel; they could do it easily. For example, the developers of the NT clone ReactOS put together a BeOS support layer in 2 weeks.

The purpose of this article is not to compare the signal concepts of NT and Linux (as a bright and fast-growing representative of *nix): a detailed analysis would be extremely useful, since many would finally learn about NT signals (and yes, Linux ones too), but they deserve at least a mention here. The main goal of the article is to explain the scheduling mechanisms and to show why priority scheduling driven by events and a timer, plus an adaptive scheduling algorithm, is better than round-robin scheduling at essentially ONE priority (the *nix/Linux scheduler).

That last thought needs some clarification. Linux has 100 priorities; priorities 1-99 are real-time. Non-real-time threads/processes (the overwhelming majority of them) all run at the single priority 0 and are scheduled round-robin (RoundRobin, hello from the 80s). To bring at least some order to thread/process execution, the scheduler uses a dynamic quantum of processor time, allocated to each thread depending on its "interactivity coefficient" (nice). The greater a thread's share of processor use, the lower its "interactivity coefficient". The nice value ranges from -20 to +19.

Real-time threads are scheduled by priority in 2 modes: FIFO (the processor quantum is disabled for the thread; it is scheduled only by events) and RR (RoundRobin; the quantum is enabled and the thread is scheduled both by events and by timer). But since almost everything runs outside real time, welcome back to the mid-80s.

This, in particular, is why Helix stutters so badly in SuSE. Not even real-time mode saves it; then again, what real time is there in SuSE? Quite recently SUSE Linux Enterprise Real Time did appear, but its guaranteed response time is 27 ms, rather a lot for an RTOS, especially considering that even desktop XP manages about 40 ms.

For comparison, in NT all threads are scheduled by priority and fall into 3 groups:

real-time threads, priorities 16-31;
dynamic priorities, 4-15;
low fixed priorities, 0-3.

Real-time threads are not given quanta; they are scheduled only by events.

Threads with dynamic priorities are scheduled both by priority and by quanta. Priorities in this group are lowered relative to the base priority as the thread's processor utilization grows, and raised as its share of waiting on events/resources grows. Processor quanta in this group are directly proportional to the current priority.

In the group of fixed low priorities, scheduling is likewise based on quanta and events, but thread priorities change only on explicit instruction from the program or operator.

And do not point at the various RT distributions here, like RTLinux or LynxOS (LynxOS-178, LynxSecure): although they are positioned as RTOSes, their guaranteed reaction times are several times worse than the traditional leaders of this segment. That is, a real-time implementation is possible, but whether it is true real time is extremely debatable.

A good example of a system's flexibility is its API-translation mechanism (and, more broadly, its very capacity for one). The method is certainly dubious, but in practice it gets used :).

So it is easy to implement POSIX and all the Unix variants in the NTOS kernel. Moreover, NT is partially compatible with POSIX; more precisely, some NT-based implementations are compatible with the POSIX standard and some are not, much as XP's support for Windows 95-98 applications is partial.

Under Linux or *nix it is impossible to fully support win32: they lack the APC mechanism widely used in win32, the poor *nix signal concept is incompatible with win32, and asynchronous I/O in *nix is in practice a fiction. Given the above, win32 could be supported in Linux, for example, only by implementing a kernel inside the kernel, using Linux as a microkernel. But even then APC, the NT signaling concept, and asynchronous I/O could not be fully reproduced. It should be added that Wine, running under Linux, can never fully support win32.

In NT, by contrast, creating subsystems and translating APIs is a native mechanism. NT is an attempt to create basic tools with which you can build any finite subsystem: win32, POSIX, OS/2, and anything else that comes to mind. Moreover, the toolkit is hierarchical. The microkernel API provides the base object classes and the mechanisms with which kernel-mode drivers and the microkernel's nearest environment are developed (in NT this environment includes the object manager, the memory manager, and the I/O system). The next level is the so-called kernel executive API.

This API is used in drivers and protected subsystems; with it, the finite subsystems are built. At this level, object classes are created that inherit from the microkernel's object classes, along with entirely new classes. The object manager makes it possible to manage kernel executive objects.

As a result, one comes to the sad conclusion that POSIX is a snapshot of 1980s concepts, and the OSes that rigidly adhere to it are frozen in their development.

For example, the very concept of representing anything and everything as a file is an ideological limitation.

All *nix are built on the concept of "everything is a file", which is itself limiting and prevents the OS from evolving. The file concept itself is primitive, reduced to a single representation as a "stream of data", or "a stream of fixed-length records 1 byte in size". And Plan9, which pushed "everything is a file" to its limit, is doomed for exactly that reason.

Another limitation of *nix, and of Linux especially, is the monolithic integrity of the kernel. Because Torvalds never mastered a memory manager independent of a microkernel (he could not surpass his teacher here, and has been proving and arguing ever since that he was right), it became impossible to add a driver or system component without recompiling the kernel. That is, extending the system's capabilities simply by adding drivers or components is impossible in principle.

That is why the developers have to release a new kernel version every 3-4 months with support for a larger set of hardware: support added for one device, then another, then a third. Distribution vendors, in turn, are forced to ship kernels compiled with support for the maximum amount of hardware, even if it is never used. And if they have forgotten something, then off you go to the build shop.

With all this the kernel constantly swells, and the amount of supported hardware is still not enough.

One of the most contentious points about the technical flaws of the Linux kernel is its object orientation, or rather the complete absence thereof. The Linux kernel (and *nix in general) cannot in principle support the object ideology, which is key to developing new-generation interfaces and creating a unified distributed operating environment. One could argue: fine, take the Linux kernel and build an object ideology on top of it, as a shell. Of course you can, but no! That shell would live outside the kernel and therefore would not be protected! Besides, the X server is as bad at OOP as the Linux kernel.

So it turns out that NT contains many ideas more powerful than the solutions in POSIX.

Remembering the advice "if you criticize, propose", we move smoothly to the second part of the Marlezon ballet.

If you have not yet agreed that Linux is a dead end, reread the above again, and again, and so on until full enlightenment sets in. Without that, reading further is pointless.

The best thing the FOSS community can do is throw out the Linux kernel and get on with more useful things. Otherwise someone will come along who quite possibly does not share the values of FOSS and will tear apart everyone, both M$ and Red Hat. Linux will be forced to huddle in some very narrow niche.

What should be taken as a basis?

There is HURD. But considering that it is still based on Mach, which is deathly slow, no particular performance should be expected of it; and if it migrates to L4, that will take a long time, by which point it risks being completely outdated.

There is the exotic Plan9 and its descendant Inferno. But for the reasons already mentioned, the first is in principle no better than Linux, and the second can perhaps be used only in embedded systems (the absence of VFS as a "class" has, as is well known, never done anyone any good).

There is the even more exotic BlueBottle, but it seems few programmers will want to switch to Oberon, let alone rewrite a good portion of all existing software in it.

There is Minix. The problems are the same.

P.S. You can send your opinions on the article to troninster@gmail.com

Name: Anonymous 2019-02-20 15:40

Of course the Windows API has always been vastly superior to POSIX. That's why developers, developers, developers, developers of end user software always chose to develop for Windows rather than Lunix. Lunix's design issues will never be fixed because now JavaScript is the end user platform.

Name: Anonymous 2019-02-20 18:07

Linux is evil. Very evil.
https://raw.githubusercontent.com/tinganho/linux-kernel/master/Documentation/stable_api_nonsense.txt
>You think you want a stable kernel interface, but you really do not, and
>you don't even know it. What you want is a stable running driver, and
>you get that only if your driver is in the main kernel tree.

Name: Anonymous 2019-02-20 20:54

tl;dr

Name: Anonymous 2019-02-21 7:48

some of the things here are obsolete (Linux doesn't use a round robin scheduler by default anymore), others are bullshit (the idea that object orientation is the only way of making a maintainable kernel)

Name: Anonymous 2019-02-21 8:43

didn't read

Name: Anonymous 2019-02-21 10:02

>>4
>>6
>>5
get mad, linux scum

Name: Anonymous 2019-02-21 10:19

>>7
make your're are game and optimize your're are quotes

Name: Anonymous 2019-02-21 10:41

From what I can ascertain from FOSS forums, the best argument against a stable API is that stable APIs lead to binary blobs and black-box software relying on them.
The other core argument, from LKML, is that maintaining backward compatibility is an undue performance hit and requires more effort from kernel developers.

Name: Anonymous 2019-02-21 10:47

http://lkml.iu.edu/hypermail/linux/kernel/1604.0/00998.html Here is an attempt to argue for stable APIs; as you can see, it doesn't end well.

Name: Anonymous 2019-02-21 10:47

>>9
Unstable API: no binary blobs, and everything is forced to be open source. But the API can also end up badly designed, with no thought or comment from hardware vendors.
Stable API: wide hardware support and backward compatibility. Windows 10 still supports your SoundBlaster from the 90s, even if at some loss of performance or stability.

Do you believe end users care about this open-source purism at the expense of support for their hardware?

Name: Anonymous 2019-02-21 10:48

textboards have stable API for dubs

Name: Anonymous 2019-02-21 10:49

>>11
Also, it makes no sense to have an open-source driver when the hardware itself is closed source. You also need open hardware.

Name: Anonymous 2019-02-21 10:49

>>12
You fail :D Guess your're are Linux system is too slow to send dubs properly.

Name: Anonymous 2019-02-21 11:05

>>13
The kernel developers openly call "commercial companies" leeches and think the kernel is the central part of the computer, not the hardware it runs on. In reality, that is why Android won the mobile market: it wasn't some magical "Google marketing", it was the stability of the kernel interface.
https://source.android.com/devices/architecture/vndk/abi-stability

Name: Anonymous 2019-02-21 11:07

>>10
They basically gulaged him. Criticizing Linux on LKML is like criticizing Stalin in the USSR.

Name: Anonymous 2019-03-01 10:14

The kernel report 2019:
https://www.youtube.com/watch?v=yt29BKVfI0I

Nothing has changed.

Name: Anonymous 2019-03-01 13:12

>>17
>kernel report
Boring as fuck; could be summarized in one paragraph: Meltdown/Spectre was fully patched, kernel devs whining about maintaining backward compatibility with ancient stuff, BPF has grown into a generic VM, Android is bad, the 5.0 scheduler is now aware of CPU wattage costs (pls upgrade your phones), kvetching that the Android version situation is starting to resemble distros (we need one single kernel), and the code of conduct is an integral part of Linux.

Name: Anonymous 2019-03-01 13:48

>>18
>Meltdown/Spectre was fully patched
-10% to performance

Gamers generally disable the fix on Windows 10. There is no way to disable it on Linux.
