
RLE Blitting

Name: Anonymous 2019-08-17 15:20

Ok. Transitioning to RLE blitting hasn't improved the performance that much — just a 20% speedup — but code complexity greatly increased. One thing I noticed while measuring performance (for both RLE and non-RLE code) was that at times my static code completed twice as fast, which should be impossible, because the test used only static data (a sprite blitted a million times in a loop), with the only variable being branch prediction, and I have 0% CPU load, and it makes no syscalls inside the measured code. What does that even mean? Branch misprediction does affect performance, but not by a factor of two in the long run, because the predictor would be retrained by the thousandth iteration.

Is it broken scheduling, or OSX intentionally slowing down the code? Or maybe the Intel CPU itself does that? My MacBook is relatively old, so if it had any time bomb, it would have been activated by now. Or maybe it's the infamous Meltdown fix slowing my code down two times? How does one disable the Meltdown patch? For Linux there is https://make-linux-fast-again.com/, but what about OSX? I don't care about security — it is overrated.

Name: Anonymous 2019-08-28 7:28

>>40
hax my anus

Name: Anonymous 2019-08-28 13:08

fuck my anus

Name: Anonymous 2019-08-29 20:58

>>42
You wish

Name: Anonymous 2019-09-01 2:35

Pork my pignus

Name: Anonymous 2019-09-02 13:23

>>44
That's all, Folks!

Name: Anonymous 2019-09-02 21:07

I was looking for an efficient alternative to mutexes, one that doesn't put the thread to sleep, but found none, and generally people recommend nonsense like refactoring and decoupling the code (wasting a lot of time for nothing) instead of inserting multithreading as a cheap speedup into the existing codebase. When I suggested the superior alternative of just doing while(!signaled); I immediately got downvoted:
https://stackoverflow.com/questions/6460542/performance-of-pthread-mutex-lock-unlock/57749968#57749968

Typically every good answer that offers a simple solution gets heavily downvoted for not being "good practice". E.g. you propose a key-value database as an alternative to SQL, but instead of good argumentation against KV DBs, you will hear autistic screeching about how SQL was the product of much experience and several PhD papers, and therefore everyone should follow SQL's teaching like it is some holy Quran. Well, you know what? Haskell also grew out of experience of how bad side effects are, and helped produce PhDs too, but you won't be using Haskell in any non-toy project.

Generally there are two kinds of programmers:
1. Programmers who write actual code, which solves the problem.
2. The retards who, instead of code, write unit tests with getters/setters the whole time, because it is the "good practice" recommended by some iconic bible by some deranged lunatic like Bjarne Stroustrup. These same programming Nazis will scold you for doing "#define PI" instead of "const double pi", for using an indentation style they dislike (I find it useful to put `;` and `,` before statements), or for not prefixing member variable names with "m_".

Ideally there should be some IQ test, so all such autistic retards could be identified and sent to a country designed for people with special needs, like the villages they have for blind people. Edited on 02/09/2019 21:30.

Name: Anonymous 2019-09-02 23:04

>>46
6/10; nice to read. Highlights are the edited line at the bottom and the criticism of getters and setters, which is indeed correct: accessors/mutators are just retarded, or meant for stupid languages without enough features to provide for that if ever necessary. But the ; and , before statements is just too obvious.

Name: Anonymous 2019-09-02 23:17

>Ideally there should be some IQ test, so all such autistic retards could be identified and sent to a country designed for people with special needs. Like they have villages for blind people.
Is this your goal for Russia once your plans to destroy it succeed?

Name: Anonymous 2019-09-03 13:25

Sending all special needs people to special needs cities would make life simpler though.

Name: Anonymous 2019-09-04 7:01

>>46
busy waiting is slow as fuck compared to waiting on a mutex though. you're hogging a core with constant checks of the volatile variable, negating most of what you gain by spawning a thread. an alternative that would be faster and avoid most of the mutex-related overhead (but would be fairly risky when it comes to race conditions/deadlocks/livelocks) would be a busy wait coupled with a usleep().

Name: Anonymous 2019-09-04 7:54

It's safer to just do something like:

while (owned)
    usleep(100);
owned = 1;
...
owned = 0;


There. No PhDs or anything.

Name: Anonymous 2019-09-04 7:55

>>50
>busy waiting is slow as fuck
Unless the threads are tightly coupled and each core is always at 100% use anyway. E.g. one core writes 100 samples into a buffer, and another core immediately applies some effect to them. Using a mutex there would add a huge slowdown, so it would make no sense to use two cores at all.

Just make sure you have enough free cores for that, possibly nicing your process to -10. That is enormously bad practice, but it works.

Name: Anonymous 2019-09-04 7:57

>>51
In my case you'd need nanosleep, not usleep.

Name: Anonymous 2019-09-04 8:01

>>53
E.g. I have 1,000,000 little sprites, like pixel-sized particles: one thread asks to draw a sprite, another immediately starts drawing, so any amount of sleep would kill the performance, because one sprite can be drawn in a microsecond. The "good practice" would be refactoring the code to batch requests into a larger queue, but refactoring is one of those autistic practices that doesn't add any new feature, besides making the code more bloated and complex.

Name: Anonymous 2019-09-04 9:10

>>52
>Just make sure you have enough free cores for that
download moar coars

Name: Anonymous 2019-09-04 9:11

nanosleep doesn't work as intended unless the kernel is realtime. Coarse-grained multithreading has to deal with random delays of several milliseconds at least.

Name: Anonymous 2019-09-04 9:16

Just run graphics as a single separate thread (this approach also works for audio, input, etc., for seamless low-latency feedback). If your game needs multi-threaded graphics, it's overengineered BS that should be replaced with a game engine.

Name: Anonymous 2019-09-04 9:22

>>57
his game is a turn-based strategy game with sprite-based graphics; it could work without noticeable hiccups on a single thread. he's just bikeshedding performance to avoid doing any real work, and to have something to complain about

Name: Anonymous 2019-09-04 10:13

>>58
I'm using the game as an opportunity to learn various threading nuances. And the more FPS I have, the more effects I can do without using OpenGL. Ideally I want software trilinear sampling to scale the view, but that is a lot of work.

Name: Anonymous 2019-09-04 10:18

>>57
A decoupled thread still needs to communicate with the part issuing draw orders. So the choice is between refactoring to establish a large queue of draw requests with normal mutexes for communication, or keeping the existing codebase and just doing the while(locked); busy loop. Obviously smartphone users won't thank me for discharging their battery, but who cares about users today?

Name: Anonymous 2019-09-04 10:21

>>56
Well, one could spin for 1,000,000 iterations and, if there is still no request, start sleeping for increasingly larger periods. But that is overengineering for a small indie game, which doesn't have to be nice to the rest of the software running inside the OS.
