>>19 "The pigeonhole principle works for data we don't need."
To decompress, you would do no searching: just read which transform was used from the metadata, apply the reverse transform, then read the extracted metadata and repeat until no compression metadata is left?
Yes, and the reverse transform is much faster. The main idea of "infinite compression" is that searching over parameters and transforms eventually gives some result, letting you move a non-compressible subset into compressible data. The thing that's missed here is that the compression algorithm can be anything: it's also chosen by metadata. It would look like this:
[Compression Algorithm:Size and Metadata:[Compressed Data]] ->
[[Transform:Parameters]:Uncompressed transformed data] ->
[Original Data].
>>20 I'm aware of FASM and its macro system. I'm not using IDEs.
There are plenty of other reasons not to invest my time in asm.
Asm is non-portable and lacks any safety features.
C compilers produce very good asm output (well, not perfect).
Asm optimizations are only needed in hot spots; 99% of code doesn't get called every millisecond.
People switched to C with inline asm for a reason.
The domain of assembler is optimized libraries like ffmpeg, which need to squeeze the last cycles out of the hardware. Cudder optimizes browser functions that are called a few times per site.
Asm is also losing ground to OpenCL, CUDA and GPGPU computing: simple functions are turned into shaders and run over large datasets loaded into VRAM.