
FrozenVoid in the wild

Name: Anonymous 2017-06-08 11:07

>>16
I don't write assembler (it's hard to write, debug, and optimize: e.g. MenuetOS vs. any C microkernel).
>>17
If you have two bits of data, you won't be able to reproduce all possible states using just 1 bit.
I'm aware of this: "the limit where this scheme begins to break is when the file size becomes small enough".
>you seem to propose a mixture of efficient algorithms tailored to specific types of data
It's not limited to any one algorithm. The underlying principle is to use any function that transforms
data from the incompressible subset into the compressible subset. There is nothing magical about it, e.g.:
https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
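To make the principle concrete, here is a naive BWT sketch in C: forward transform plus inverse, showing that the transform loses nothing while grouping similar bytes together so the output compresses better. This is only an illustration (rotation sort, O(n^2 log n), demo-sized inputs, and it assumes the input is not a repetition of a shorter block); it is not the scheme itself.

/* Naive Burrows-Wheeler transform: sort all rotations, emit the last column
 * plus the index of the row that holds the original string. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const unsigned char *g_buf;   /* shared with the qsort comparator */
static size_t g_len;

/* Compare two cyclic rotations of g_buf starting at indices *a and *b. */
static int rot_cmp(const void *a, const void *b)
{
    size_t i = *(const size_t *)a, j = *(const size_t *)b;
    for (size_t k = 0; k < g_len; k++) {
        unsigned char x = g_buf[(i + k) % g_len];
        unsigned char y = g_buf[(j + k) % g_len];
        if (x != y)
            return (int)x - (int)y;
    }
    return 0;
}

/* Forward BWT: fills out[] with the last column, returns the primary index. */
static size_t bwt(const unsigned char *in, unsigned char *out, size_t n)
{
    size_t *rot = malloc(n * sizeof *rot);
    size_t primary = 0;
    for (size_t i = 0; i < n; i++)
        rot[i] = i;
    g_buf = in;
    g_len = n;
    qsort(rot, n, sizeof *rot, rot_cmp);
    for (size_t i = 0; i < n; i++) {
        out[i] = in[(rot[i] + n - 1) % n];   /* last char of rotation rot[i] */
        if (rot[i] == 0)
            primary = i;                     /* row holding the original string */
    }
    free(rot);
    return primary;
}

/* Inverse BWT: rebuilds the original string from (in, primary). */
static void unbwt(const unsigned char *in, unsigned char *out,
                  size_t n, size_t primary)
{
    size_t count[256] = {0}, smaller[256] = {0};
    size_t *rank = malloc(n * sizeof *rank);
    for (size_t i = 0; i < n; i++)
        rank[i] = count[in[i]]++;            /* occurrences of in[i] before i */
    for (int c = 1; c < 256; c++)
        smaller[c] = smaller[c - 1] + count[c - 1];  /* chars smaller than c */
    size_t p = primary;
    for (size_t j = n; j-- > 0; ) {          /* walk the string backwards */
        out[j] = in[p];
        p = rank[p] + smaller[in[p]];
    }
    free(rank);
}

int main(void)
{
    const char *msg = "abracadabra abracadabra";
    size_t n = strlen(msg);
    unsigned char fwd[64], back[64];
    size_t primary = bwt((const unsigned char *)msg, fwd, n);
    unbwt(fwd, back, n, primary);
    printf("BWT : %.*s (primary=%zu)\n", (int)n, (const char *)fwd, primary);
    printf("back: %.*s\n", (int)n, (const char *)back);
    return 0;
}

The BWT output clusters equal bytes, which a move-to-front pass plus any entropy coder can then exploit; the point here is only that the mapping is fully reversible.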
>some weird combination of lossy algorithms but with exhaustive search to find what was lost
It's lossless, not lossy: the transform is reversible.
I don't propose exhaustive search (this is often impossible); there could be a time limit.
The idea is to find a fast transform function that allows the parameter space to be searched efficiently.
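As a rough sketch of what such a time-limited search loop could look like (illustration only: the delta-transform family, the stride range, the 0.5-second budget and the zero-byte score are arbitrary stand-ins, not a real compressor):

/* Search a small parameter space of reversible transforms under a wall-clock
 * budget, scoring each candidate by a cheap compressibility proxy (count of
 * zero bytes after a delta transform with the given stride). */
#include <stdio.h>
#include <time.h>

/* Reversible delta transform: out[i] = in[i] - in[i - stride] (mod 256). */
static void delta_fwd(const unsigned char *in, unsigned char *out,
                      size_t n, size_t stride)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (unsigned char)(in[i] - (i >= stride ? in[i - stride] : 0));
}

/* Compressibility proxy: more zero bytes usually means easier to compress. */
static size_t count_zero_bytes(const unsigned char *p, size_t n)
{
    size_t z = 0;
    for (size_t i = 0; i < n; i++)
        z += (p[i] == 0);
    return z;
}

int main(void)
{
    enum { N = 1 << 16 };
    static unsigned char in[N], tmp[N];
    for (size_t i = 0; i < N; i++)
        in[i] = (unsigned char)(i / 64);     /* smooth ramp: easy target */

    double budget_sec = 0.5;                 /* the search time limit */
    clock_t start = clock();
    size_t baseline = count_zero_bytes(in, N);
    size_t best_score = baseline, best_stride = 0;

    for (size_t stride = 1; stride <= 4096; stride++) {
        if ((double)(clock() - start) / CLOCKS_PER_SEC > budget_sec)
            break;                           /* out of time: stop searching */
        delta_fwd(in, tmp, N, stride);
        size_t score = count_zero_bytes(tmp, N);
        if (score > best_score) {
            best_score = score;
            best_stride = stride;
        }
    }

    if (best_stride == 0)
        printf("no transform beat the raw data: treat input as incompressible\n");
    else
        printf("best stride %zu: zero bytes %zu -> %zu of %u\n",
               best_stride, baseline, best_score, (unsigned)N);
    return 0;
}

If nothing beats the untransformed data within the budget, it simply refuses to transform, which is also the answer to the noise question below.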
>it's basically impossible to tell the difference between 'useless' and 'useful' algorithmically
Useful data has much lower entropy than random high-entropy noise; there are statistical differences if you run statistical tests on it. True randomness is hard to get: useful content sits somewhere between easily predictable and pseudo-random.
You should try running this demo program over truly random and pseudo-random files:
you will see that truly random data doesn't lose population count easily, while the opposite is true for other types of data (their population count easily drops by 20%-25%).
https://www.reddit.com/r/frozenvoid/wiki/algorithms/data_encoding/bitflipping/simpledeltac
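For reference, a rough stand-in for that kind of measurement (this is not the linked program, just the same idea: total popcount before and after a simple byte-wise delta, on a structured buffer versus a pseudo-random one; the 20%-25% figure above refers to the linked demo, not to this sketch):

/* Compare the total population count (number of set bits) of a buffer before
 * and after a byte-wise delta transform.  Structured data tends to lose most
 * of its set bits; high-entropy data stays near 50% of the bits set. */
#include <stdio.h>
#include <stdlib.h>

static unsigned popcount8(unsigned char b)
{
    unsigned c = 0;
    while (b) { c += b & 1u; b >>= 1; }
    return c;
}

static unsigned long popcount_buf(const unsigned char *p, size_t n)
{
    unsigned long total = 0;
    for (size_t i = 0; i < n; i++)
        total += popcount8(p[i]);
    return total;
}

/* Reversible byte-wise delta: out[i] = in[i] - in[i-1] (mod 256). */
static void delta_fwd(const unsigned char *in, unsigned char *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (unsigned char)(in[i] - (i ? in[i - 1] : 0));
}

static void report(const char *label, const unsigned char *buf, size_t n,
                   unsigned char *tmp)
{
    unsigned long before = popcount_buf(buf, n);
    delta_fwd(buf, tmp, n);
    unsigned long after = popcount_buf(tmp, n);
    printf("%-12s popcount before delta: %lu, after: %lu\n",
           label, before, after);
}

int main(void)
{
    enum { N = 1 << 16 };
    static unsigned char structured[N], noisy[N], tmp[N];
    for (size_t i = 0; i < N; i++)
        structured[i] = (unsigned char)(i / 256);   /* slowly varying ramp */
    srand(12345);
    for (size_t i = 0; i < N; i++)
        noisy[i] = (unsigned char)(rand() & 0xff);  /* high-entropy filler */

    report("structured:", structured, N, tmp);
    report("noisy:", noisy, N, tmp);
    return 0;
}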



>How could your algorithm tell the difference between noise
It would simply state that the input is not compressible and refuse to produce output after the search time limit.
>it won't break the theoretical limits set by pigeonhole principle
The limits apply to all possible files, but the useful subset is very small.
E.g. of the 2^32678 possible files of a given size, only a tiny minority represent useful content:
instead of an N:N set correspondence, the high-entropy garbage is discarded, and
the N-X set of useful files suddenly becomes a viable target to transform and search.
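To put a number on "tiny minority", a toy counting example (the n = 64 and the at-most-8-ones cutoff are arbitrary choices for illustration, not a claim about real data):

/* Of all 2^64 possible 64-bit strings, count those with at most 8 set bits
 * and show how few index bits would be needed to address just that subset. */
#include <stdio.h>
#include <math.h>

/* Binomial coefficient C(n, k), exact in 64-bit arithmetic for small n. */
static unsigned long long binom(unsigned n, unsigned k)
{
    unsigned long long r = 1;
    for (unsigned i = 1; i <= k; i++)
        r = r * (n - k + i) / i;
    return r;
}

int main(void)
{
    const unsigned n = 64, max_ones = 8;
    unsigned long long useful = 0;
    for (unsigned k = 0; k <= max_ones; k++)
        useful += binom(n, k);

    printf("all %u-bit files     : 2^%u\n", n, n);
    printf("files with <=%u ones : %llu (about 2^%.1f)\n",
           max_ones, useful, log2((double)useful));
    printf("index bits needed    : %.0f instead of %u\n",
           ceil(log2((double)useful)), n);
    return 0;
}

(Compile with -lm.) The same counting works for any definition of "useful": as long as the subset is exponentially smaller than the full space, an index into it is exponentially shorter than the raw file, and the pigeonhole principle is never violated because everything outside the subset is simply rejected.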
