FrozenVoid in the wild

Name: Anonymous 2017-05-30 19:15

Name: Anonymous 2017-05-30 19:17

void.h
Every time

Name: Anonymous 2017-05-30 19:38

What happened to the infinite compression? How does it work?

Name: Anonymous 2017-05-30 19:41

Name: Anonymous 2017-05-31 4:29

>created: 265 days ago
>karma: 194

Name: Anonymous 2017-05-31 9:53

Anonymous Anonymous said...

Actually I didn't know that someone already patented this. OK, it's strange to claim that data can always be compressed by at least one bit, but I think it can be done very close to this. It must be written in the file at least how many times the file was compressed, and then at least one byte (not bit) more to decompress. I think I have a solution for the problem myself, and I'm starting to code it at the beginning of the year 2007.

I really am interested in what will be the case, if my code actually will work, regarding the not working patent that is mentioned here.

I'm sorry for errors in text, because I'm from Slovenia, and I'm not using English much.

Name: Anonymous 2017-05-31 10:09

i have been working on this problem for 7 years and i still have hope that it is possible to save information infinitely. it would have to use a pattern and a scale abcd efgh 0123 4567
there are only 256 different abcd
and exactly 4096 efgh
and the power of so there are
15 main patterns of abcd
and have not found the main patterns for efgh yet. then you have a scale 01234567
change the binary to 011010100001
011 010 100 001 0 1 2 3 4
all the way to 0 through F hex
the process can be very complicated. if you can help me any further write to mrbaker_mark@yahoo.com

Name: Anonymous 2017-05-31 11:15

FrozenVoid 5 days ago | parent [-] | on: A Soviet vision of the future: the legacy and infl...
see http://zhurnalko.net/journal-2
All retards seemingly come from Russia...

Name: Anonymous 2017-06-04 13:44

With all the hate for Hacker News, it's surprising many of you have accounts there.

Name: Anonymous 2017-06-04 13:50

>>9
Those who don't, don't post about them.

Name: Anonymous 2017-06-04 17:01

>>9
Thanks, FrozenAnus

Name: Anonymous 2017-06-05 11:38

>>11
This is not stackoverflow, downvoting me doesn't do anything.

Name: Anonymous 2017-06-07 7:32

ITT: people who have never heard of the pigeonhole principle

Name: Anonymous 2017-06-08 8:01

>>13
https://www.reddit.com/r/frozenvoid/wiki/pigeonhole_principle
The pigeonhole principle states that there is no 1:1 correspondence between a set and a strictly smaller set, so the full set of files of a given size cannot all be mapped to smaller files.

Why this doesn't rule the scheme out:
Suppose a subset of the input set is compressible.
If we only compress the compressible subset and leave incompressible inputs unmodified, the correspondence is no longer 1:1 from the whole set onto a smaller set; only the compressible subset m gets shorter outputs, and everything else maps to itself.

Now what if we could move items from the incompressible subset into the compressible subset?
Given a reversible transform X->Y->X,
we could try transforms and parameters until X lands in the compressible subset. The key idea is that with enough search space,
almost all data could be converted into a compressible form of the same size.

The cost of the transform: the main cost is the time required to search through transformation parameters. Since each transform is an operation on the entire file, the time grows proportionally to file size.
The extra metadata introduced by storing the transform is negligible until the critical size.
Critical size: when the transform-parameter data is longer than the savings gained by compression, the compression is no longer viable and the file should be returned unmodified.

So the question is: how could N inputs to the decompressor produce N+M outputs?
The question is invalid, as the majority of files will not compress at all, or would require a search space that is impossible to check before the heat death of the universe.
The idea is that the subset of compressible files grows, making the set of files of size N more compressible than a straight compression test suggests (i.e. transformed variants of the data can reach into the compressible subset).
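
As a toy illustration of the search loop (my own sketch, not a real implementation: it assumes zlib as the stock compressor and a byte-wise delta with a stride parameter as the only transform family):

/* Toy version of the parameter search: try reversible transforms until the
 * buffer lands in the "compressible subset".  zlib stands in for "the
 * compressor"; the only transform family is a byte-delta with a stride.
 * Build with: cc sketch.c -lz */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

/* Reversible transform X -> Y -> X: byte-wise delta with a given stride. */
static void delta_forward(unsigned char *buf, size_t len, size_t stride)
{
    for (size_t i = len; i-- > stride; )            /* back to front */
        buf[i] = (unsigned char)(buf[i] - buf[i - stride]);
}

static void delta_inverse(unsigned char *buf, size_t len, size_t stride)
{
    for (size_t i = stride; i < len; i++)
        buf[i] = (unsigned char)(buf[i] + buf[i - stride]);
}

/* Compressed size of buf under zlib, or (size_t)-1 on failure. */
static size_t zlib_size(const unsigned char *buf, size_t len)
{
    uLongf out_len = compressBound(len);
    unsigned char *out = malloc(out_len);
    if (!out) return (size_t)-1;
    int rc = compress(out, &out_len, buf, len);
    free(out);
    return rc == Z_OK ? (size_t)out_len : (size_t)-1;
}

/* Search the parameter space: succeed if the transformed data plus one
 * metadata byte (the stride) compresses smaller than the data as-is. */
static int search_transform(unsigned char *buf, size_t len, size_t *best_stride)
{
    size_t baseline = zlib_size(buf, len);
    if (baseline == (size_t)-1) return 0;
    for (size_t stride = 1; stride <= 8; stride++) {
        delta_forward(buf, len, stride);
        size_t csize = zlib_size(buf, len);
        delta_inverse(buf, len, stride);            /* undo: the transform is reversible */
        if (csize != (size_t)-1 && csize + 1 < baseline) {
            *best_stride = stride;
            return 1;
        }
    }
    return 0;   /* search budget exhausted: return the file unmodified */
}

int main(void)
{
    unsigned char data[4096];
    for (size_t i = 0; i < sizeof data; i++)
        data[i] = (unsigned char)(i >> 4);          /* low-entropy test data */

    size_t stride;
    if (search_transform(data, sizeof data, &stride))
        printf("transform found: delta with stride %zu\n", stride);
    else
        printf("left unmodified (no transform found)\n");
    return 0;
}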

Name: Anonymous 2017-06-08 8:58

>>14
More: most of the files in the 2^N set are random garbage with no use whatsoever.
We only need to move a small minority of useful but not directly compressible files into the easily compressible subset: the only overhead is the transformation parameters (which for large files are trivial compared to the savings). The scheme starts to break down when the file size becomes small enough that the parameter metadata is larger than the compression savings.

Name: Anonymous 2017-06-08 9:05

>>14

FV, will you help Cudder implementing compression in her browser?

Name: Anonymous 2017-06-08 9:27

>>14-15
you're rambling, FV. like, I usually have no problem understanding your posts but I can't follow the moon logic present here. the pigeonhole principle is a mathematical principle; math has no concept of 'files' or 'usefulness'. if you have two bits of data, you won't be able to reproduce all possible states using just 1 bit.

if I understood correctly, you seem to propose a mixture of efficient algorithms tailored to specific types of data (good idea, but it won't break the pigeonhole principle) with some weird combination of lossy algorithms plus exhaustive search to find what was lost. now that's a fucking horrible idea. why? even disregarding time and memory requirements, it's basically impossible to tell the difference between 'useless' and 'useful' algorithmically. you could assume that high-entropy data is useless but that's not always the case. sure, it might be garbage but maybe I've compressed a folder which, among other things, contains encryption keys, already-compressed files and, worst of all, perl code? how could your algorithm tell the difference between noise and things that just look like noise but will break if you decide they should be replaced with something more structured?

basically, as clever as your algorithm might be, it won't break the theoretical limits set by the pigeonhole principle while still being lossless.

Name: Anonymous 2017-06-08 11:07

>>16
I don't write assembler (it's hard to write, debug and optimize: e.g. MenuetOS vs any C microkernel)
>>17
>if you have two bits of data, you won't be able to reproduce all possible states using just 1 bit
I'm aware of this: "the scheme starts to break down when the file size becomes small enough"
>you seem to propose a mixture of efficient algorithms tailored to specific types of data
It's not limited to any particular algorithm. The underlying principle is to use any function that transforms
data from the incompressible subset into the compressible subset. There is nothing magical about it: e.g.
https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
>some weird combination of lossy algorithms but with exhaustive search to find what was lost
It's lossless, not lossy. The transform is reversible.
I don't propose exhaustive search (that is often impossible); there could be a time limit.
The idea is to find fast transform functions that allow the parameter space to be searched efficiently.
>it's basically impossible to tell the difference between 'useless' and 'useful' algorithmically
Useful data has much lower entropy than random high-entropy noise. There are statistical differences you can see if you run statistical functions over it. True randomness is hard to get: useful content sits somewhere between easily predictable and pseudo-random.
You should try running this demo program over truly random and pseudo-random files:
you will see that truly random data doesn't lose population count easily, while the opposite is true for other types of data (its population count easily drops by 20%-25%).
https://www.reddit.com/r/frozenvoid/wiki/algorithms/data_encoding/bitflipping/simpledeltac

>How could your algorithm tell the difference between noise
It would simply report the input as not compressible and refuse to produce output once the search time limit is hit.
>it won't break the theoretical limits set by pigeonhole principle
The limits apply to the set of all possible files. The useful subset is very small.
E.g. of the 2^32678 possible bit strings, only a tiny minority represent useful content:
instead of an N:N set correspondence, the high-entropy garbage is discarded and
the N-X set of useful files suddenly becomes a viable target to transform and search.
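
Here is a small self-contained version of that test (my own sketch, not the wiki code: it assumes GCC/Clang for __builtin_popcount and uses rand() as a stand-in for a high-entropy file):

/* Apply a byte-wise delta and compare the number of set bits before and
 * after.  Low-entropy data produces small deltas (few 1 bits); data close
 * to random keeps its population count near 50%. */
#include <stdio.h>
#include <stdlib.h>

static unsigned long popcount_buf(const unsigned char *buf, size_t len)
{
    unsigned long bits = 0;
    for (size_t i = 0; i < len; i++)
        bits += (unsigned long)__builtin_popcount(buf[i]);
    return bits;
}

/* In-place delta, back to front so it stays reversible. */
static void delta_encode(unsigned char *buf, size_t len)
{
    for (size_t i = len; i-- > 1; )
        buf[i] = (unsigned char)(buf[i] - buf[i - 1]);
}

int main(void)
{
    enum { N = 1 << 16 };
    static unsigned char structured[N], noise[N];

    for (size_t i = 0; i < N; i++) {
        structured[i] = (unsigned char)(i / 64);        /* slowly varying ramp */
        noise[i]      = (unsigned char)(rand() & 0xff); /* high-entropy stand-in */
    }

    unsigned long before_s = popcount_buf(structured, N);
    unsigned long before_n = popcount_buf(noise, N);
    delta_encode(structured, N);
    delta_encode(noise, N);
    printf("structured: %lu -> %lu set bits\n", before_s, popcount_buf(structured, N));
    printf("noise:      %lu -> %lu set bits\n", before_n, popcount_buf(noise, N));
    return 0;
}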

Name: Anonymous 2017-06-08 11:26

>>18
ah, so you want to dynamically look for the best possible transform, then embed metadata about which transform was used, and then potentially make more rounds of searching, applying and embedding as long as the size of output is lower than the size of input? then, to decompress you would do no searches but just read what transform was used from metadata, apply the reverse transform, then read extracted metadata and repeat the process until no compression metadata is present?

this could potentially work (although it would probably be extremely inefficient when it comes to compression time, and relatively inefficient with decompression), but the pigeonhole still applies.

Name: Anonymous 2017-06-08 11:27

>>18
>I don't write assembler (it's hard to write, debug and optimize: e.g. MenuetOS vs any C microkernel)
have you tried FASM's IDE (forgot its name)?

Name: Anonymous 2017-06-08 11:53

>>19 "The pigeon principle works for data we don't need. "
to decompress you would do no searches but just read what transform was used from metadata, apply the reverse transform, then read extracted metadata and repeat the process until no compression metadata is present?
Yes, the reverse transform is much faster. The main idea of "infinite compression" is that searching parameters and transforms eventually gives some results, allowing to transfer non-compressible subset to compressible data. The thing that missed here is that compression algorithm can be anything: its also chosen by metadata. It would look like this:
[Compression Algorithm:Size and Metadata:[Compressed Data]]=>
[[Transform:Parameters]:Uncompressed transformed data]->
[Original Data].
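
A minimal sketch of the decode side under this layout (the tag values, the 1-byte headers and the helper names are made up for illustration; nothing concrete is specified above):

/* Peel nested layers until no transform metadata is left.  Each layer is
 * tagged with one byte; decoding stops at a RAW tag.  No bounds checking:
 * sketch only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum layer_tag { TAG_RAW = 0, TAG_DELTA = 1, TAG_XOR = 2 };

/* Inverse of a byte-wise delta with stride 1 (a reversible transform). */
static void delta_decode(unsigned char *buf, size_t len)
{
    for (size_t i = 1; i < len; i++)
        buf[i] = (unsigned char)(buf[i] + buf[i - 1]);
}

/* Returns malloc'd plain data and sets *out_len, or NULL on error. */
static unsigned char *decode(const unsigned char *in, size_t in_len, size_t *out_len)
{
    unsigned char *buf = malloc(in_len);
    if (!buf) return NULL;
    memcpy(buf, in, in_len);
    size_t len = in_len;

    for (;;) {
        unsigned char tag = buf[0];
        memmove(buf, buf + 1, len - 1);            /* strip the 1-byte header */
        len -= 1;
        if (tag == TAG_RAW) {                      /* no more metadata: done */
            *out_len = len;
            return buf;
        } else if (tag == TAG_DELTA) {
            delta_decode(buf, len);                /* undo this layer's transform */
        } else if (tag == TAG_XOR) {
            unsigned char key = buf[0];            /* parameter byte */
            memmove(buf, buf + 1, --len);
            for (size_t i = 0; i < len; i++)
                buf[i] ^= key;
        } else {
            free(buf);
            return NULL;                           /* unknown layer */
        }
    }
}

int main(void)
{
    /* A RAW layer holding "abc", wrapped in one delta layer. */
    unsigned char enc[] = { TAG_DELTA, TAG_RAW, 'a', 'b' - 'a', 'c' - 'b' };
    size_t n;
    unsigned char *plain = decode(enc, sizeof enc, &n);
    if (plain) { printf("%.*s\n", (int)n, (char *)plain); free(plain); }
    return 0;
}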

>>20
I'm aware of FASM and its macro system. I'm not using IDEs.
There are plenty of other reasons not to invest my time in asm.
Asm is non-portable and lacks any safety features.
C compilers produce very good asm output (well, not perfect).
Asm optimizations are only required in hot spots; 99% of code doesn't get called every millisecond.
People switched to C with inline asm for a reason.

The domain of assembler is optimized libraries like ffmpeg, which need the last cycles available from the hardware. Cudder optimizes browser functions called a few times per site.
Asm is also losing ground to OpenCL, CUDA and GPGPU computing: simple functions are turned into kernels and run over large datasets loaded in VRAM.

Name: Anonymous 2017-06-08 12:37

>>21
so the compressed file could be thought of as a code for a lisp-like program which keeps generating either data or more code recursively, until we're left with just the data? sounds interesting, are you going to implement that?

also, will you please check my dubs?

Name: Anonymous 2017-06-08 12:40

>are you going to implement that?
Unfortunately i don't have time to work on these topics. I have other projects and work.

Name: Anonymous 2017-06-08 12:43

>>23
too bad, this sounds kinda neat. what are you working on?

Name: Anonymous 2017-06-09 2:03

FrozenTurd is another C shitter.

Name: Anonymous 2017-06-09 8:07

there's no such thing as infinite compression

Name: Anonymous 2017-06-10 2:06

>>26
Wrong.

Name: Anonymous 2017-06-10 11:51

>>27
Prove it.

Name: Anonymous 2017-06-10 12:22

>>28
First we create a space-time spiral using carefully aligned black holes, forming a warp vortex that wraps and spins spacetime around itself, dragging in more and more mass in a massive chain reaction culminating in the collapse of star systems, nearby galaxies and eventually the local supercluster. The universal Hubble expansion begins to slow as the warp vortex concentrates a large fraction of the mass of the universe, grabbing nearby superclusters like a kid in a candy store. The Big Crunch begins as the scaffolds of spacetime fold into a multidimensional hyper-vortex of Infinite Compression, while The Great Void[1] subsumes all that existed and lived. The Universe will eventually start anew with Infinite Decompression, if we can find the source code for the decompressor.
[1] https://en.wikipedia.org/wiki/Bo%C3%B6tes_void

Name: Anonymous 2017-06-11 1:48

>>28
1) There exists a trivial lossy compression scheme which can reduce the size of any data by X percent, where 0 < X < 100.
2) By repeatedly applying this compression scheme to its own output, arbitrarily high compression ratios may be attained.
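(Concretely, writing \(N\) for the original size and \(N_k\) for the size after \(k\) passes: each pass keeps a fraction \(1 - X/100\) of the previous size, so \(N_k = N\,(1 - X/100)^k\), which tends to 0 as \(k \to \infty\); the ratio \(N/N_k\) can therefore be made as large as you like, at the cost of throwing away essentially all of the data.)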

Name: Anonymous 2017-06-11 3:16

>>30
but the compression rate significantly reduces with each compression.
So wouldn't \(x\) reach 0 percent while \(y < \infty\)?

Name: Anonymous 2017-06-11 11:09

>>30
>1) There exists a trivial lossy compression scheme which can reduce the size of any data by X percent, where 0 < X < 100.
Prove it.

Name: Anonymous 2017-06-11 11:33

>>28

If you compress an infinite string of "jewjewjew...", then the compression ratio will approach 0%.

Name: Anonymous 2017-06-11 12:12

>>33
And if you compress it again it will equal 0%, or even go negative.

Name: Anonymous 2017-06-11 13:16

Name: Anonymous 2017-06-12 11:33

What's the Weissman score?

Name: Anonymous 2017-06-12 22:59

FrozenAnus is full of shit.
His theories are not sound at all.

Name: Anonymous 2017-06-13 13:48

How does a 2D CRC scheme work out for decoding? What about modulo > 2?

Name: Anonymous 2017-06-13 14:39

>>38
Why would anyone do 2D CRC? You'd either have to have a large buffer or a shitton of overhead.

Name: Anonymous 2017-06-13 19:05

>>39
It's very efficient for ASCII.

Name: Anonymous 2017-06-14 2:05

>>39
it could almost be used to compress 8-byte chunks by 25%


01001011 | 100
01010101 | 100
10011001 | 100
01001011 | 100
01010101 | 100
10011001 | 100
01001011 | 100
00100100 | 010
--------
01011001
10000111
01101111
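
For reference, here is one way such a block could be generated, assuming the 3-bit checks are a CRC-3 (polynomial x^3 + x + 1) over each row and each column; the table above doesn't say which check it actually uses, so its values won't necessarily match this output:

/* 8x8 bit block with a 3-bit CRC per row (printed after " | ") and a 3-bit
 * CRC per column (printed as three 8-bit rows below the separator). */
#include <stdio.h>

/* 3-bit CRC over the 8 bits of `byte`, MSB first, polynomial x^3 + x + 1. */
static unsigned crc3(unsigned byte)
{
    unsigned crc = 0;
    for (int i = 7; i >= 0; i--) {
        unsigned fb = ((crc >> 2) & 1u) ^ ((byte >> i) & 1u);
        crc = (crc << 1) & 7u;
        if (fb)
            crc ^= 3u;                    /* low bits of the polynomial: 011 */
    }
    return crc;
}

int main(void)
{
    /* The 8x8 data block from the table above. */
    unsigned char block[8] = { 0x4B, 0x55, 0x99, 0x4B, 0x55, 0x99, 0x4B, 0x24 };
    unsigned row_chk[8], col_chk[8];

    for (int r = 0; r < 8; r++)
        row_chk[r] = crc3(block[r]);

    for (int c = 0; c < 8; c++) {         /* build each column, MSB first */
        unsigned col = 0;
        for (int r = 0; r < 8; r++)
            col = (col << 1) | ((block[r] >> (7 - c)) & 1u);
        col_chk[c] = crc3(col);
    }

    for (int r = 0; r < 8; r++) {
        for (int b = 7; b >= 0; b--) putchar('0' + ((block[r] >> b) & 1));
        printf(" | ");
        for (int b = 2; b >= 0; b--) putchar('0' + ((row_chk[r] >> b) & 1));
        putchar('\n');
    }
    puts("--------");
    for (int b = 2; b >= 0; b--) {        /* three column-check rows */
        for (int c = 0; c < 8; c++) putchar('0' + ((col_chk[c] >> b) & 1));
        putchar('\n');
    }
    return 0;
}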

Name: Anonymous 2017-06-27 4:42

>>41
But would it do infinite compression??

Name: Anonymous 2017-06-27 6:29

have you ever released any useful program, frozenvoid?
or are you all talk and no code like cudder?

Name: Anonymous 2017-06-27 6:36

>>43
I don't release programs, i keep them hidden in my totally secret reddit wiki.
