
A new floating point format

Name: Anonymous 2017-07-02 5:54

Can express numbers from ~ 2^(2^(10^-308)) to 2^(2^(10^308))
https://www.reddit.com/r/frozenvoid/wiki/algorithms/data_encoding/logfloat

Name: Anonymous 2017-07-02 6:50

To understand logfloat, first look at dualfloat, a simpler format:
https://www.reddit.com/r/frozenvoid/wiki/algorithms/data_encoding/dualfloat
A dualfloat is basically a floating point number with a floating point exponent.
A conventional float32 is mantissa*2^exp, with an integer exp.
A dualfloat uses mantissa*2^(float32), i.e. mantissa * 2^(float_mantissa*2^float_exp),
essentially using a float32 (reach ~±3.4*10^38) as the exponent part, giving an enormous exponent range.
LogFloat is the idea that a floating point number can represent a logarithm with a
fixed base, in this case 2, which allows operations to be performed on logarithms instead of values.
A logFloat is 2^logarithm, where the logarithm is a dualfloat number
and the base (2=logbase) is implied, i.e. 2^dualfloat.
In a 256-bit representation this gives a range of ~2^(mantissa*2^(10^-308)) to 2^(mantissa*2^(10^308)).

Name: Anonymous 2017-07-02 9:10

Ok, but why would you want that?

Name: Anonymous 2017-07-02 10:50

>>3
You can't express floats whose exponent is larger than what a 64-bit or 128-bit integer can hold.
All arbitrary precision libraries have a limit on exponent size.
They have huge mantissas of thousands of bits, but tiny exponents (in comparison to dualfloat).
The dualfloat is the reverse: the mantissa is tiny, but the exponent range is huge.
2^31 (int32 max) < 2^127 (float32 range, ~3.4*10^38)
2^63 (int64 max) < 2^1023 (float64 range, ~1.8*10^308)

Name: Anonymous 2017-07-02 11:21

>>3
Let's take an example:
you want to calculate 9^(9^9). 9^387420489 is not expressible as a floating point number.
With logfloat or dualfloat:
log2(9^387420489) = 387420489*log2(9) = 3.169925001*387420489 ≈ 1228093893.980745489,
so the value is 2^1228093893.980745489, which is not expressible as a float either
(only arbitrary precision floats can handle an exponent that large).
But dualfloat can handle it as a VALUE:
the needed exponent 1.228093893980745489e+9 fits easily in a float32,
whose range ~3.40*10^38 is about 10^29 times larger.
[128-bit dualfloat using a 32-bit float exponent]

Name: Anonymous 2017-07-02 12:12

Frozenvoid, I might take you more seriously if you at least learn to format your code better. Not even your reddit pages have good formatting; it's like you randomly strip spaces and carriage returns.

Name: Anonymous 2017-07-02 15:05

This can't even represent 3 without rounding errors.

Name: Conjurer 2017-07-02 15:20

I summon cdr.

Name: Anonymous 2017-07-02 15:35

How easily can it be optimised and implemented in ASM? How does it compare in speed with IEEE floating points? How about its accuracy for the integer set? How easily can it be extended for more bits and how easy would a bignum implementation of it be?
What happened to Sugin?

Name: Anonymous 2017-07-02 16:19

>>9
>How easily can it be optimised and implemented in ASM? How does it compare in speed with IEEE floating points?
For Frozenvoid, optimization and performance are an afterthought. If it works, who cares!

>How about its accuracy for the integer set?
See >>7.

>What happened to Sugin?
Posted here for a little bit, then went back to /jp/.

Name: Anonymous 2017-07-02 18:48

>>10
If you see Sugin please tell him to come back, I miss him.

Name: Anonymous 2017-07-03 9:06

>>10
I've just published the idea. It's not even a real spec.
It's just that you can have floating point exponents instead of integer ones.
>>7
Actually, precision rises with a lower float_exp.
It's only when you can't represent the exponent as an integer (i.e. where double/float start to skip integers)
that the precision starts to take a hit, but unlike a plain float you can adjust the mantissa.
Adjusting the mantissa adds extra precision, but takes some computation cost.

Name: Anonymous 2017-07-03 9:10

>>12
>I've just published the idea. It's not even a real spec.
Well that's fine, but what about its application/use and performance in theory?

Name: Anonymous 2017-07-03 9:14

>>13
Replacing arbitrary precision floating point for imprecise computation:
a low precision float with a huge exponent range (even larger than arbitrary precision floats).
Performance will likely be a few times slower than GCC quadmath (itself several orders of magnitude slower than long doubles).

Name: Anonymous 2017-07-03 10:05

>>13
The evolution of this concept can be explained as:
1. We have a float64 with int_mantissa:int_exponent.
2. Dualfloat makes it int_mantissa:float_exponent.
3. Logfloat stores log2(value) as a dualfloat, in the same format: value = 2^logfloat.

Name: Anonymous 2017-07-03 15:24

>>12
log2(3) is transcendental while all dualfloats are algebraic. No matter how low you set the exponent, logfloats cannot encode 3.

Name: Anonymous 2017-07-03 15:37

>>16
1.0000000086*(e^1.09861228)=

Name: Anonymous 2017-07-03 17:26

>>17
Not 3.

Name: Anonymous 2017-07-03 19:37

Name: Anonymous 2017-07-03 19:51

>>19
2.9999999997956708148812233279372852911414544125698460
Yeah, ok.

Name: Anonymous 2017-07-04 3:43

>>20
The point is that at double precision the result appears as 3.
Paste it into a calculator and you will see.

Name: Anonymous 2017-07-04 3:53

Name: Anonymous 2017-07-04 3:55

Name: Anonymous 2017-07-04 4:42

#include <math.h>
#include <stdio.h>
#include <quadmath.h>

typedef __float128 f128;

static void quadprint(f128 num) {
    char out[200];
    quadmath_snprintf(out, sizeof(out), "%.48Qe", num);
    puts(out);
}

int main() {
    f128 x = expq(1.098612288668109691395245236922525803419160119968Q);
    f128 y = 3.00000000000000000000000000000000000000000000000000Q;
    quadprint(x);
    quadprint(logq(y));
    return 0;
}
3.000000000000000000000000000000000385185988877447e+00
1.098612288668109691395245236922525803419160119968e+00
