Wrong conversion of floating constants specified with hex (&H), bin (&X), or octa (&O) prefixes
When specifying something like

in the source, gfalist will print the number as zero.
Using some debug printf, I found that the representation of such a number (as raw binary data) is
800000000000fbf6
and after conversion by dgfabintoieee
3f6000000000001f
That looks strange, because dgfabintoieee does not use the high bit of the first byte (which seems to be correct when printing the number in decimal).
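For reference, a minimal sketch of the kind of debug dump used here (dump8 is a hypothetical helper, not part of gfalist):

#include <stdio.h>

/* Hypothetical debug helper: print 8 raw bytes as hex,
   as in the dumps above. */
static void dump8(const char *label, const unsigned char *p)
{
    int i;
    printf("%s: ", label);
    for (i = 0; i < 8; i++)
        printf("%02x", p[i]);
    putchar('\n');
}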
Then I looked at what the compiler generates when a double is assigned to an int. The function looks like this:

A0 on entry points to the original raw bytes. So it seems that the high bit of byte 6 is used as a sign (it is loaded into the low 16 bits of d2).
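In C terms, that check amounts to something like this (an illustration of the described layout only, not the compiler's actual code; the 68000 is big-endian, so the high bit of byte 6 is bit 15 of the 16-bit word at offset 6):

#include <stdint.h>

/* Illustrative only: form the 16-bit word at offset 6 (as a
   "move.w 6(a0),d2" would) and test its high bit. */
static int gfa_sign(const unsigned char *src)
{
    uint16_t d2 = (uint16_t)((src[6] << 8) | src[7]);
    return (d2 & 0x8000) != 0;   /* high bit of byte 6 = sign */
}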
Then I wrote this function (operating on the original raw input), which is just a conversion of the above assembler code:
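(The following is only a rough sketch of what such a dgfaieeetolong-style conversion could look like, not the actual function: it assumes the layout described above, a 48-bit big-endian mantissa in bytes 0-5 and a sign+exponent word in bytes 6-7. GFA_EXP_BIAS and the scaling are illustrative assumptions that would have to be taken from the compiler's assembler code.)

#include <stdint.h>

#define GFA_EXP_BIAS 0x401   /* assumption, not verified against GFA */

static unsigned long dgfaieeetolong(const unsigned char *src)
{
    uint64_t mant = 0;
    unsigned int expword;
    int neg, shift, i;

    for (i = 0; i < 6; i++)             /* bytes 0-5: 48-bit mantissa */
        mant = (mant << 8) | src[i];
    expword = ((unsigned int)src[6] << 8) | src[7];
    neg = (expword & 0x8000) != 0;      /* high bit of byte 6 = sign */
    /* assumption: the remaining 15 bits are a biased binary exponent
       and the mantissa is a fraction in [0.5,1), i.e. scaled by 2^48 */
    shift = 48 - ((int)(expword & 0x7fff) - GFA_EXP_BIAS);
    if (shift >= 64 || shift <= -64)
        mant = 0;                       /* out of range for this sketch */
    else if (shift >= 0)
        mant >>= shift;
    else
        mant <<= -shift;
    return neg ? (unsigned long)-(unsigned long)mant
               : (unsigned long)mant;
}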
And use it like this later:

unsigned long ul;
...
    case 219:
        src++;   /* skip filler byte at odd address */
        /* FALLTHROUGH */
    case 220:
        /* binary 8-byte double -> ASCII hex */
        *dst++ = '&';
        *dst++ = 'H';
        ul = dgfaieeetolong(src);
        src += 8;
        pushnum(dst, ul, 16);
        break;
And that seems to work properly ;)
It also looks like dgfabintoieee (which is now only used when converting to decimal or to real floats) has to be slightly adjusted. If I understand the code correctly, the actual format of doubles in GFA is:
48 bits mantissa, 1 bit sign, another 4 bits of mantissa, 7 bits exponent.
But the current code is wrong only for the least significant 4 bits of the mantissa.
PS: sorry for not providing a patch, but I had already reformatted the source.
So there are actually only 48 bits of mantissa, and a 16-bit exponent which also carries the sign (as mentioned above). That is a format that cannot be represented in an IEEE 754 format: the exponent can have values up to 999 decimal, while IEEE 754 doubles only reach about 1.8E+308.
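A quick way to see that such exponents are out of range for IEEE 754 doubles (plain standard C, independent of gfalist):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 1e999 overflows an IEEE 754 double (max is about 1.8e308),
       so strtod returns HUGE_VAL and it prints as infinity */
    double d = strtod("1e999", NULL);
    printf("%g\n", d);
    return 0;
}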