
Wrong conversion of floating constants specified with hex (&H), bin (&X), or octal (&O) prefixes #10

th-otto opened this issue Oct 9, 2022 · 1 comment

th-otto commented Oct 9, 2022

When specifying something like

a#=&HFFFFF800

in the source, gfalist will print the number as zero.

Using some debug printf, I found that the representation of such a number is (as raw binary data)
800000000000fbf6
and after conversion by dgfabintoieee
3f6000000000001f

That looks strange, because dgfabintoieee does not use the high bit of the first byte (which seems to be correct when printing the number in decimal).
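
For reference, such a dump needs nothing more than printing the 8 raw bytes as hex; a minimal sketch (dump_raw and src are placeholder names, not taken from the gfalist source):

#include <stdio.h>

/* minimal sketch of a debug dump; "src" is assumed to point at the
   8 raw bytes of the floating point constant */
static void dump_raw(const unsigned char *src)
{
	int i;

	for (i = 0; i < 8; i++)
		printf("%02x", src[i]);
	printf("\n");
}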

Then I looked at what the compiler generates when a double is assigned to an int. The function looks like this:

VFFTOI:
[000100a8] 2018                      move.l     (a0)+,d0
[000100aa] 2418                      move.l     (a0)+,d2
FFTOI:
[000100ac] 48c2                      ext.l      d2
[000100ae] 6b16                      bmi.s      $000100C6
[000100b0] 0442 03ff                 subi.w     #$03FF,d2
[000100b4] 6b0c                      bmi.s      $000100C2
[000100b6] 0442 001f                 subi.w     #$001F,d2
[000100ba] 6a2a                      bpl.s      $000100E6
[000100bc] 4442                      neg.w      d2
[000100be] e4a8                      lsr.l      d2,d0
[000100c0] 4e75                      rts
[000100c2] 7000                      moveq.l    #0,d0
[000100c4] 4e75                      rts
[000100c6] 4442                      neg.w      d2
[000100c8] 0442 03ff                 subi.w     #$03FF,d2
[000100cc] 6bf4                      bmi.s      $000100C2
[000100ce] 0442 001f                 subi.w     #$001F,d2
[000100d2] 6a08                      bpl.s      $000100DC
[000100d4] 4442                      neg.w      d2
[000100d6] e4a8                      lsr.l      d2,d0
[000100d8] 4480                      neg.l      d0
[000100da] 4e75                      rts
[000100dc] 6608                      bne.s      $000100E6
[000100de] 0c80 8000 0000            cmpi.l     #$80000000,d0
[000100e4] 67f4                      beq.s      $000100DA
[000100e6] 7002                      moveq.l    #2,d0
[000100e8] 6000 0018                 bra.w      ERROR

A0 on entry points to the original raw bytes. So it seems that the high bit of byte 6 is used as a sign (it will be loaded into the low 16 bits of d2).

Then I wrote this function (operating on the original raw input), which is just a translation of the above assembler code:

static unsigned long dgfaieeetolong(const unsigned char *src)
{
	/* bytes 6-7 hold the exponent word (which also carries the sign),
	   bytes 0-3 the top 32 bits of the mantissa */
	int16_t exp = (src[6] << 8) | src[7];
	uint32_t mant = ((uint32_t)src[0] << 24) | ((uint32_t)src[1] << 16) | ((uint32_t)src[2] << 8) | ((uint32_t)src[3]);

	if (exp & 0x8000)
	{
		/* negative value: the exponent word is stored negated */
		exp = -exp;
		exp -= 0x3ff;	/* remove bias */
		if (exp < 0)
			return 0;	/* magnitude less than 1 */
		exp -= 31;
		if (exp >= 0)
		{
			if (exp != 0 || mant != 0x80000000UL)
			{
				/* would cause runtime error 2 in interpreter */
				return 0xffffffffUL;
			}
			return mant;	/* special case -2^31 */
		}
		exp = -exp;
		mant >>= exp;	/* shift mantissa down to the integer value */
		mant = -mant;
	} else
	{
		exp -= 0x3ff;	/* remove bias */
		if (exp < 0)
			return 0;	/* value less than 1 */
		exp -= 31;
		if (exp >= 0)
		{
			/* would cause runtime error 2 in interpreter */
			return 0xffffffffUL;
		}
		exp = -exp;
		mant >>= exp;	/* shift mantissa down to the integer value */
	}
	return mant;
}

And it is used like this later:

                unsigned long ul;
...
		case 219:
			src++; /* skip filler byte at odd address */
			/* FALLTHROUGH */
		case 220:
			/* binary 8-byte double -> ASCII hex */
			*dst++ = '&';
			*dst++ = 'H';
			ul = dgfaieeetolong(src);
			src += 8;
			pushnum(dst, ul, 16);
			break;

And that seems to work properly ;)
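
As a quick sanity check, feeding the raw bytes dumped above into the function gives back the original constant (throwaway test harness, assuming dgfaieeetolong from above is in the same file):

#include <stdio.h>

int main(void)
{
	/* raw GFA representation of a# = &HFFFFF800, as dumped above */
	static const unsigned char raw[8] = {
		0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0xfb, 0xf6
	};

	printf("&H%lX\n", dgfaieeetolong(raw));	/* prints &HFFFFF800 */
	return 0;
}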

It also looks like dgfabintoieee (which is now only used when converting to decimal or real floats) has to be slightly adjusted. If I understand the code correctly, the actual format of doubles in GFA is:
48 bits mantissa, 1 bit sign, another 4 bits of mantissa, 7 bits exponent.

But this currently goes wrong only for the least significant 4 bits of the mantissa.

PS: sorry for not providing a patch, but I had already reformatted the source.


th-otto commented Oct 10, 2022

Some more info: long ago someone already managed to decode the format:

http://www.tho-otto.de/hypview/hypview.cgi?url=%2Fhyp%2Fgfabasic.hyp&charset=UTF-8&index=24

So there are actually only 48 bits of mantissa, and a 16-bit exponent which also carries the sign (as mentioned above).
That is a format that cannot be represented in an IEEE 754 format (the exponent can have values up to 999 decimal).
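
Based on that description (and on the 0x3FF bias visible in FFTOI above), a conversion of the raw 8 bytes to a C double could look roughly like the sketch below. The bias value, the explicit leading mantissa bit, and the negated-exponent-word convention for negative values are assumptions derived from the disassembly, not taken from the gfalist source; out-of-range exponents simply overflow or underflow in ldexp.

#include <stdint.h>
#include <math.h>

/* hypothetical sketch: convert a raw 8-byte GFA double (48-bit mantissa
   followed by a 16-bit exponent word that also carries the sign) to a
   C double; gfa_to_double is not a function from gfalist */
static double gfa_to_double(const unsigned char *src)
{
	uint64_t mant = 0;
	unsigned int expword = ((unsigned int)src[6] << 8) | src[7];
	int neg = 0;
	int e;
	int i;

	for (i = 0; i < 6; i++)
		mant = (mant << 8) | src[i];
	if (mant == 0)
		return 0.0;
	if (expword & 0x8000)
	{
		/* negative values store the negated exponent word (see FFTOI) */
		neg = 1;
		expword = (0x10000 - expword) & 0xffff;
	}
	e = (int)expword - 0x3ff;	/* remove the assumed 0x3FF bias */
	/* the mantissa carries an explicit leading 1 at bit 47, so the
	   magnitude is mant * 2^(e - 47) */
	return ldexp(neg ? -(double)mant : (double)mant, e - 47);
}

For the example above (raw bytes 80 00 00 00 00 00 fb f6) this yields -2048.0, which matches &HFFFFF800 read as a signed 32-bit value.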
