
Conversation

@abrahamwolk
Collaborator

This pull request changes the method FormatOptionHandler.formatNumber() so that the type long is always used when converting a number to hexadecimal notation.

FormatOptionHandler.formatNumber() is used by the Text Entry and Text Update widgets when the widget property "Format" is set to "Hexadecimal". If, in addition, the widget property "Precision" is set to -1 (the default value), then formatNumber() is called with its argument precision set to 3 for values of type double. Prior to this pull request, that meant the (signed) type int was used to convert a number represented by a double to hexadecimal notation, potentially truncating the input.
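
As an illustration of why the narrowing matters (a standalone sketch with my own class name and example value, not the widget code; the exact conversion path in formatNumber() may differ in detail):

    public class HexNarrowingDemo {
        public static void main(String[] args) {
            double value = 68719476735.0;                 // 0xFFFFFFFFF, needs more than 32 bits
            System.out.printf("%X%n", (int) value);       // 7FFFFFFF: clamped to Integer.MAX_VALUE
            System.out.printf("%X%n", (long) value);      // FFFFFFFFF: value preserved
        }
    }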

To me, it seems incorrect that the argument precision is used to determine the number of digits. For decimal notation the parameter refers to the number of digits after the decimal point, and in that context the default value of 3 makes sense for a value of type double. Having the property mean something different for hexadecimal notation is confusing, and I think the default (if any) should be a different value. Should this be changed, e.g., by introducing a new widget property for the number of digits to display before the decimal point?

Collaborator

@georgweiss georgweiss left a comment


A unit test would maybe make sense?

@kasemir
Collaborator

kasemir commented Dec 12, 2025

Might be OK to always use Long in the conversion.
Also good idea to add a unit test.
What we need to accomplish:

A precision of 2 should show numbers 0x00 to 0xFF.
A precision of 4 should show numbers 0x0000 to 0xFFFF.
A precision of 6 should show numbers 0x000000 to 0xFFFFFF.
A precision of 8 should show numbers 0x00000000 to 0xFFFFFFFF.

That's because hex notation tends to be used on lower-level device displays to show register values, and the precision is used to match the size of the register: 2 digits for an 8-bit byte, 4 digits for a 16-bit word, 6 for a 24-bit word (rare, but that's what we have in our timing system), 8 for a 32-bit word.
These days there are of course also 64-bit registers, but Channel Access cannot transfer them, so they don't end up in displays that often (they're converted to double, so you get about 52 bits of data instead of 64). With PVAccess, you can get the 64-bit value, so we should be prepared for

A precision of 16 should show numbers 0x0000000000000000 to 0xFFFFFFFFFFFFFFFF.

Beware of sign extension.
If the register holds 0xFF (byte), we don't want to turn that into 0xFFFFFFFF as a long and then show 0xFFFFFFFF.
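
As a standalone illustration of that pitfall (my own sketch, not code from this PR): widening a byte to long in Java sign-extends unless one of the unsigned helper methods is used.

    public class SignExtensionDemo {
        public static void main(String[] args) {
            byte raw = (byte) 0xFF;                        // register value 0xFF
            long signExtended = raw;                       // implicit widening sign-extends to -1
            long unsigned = Byte.toUnsignedLong(raw);      // 255, the intended register value
            System.out.printf("%X%n", signExtended);       // FFFFFFFFFFFFFFFF
            System.out.printf("%X%n", unsigned);           // FF
        }
    }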

@abrahamwolk
Collaborator Author

abrahamwolk commented Dec 15, 2025

A unit test would maybe make sense?

I have added tests: 82cebbd

Beware of sign extension. If the register holds 0xFF (byte), we don't want to turn that into 0xFFFFFFFF as a long and then show 0xFFFFFFFF.

That's a very good point; that was a bug! (It is also a bug on the master branch: integer types are sign-extended when they are converted. E.g., Byte and Short are always sign-extended, since they are always converted, while int is sign-extended if the precision is > 8.) I have fixed this now: fb37c8c

Two open questions:

  1. I have tried to ensure that no sign extension occurs by checking the type of the number that is received and converting according to the type. Of course, there may be types that I am not aware of, and that are therefore not covered. At the moment, the conversion will then fall back to the longValue() method, which, depending on the implementation, may sign-extend:

         } else {
             longValue = value.longValue();
         }

     An alternative could be to signal an error in case the type of the number is not covered. Which is preferable?
  2. Suppose the precision is set to 2 and the value 0xFFAA arrives. Should 0xAA or 0xFFAA be shown? I.e., should the precision parameter be used only to pad leading zeroes, to make the display look nicer, or should it also be used to truncate the value in case it is "too large"? I think 0xFFAA should be shown, i.e. the parameter should only pad "too small" values with leading zeroes and never truncate the value. (A small sketch of the two readings follows after this list.)
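
To make question 2 concrete, here is a minimal standalone sketch of the two readings (class name and helper logic are my own illustration, using plain java.util.Formatter padding rather than the actual formatNumber() code):

    public class PadVsTruncateDemo {
        public static void main(String[] args) {
            long value = 0xFFAAL;
            int precision = 2;
            // Pad-only reading: the width is a minimum, so larger values keep all their digits.
            String padded = String.format("0x%0" + precision + "X", value);
            // Truncating reading: mask the value down to the requested number of hex digits.
            long mask = (1L << (4 * precision)) - 1;
            String truncated = String.format("0x%0" + precision + "X", value & mask);
            System.out.println(padded + " vs. " + truncated);  // prints "0xFFAA vs. 0xAA"
        }
    }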

@kasemir
Collaborator

kasemir commented Dec 15, 2025

I'm afraid there's no perfect answer ...

tried to ensure that no sign extension occurs by checking the type of the number

When you read a number from an ai, mbbi, longin, ... record, you always get a "signed" number, even if the intent was to treat the number as unsigned. So unfortunately we can't easily figure out what to do...

Maybe it's best to treat all numbers as unsigned for the hex format?
That tends to be appropriate when you display the raw content of hardware registers.

Suppose the precision is set to 2 and the value 0xFFAA arrives. Should 0xAA or 0xFFAA be shown?

I would show 0xFFAA because that's the value. We'll use the precision as a hint to format values: with a precision of 2, the value 5 is shown as 0x05, and 0xAA is shown as 0xAA. But a value that needs more digits will use them, which is better than showing the wrong value.

@abrahamwolk
Collaborator Author

abrahamwolk commented Dec 15, 2025

I'm afraid there's no perfect answer ...

tried to ensure that no sign extension occurs by checking the type of the number

When you read a number from an ai, mbbi, longin, ... record, you always get a "signed" number, even if the intent was to treat the number as unsigned. So unfortunately we can't easily figure out what to do...

Maybe it's best to treat all numbers as unsigned for the hex format? That tends to be appropriate when you display the raw content of hardware registers.

The correct way to convert the received value to an unsigned long seems to be type-dependent. For this reason, the pull request currently checks the type of the received value and converts it to an unsigned long accordingly:

    // Widen to long, avoiding sign extension for the signed integer types.
    if (value instanceof Byte valueByte) {
        longValue = Byte.toUnsignedLong(valueByte);
    } else if (value instanceof Short valueShort) {
        longValue = Short.toUnsignedLong(valueShort);
    } else if (value instanceof Integer valueInt) {
        longValue = Integer.toUnsignedLong(valueInt);
    } else if (value instanceof Long valueLong) {
        longValue = valueLong;
    } else if (value instanceof Float valueFloat) {
        longValue = valueFloat.longValue();
    } else if (value instanceof Double valueDouble) {
        longValue = valueDouble.longValue();
    } else if (value instanceof UByte valueUByte) {
        longValue = valueUByte.longValue();
    } else if (value instanceof UShort valueUShort) {
        longValue = valueUShort.longValue();
    } else if (value instanceof UInteger valueUInteger) {
        longValue = valueUInteger.longValue();
    } else if (value instanceof ULong valueULong) {
        longValue = valueULong.longValue();
    } else {
        // Fallback for Number types not listed above; longValue() may sign-extend.
        longValue = value.longValue();
    }
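
Following up on the earlier unit-test suggestion, a test along these lines (hypothetical class and helper names with a simplified dispatch, not the actual tests added in 82cebbd) would pin down the sign-extension behavior:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    public class UnsignedConversionTest {
        // Simplified version of the dispatch above, covering only a few types (illustrative).
        private static long toUnsignedBits(Number value) {
            if (value instanceof Byte valueByte)
                return Byte.toUnsignedLong(valueByte);
            if (value instanceof Short valueShort)
                return Short.toUnsignedLong(valueShort);
            if (value instanceof Integer valueInt)
                return Integer.toUnsignedLong(valueInt);
            return value.longValue();
        }

        @Test
        public void signedTypesAreNotSignExtended() {
            assertEquals(0xFFL, toUnsignedBits((byte) 0xFF));
            assertEquals(0xFFFFL, toUnsignedBits((short) 0xFFFF));
            assertEquals(0xFFFFFFFFL, toUnsignedBits(0xFFFFFFFF));
        }
    }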
