Fix constant float values being converted to ints #1674
Conversation
Signed-off-by: Declan Russell <declan.russell@autodesk.com>
If there wasn't a failing unit test to catch the original issue, could one be added to check this?
This is a totally reasonable change, but... Can you show us a snippet of before/after of the oso, and explain how the symptom manifests in practice? The "{}" should be the equivalent of "{:g}". I think that changing to "{:.9g}" primarily serves to make sure it's expressed as a floating point number instead of rounding to an integer if it's an integral value (e.g., 1.0 would be "1"). But how is that harmful? Is there someplace downstream, when reading these oso files, that gets confused by the fact that the number looks like it could be an int, rather than something that could only be a float because it has the decimal point? Maybe in some sense that is the real bug to deal with?
I've attached an example that shows the problem manifesting. I have 3 files: an example .osl shader, and the compiled oso before and after the change. The osl shader simply assigns the value
I see, weird! I was still not quite getting why this was problematic, so I poked around a bit, and here's what I think is a more full explanation:

When outputting the oso and using the fmt library to write the value, {} (equivalent to {:g}) prints the "simplest" text representation that, when re-parsed as a float, will give you the SAME binary float value. So the "true" value of -9.99999996e+11 is printed as -1000000000000, which when reparsed will give you the closest float to that, which is the original value. (Remember that the value of -1e12 is not exactly representable as a float.)

But if both will parse to the same float, why does the shader print -2147483648.000000 instead of -999999995904.000000? The answer, and the real core of the problem, is that when we parse the oso file it sees -1000000000000 and thinks it's an int, lexically. Now, once it gets to figuring out what to do with that, if the initial value of a float parameter looks like an int, it's still no problem; it will convert the int to the equivalent float. Except that in this case, because the text that looked lexically like an int is out of range for a 32-bit int, it gets INT_MIN, so by the time it's converted back to float, it's -2147483648.000000.

Your fix works (at least in this instance) because it forces the value to print in a way that lexically can only be a float and not an int. I'm going to keep tinkering with this just a bit. I feel like a more full solution is going to involve more robust handling of out-of-range int values in the oso parser.
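The failure mode described above can be sketched in a few lines. This is a minimal Python simulation (not the actual C++ lexer code): it models the float32 storage, the int-looking shortest round-trip text, and a 32-bit int parse that clamps out-of-range values.

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python double to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# The "true" stored value: the nearest float32 to -1e12.
stored = to_float32(-1e12)
print(stored)            # -999999995904.0

# The shortest round-tripping text (what "{}" emitted): no '.' or 'e'.
text = "-1000000000000"
assert to_float32(float(text)) == stored  # re-parsing as FLOAT is lossless

# But a lexer that tokenizes this as an int, with 32-bit clamping
# (simulated here), produces INT_MIN before the int-to-float conversion:
INT_MIN, INT_MAX = -2**31, 2**31 - 1
clamped = min(max(int(text), INT_MIN), INT_MAX)
print(float(clamped))    # -2147483648.0, the bogus value the shader prints
```

So the round trip through text is lossless only if the reader parses the token as a float; routing it through a 32-bit int first is where the value is destroyed.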
LGTM. This seems to prevent floats that don't exactly represent an int from printing as if they were ints (lexically, the output will only look like a float, not an int, when the value is not an exact int).
I will separately look into the parsing of the oso and see if there's a way to be more robust with out-of-32-bit-range int values.
Fix constant float values being converted to ints (AcademySoftwareFoundation#1674)

This issue was caused by 8c52d48. The problem is that the constant float values are being turned into ints. The PR restores the previous behavior and preserves the float format.

Note from LG explaining:

When outputting the oso and using the fmt library to write the value, {} (equivalent to {:g}) prints the "simplest" text representation that, when re-parsed as a float, will give you the SAME binary float value. So the "true" value of -9.99999996e+11 is printed as -1000000000000, which when reparsed will give you the closest float to that, which is the original value. (Remember that the value of -1e12 is not exactly representable as a float.)

But if both will parse to the same float, why does the shader print -2147483648.000000 instead of -999999995904.000000? That's the interesting part. The answer, and the real core of the problem, is that when we parse the oso file it sees -1000000000000 and thinks it's an int, lexically. Now, once it gets to figuring out what to do with that, if the initial value of a float parameter looks like an int, it's still no problem, it will convert the int to the equivalent float. Except that in this case, because the text that looked lexically like an int is out of range for a 32-bit int, it gets INT_MIN, so by the time it's converted back to float, it's -2147483648.000000.

Your fix works (at least in this instance) because it forces the value to print in a way that lexically can only be a float and not an int.

Signed-off-by: Declan Russell <declan.russell@autodesk.com>
Thanks for finding the root cause of the issue, that makes much more sense as to what is going wrong 🙂
Highlights:
* GPU/OptiX support of ReParameter (AcademySoftwareFoundation#1686)
* Fix OptiX userdata derivatives for interpolated params on GPU (AcademySoftwareFoundation#1685)
* Fix userdata binding corner case (AcademySoftwareFoundation#1673)
* Fix constant float values being converted to ints (AcademySoftwareFoundation#1674)

Restock the abi dump file as we push to 1.13.4.

See merge request spi/dev/3rd-party/osl-feedstock!48
Description
This issue was caused by 8c52d48. The problem is that constant float values were being turned into ints. This PR restores the previous behavior and preserves the float format.
Checklist: