
Treat float literals as decimal float literals and not as binary float literals #47

Open
dumblob opened this issue Jul 2, 2021 · 2 comments

Comments

@dumblob

dumblob commented Jul 2, 2021

With decimal float literals finally making it into C23 (implemented in 2021 and formally adopted in 2022), and seeing XL making significant changes to its basics, including float literals, I'd like to propose making decimal float literals the default in XL. C++ has had decimal floats as an optional extension since 2011, if I'm not mistaken.

To use binary float literals (e.g. to double the performance at the cost of unpredictable precision loss), one can easily cast a decimal float literal to a binary one. Note that the reverse is impossible: casting a binary float literal to a decimal one cannot preserve the precision the user wrote the literal with.
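The precision argument can be illustrated with Python's `decimal` module (a sketch for illustration only; XL itself has no decimal type yet). A binary (IEEE 754) double cannot represent the literal `0.1` exactly, while a decimal float stores exactly what the user wrote:

```python
from decimal import Decimal

# The literal 0.1 has no exact binary (IEEE 754) representation,
# so the stored double is only a nearby value:
print(f"{0.1:.20f}")   # 0.10000000000000000555

# A decimal float holds exactly the digits the user wrote:
print(Decimal("0.1"))  # 0.1

# Converting the binary double back to decimal exposes what was lost:
print(Decimal(0.1))    # 0.1000000000000000055511151231257827...
```

This is the asymmetry mentioned above: decimal → binary is a deliberate, lossy cast, but binary → decimal cannot recover the original written precision.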

dumblob referenced this issue Jul 2, 2021
The math module is one of the easiest modules to convert to the "new" syntax for
module specifications.

Functions there are lifted from `<math.h>` on Linux. Presumably, the functions
are generally available on all platforms. In the long run, it would be
interesting to write all of them in XL, but for now it's mostly a waste of time.

Signed-off-by: Christophe de Dinechin <christophe@dinechin.org>
@c3d
Owner

c3d commented Jul 2, 2021

Interesting idea. I wonder whether the real question is what the default representation for real should be, i.e. binary or decimal. Note that decimal FP has other issues (e.g. correct rounding is, I believe, more difficult).

@dumblob
Author

dumblob commented Jul 2, 2021

> I wonder if the problem is not what default representation for real, ie should it be binary or decimal.

I'd definitely also go for decimal FP, as I feel real must not be "worse than float". But thinking about it, there are at least two ideas to explore:

  1. real could actually be made even closer to the mathematical notation used "on paper" by introducing a kind of "variable" decimal FP: the user would choose beforehand how many "true" significant digits, including zeros (not to be confused with decimal places, nor with the usual notion of significant digits), should be used for output. For example, the literal 1.500 would have 4 significant digits, 0001.500 would also have 4, 300 would have 3, and 0.000100 would have 7. Internally, a higher precision would be used to absorb rounding errors, whereas float would stay compatible with IEEE 754 and thus support only the _Decimal32/_Decimal64/_Decimal128 variants under the hood.

  2. leave float as binary floating point and make only real decimal floating point (possibly with the significant-digits extension for better user experience and output purposes)
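The trailing-zero behaviour idea 1 relies on can be sketched with Python's `decimal` module (again only an illustration, not the exact significant-digit scheme proposed above): decimal values remember the digits they were written with, so written precision is observable at output time.

```python
from decimal import Decimal

# Trailing zeros in the literal are preserved, unlike with binary floats:
print(Decimal("1.500"))  # 1.500
print(Decimal("1.5"))    # 1.5

# as_tuple() exposes the stored coefficient digits and exponent,
# which is the raw material a significant-digit rule could work from:
print(Decimal("0.000100").as_tuple())
# DecimalTuple(sign=0, digits=(1, 0, 0), exponent=-6)
```

A language adopting idea 1 would still need its own rule for counting "true" significant digits (e.g. for 0001.500 or 300), but the underlying representation already distinguishes 1.5 from 1.500.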

> Note that decimal FP has other issues (e.g. correct rounding I believe is more difficult).

Could you be more specific? Decimal FP still has infinity (actually many infinities, both negative and positive), but otherwise most, if not all, of the issues I know of are basically gone. In fact, correct rounding is one of the reasons decimal FP is used over binary FP in the first place. Feel free to take a look at the "best" library emulating decimal FP, mpdecimal, to see how rounding is done (btw. mpdecimal is also used by Python, etc.).
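The rounding point can be demonstrated concretely; CPython's `decimal` module is built on the mpdecimal library mentioned above (the example is a sketch of the behaviour, not of any XL design):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Binary FP cannot even round-trip simple decimal sums:
print(0.1 + 0.2 == 0.3)                                   # False

# Decimal FP rounds correctly in base 10:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# The classic trap: 2.675 as a binary double is slightly below 2.675,
# so rounding to 2 places gives 2.67, not 2.68:
print(round(2.675, 2))                                    # 2.67

# In decimal, 2.675 is exact and the rounding mode is explicit:
print(Decimal("2.675").quantize(Decimal("0.01"),
                                rounding=ROUND_HALF_EVEN))  # 2.68
```

This is why decimal FP is the usual choice for money and other human-entered quantities: the rounding happens in the same base the user wrote the number in.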
