# Decimal constructor doesn’t use the requested precision when converting from a number #200
If you want to create an instance from the underlying binary value of a number you can use:

```js
pi = Decimal(`0b${Math.PI.toString(2)}`);
// or
pi = Decimal(Math.PI.toFixed(50));
```

If you want the exact value of pi to 100 digits you can use:

```js
Decimal.set({ precision: 100 });
pi = Decimal.acos(-1);
```
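For context (my own plain-JavaScript check, no decimal.js involved): both workarounds reach the double’s exact value, and `toFixed(50)` is exact here only because the double nearest pi happens to have just 48 fraction digits.

```javascript
// The exact decimal expansion of the double Math.PI has 48 fraction
// digits, so toFixed(50) reproduces it and pads two trailing zeros.
const fixed = Math.PI.toFixed(50);
console.log(fixed); // "3.14159265358979311599796346854418516159057617187500"
```
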
This seems like the wrong thing to do when the argument is a number. Why not do something like the following?

Edit: or more specifically, it might look like:

```js
(x) => {
  if (typeof x === "number" && isFinite(x)) {
    if (x > 0) return Decimal('0b' + x.toString(2));
    if (x < 0) return Decimal('-0b' + x.toString(2).slice(1));
  }
  return Decimal(x);
}
```

Edit 2: This claim also doesn’t seem to always be true. For example, passing a bignum to Decimal throws an error rather than using its string value.
You're welcome. When the argument is a number, the value of an instance is created from the argument's string value.
I don’t understand what you mean here. Thanks for the somewhat kludgy workaround, I guess? It’s definitely a bit less kludgy than my first try. (Perhaps it should be added to the documentation, if the code can’t change.) One other possible implementation (or workaround) would be `pi = Decimal(Math.PI.toPrecision(Decimal.precision))`.
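As a plain-JavaScript sanity check of this workaround (my own illustration, no decimal.js needed): `toPrecision` performs exact decimal rounding of the binary value, so with enough digits it recovers the double exactly. At 50 digits the result is the exact 49-digit expansion of the double plus one trailing zero.

```javascript
// toPrecision rounds the true binary value, not the shortest string,
// so 50 digits fully covers Math.PI's 49 significant decimal digits.
const p = Math.PI.toPrecision(50);
console.log(p); // "3.1415926535897931159979634685441851615905761718750"
```
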
Fair enough, but my point wasn’t really to nitpick the statement, and I have not deeply investigated the internals of the library. The point of this bug report is: this seems like clearly incorrect behavior, inferior for every use case I can think of to the alternative. If people explicitly ask for n digits, the obvious thing to do is to believe them and try to provide n digits rather than min(n, 15) digits or whatever. The current behavior is not mentioned in the documentation, and I doubt many people would have their code broken by the change. There is even a potential cross-implementation compatibility problem with the current behavior, since `Number.prototype.toString` is implementation-defined; the ECMAScript standard notes as much.
(Is there some performance problem with providing the full requested precision? I didn’t try benchmarking it or anything...) But while we’re at it, the string value would be a better source to use when the input is a bignum, instead of choking and throwing an error. I’m happy to spend some time digging into the guts of your library and writing a patch if that would be helpful, but your preceding reply doesn’t sound very receptive to that.
Actually the safest for number inputs would probably be to replace https://github.com/MikeMcl/decimal.js/blob/master/decimal.js#L4370 with:

```js
return parseDecimal(x, v.toPrecision(53));
```

Then if fewer than 53 digits are needed, whatever preferred rounding method can be used, and the result should be predictable.
No, it wouldn't be safe at all. I think you are confusing bits with decimal digits. And the maximum argument to `toPrecision`…

Sorry, but I have neither the time nor the incentive to address your longer post above, which is similarly ignorant. See the README for some cautions regarding passing numbers to the constructor, and an example of passing a binary value to get the exact value of a number.
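A side note on the bits-versus-digits confusion (my own illustration, not from the thread): a 53-bit significand does not bound the number of decimal digits a double can need, and in current engines `toPrecision` rejects arguments above 100.

```javascript
// 2**-1074 (Number.MIN_VALUE) needs as many significant decimal digits
// as 5**1074 has, far beyond what toPrecision can ever produce.
const sigDigits = (5n ** 1074n).toString().length;
console.log(sigDigits); // 751

let capped = false;
try { (1.5).toPrecision(101); } catch (e) { capped = e instanceof RangeError; }
console.log(capped); // true
```
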
Aha. I tricked myself. I was thinking only of double-precision numbers with a non-negative exponent. 1 + 1/2^n has at most n digits after the decimal point, because 10 = 2·5. For example 5/4 = 1.25 (2 digits), 17/16 = 1.0625 (4 digits), 2049/2048 = 1.00048828125 (11 digits), and so on. But if we consider e.g. 2**(-100) * (1 + 2**-52), this of course needs many more digits. So for an exact result it’s probably necessary to explicitly get the number’s mantissa as an integer, construct a Decimal from that, and then divide by two to the power of the (negative) exponent. Though the…
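A sketch of the mantissa-and-exponent extraction described above, in plain JavaScript (the helper name is mine, finite inputs and IEEE-754 doubles assumed):

```javascript
// Recover a finite double's exact integer mantissa m and exponent e
// such that x === m * 2**e. Assumes IEEE-754; NaN/Infinity not handled.
function decompose(x) {
  const buf = new DataView(new ArrayBuffer(8));
  buf.setFloat64(0, x);
  const hi = buf.getUint32(0), lo = buf.getUint32(4);
  const biasedExp = (hi >>> 20) & 0x7ff;
  let mantissa = (BigInt(hi & 0xfffff) << 32n) | BigInt(lo);
  let exp;
  if (biasedExp === 0) {
    exp = -1074;                 // subnormal: no implicit leading bit
  } else {
    mantissa |= 1n << 52n;       // normal: restore the implicit leading 1
    exp = biasedExp - 1075;      // unbias (1023) and shift past 52 fraction bits
  }
  if (x < 0) mantissa = -mantissa;
  return { mantissa, exp };
}
```
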
Are you always so hostile and insulting to people trying to help your project? You could instead e.g. say “I spent a lot of time building this project a few years ago, but nowadays my interests lie elsewhere and this change doesn’t seem important enough to spend my time thinking about, even if someone else is willing to write the patch. Sorry; feel free to fork the project if you need different behavior and a workaround wrapper doesn’t cut it for your use case.”
Yes, I read this. The “potential loss of precision” described in that README is a completely different problem: someone writing a number down in their code that can’t be represented as a JavaScript number, and then being surprised when the JS parser turns their code string into a number that evaluates to something different than they expected. It doesn’t actually discuss this specific point: nowhere does it mention that numbers are ingested using the result of `Number.prototype.toString`.

My problem is not with explicit strings in my code being accidentally converted to numbers, but with quantities that are already numbers being silently rounded by up to 1/2 ulp even when I explicitly asked for 100 digits.

The use case here is trying to do higher-precision arithmetic to double-check the results of numerical algorithms. It is important for such use cases that the inputs to the double-precision algorithm and the higher-precision algorithm are exactly the same, because even slight differences can in some cases become very significant. For example, I was trying to check the area of a very skinny spherical triangle using various alternative formulas for computing spherical area. A perturbation of <1/2 ulp in each coordinate of each vertex from Decimal’s lossy constructor directly caused a 50% relative error in the computed area of the triangle I was looking at, even with 100-digit arithmetic. (By comparison, the double-precision algorithm had a relative error of about 0.01%, compared to the exact result.) I found another triangle where the perturbation from the lossy Decimal constructor caused a sign change (if you like, ∞ relative error).

I can of course write my own wrapper…
Your solution would not work reliably for numbers with positive exponents either, as almost all numbers over about…
Yes, or more reliably using `toSignificantDigits`. For example, using simple helper functions:

```js
const binary = n => (n < 0 ? '-0b' : '0b') + Math.abs(n).toString(2);

Decimal(binary(Math.PI)) + ''; // "3.141592653589793115997963468544185161590576171875"

// Round to a supplied precision or to Decimal.precision
Decimal.fromDouble = (n, prec) => Decimal(binary(n)).toSignificantDigits(prec ?? Decimal.precision);

Decimal.precision = 10;
Decimal.fromDouble(Math.PI) + '';     // "3.141592654"
Decimal.fromDouble(Math.PI, 50) + ''; // "3.141592653589793115997963468544185161590576171875"
```

However, using `toPrecision` for values within its range is likely faster:

```js
Decimal.fromDouble = (num, prec) => {
  const abs = Math.abs(num);
  return Decimal(abs > 1e+99 || abs < 1e-20
    ? (num < 0 ? '-0b' : '0b') + abs.toString(2)
    : num.toPrecision(100)
  ).toSignificantDigits(prec ?? Decimal.precision);
};
```
Why did you assume otherwise? The README makes it obvious to the attentive reader, for example:

```js
0.3 - 0.1        // 0.19999999999999998
x = new Decimal(0.3)
x.minus(0.1)     // '0.2'
x                // '0.3'
```

Obviously, `x` here has the value `0.3` exactly, not the underlying binary value of the number `0.3`. The whole point of using this library is to avoid the 'errors' that come from using binary floating point. For the most part, the only time when it is more useful to use the underlying binary value of a number is when you are analysing binary floating point or the results of its arithmetic, and in that case it is perfectly reasonable to be required to explicitly ask for that binary value using `toString(2)`.
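The distinction is visible in plain JavaScript (my illustration, no library involved): the shortest round-trip string of `0.3` is `"0.3"`, while its underlying binary value is slightly smaller.

```javascript
// What the Decimal constructor sees vs. the double's true binary value.
console.log(String(0.3));       // "0.3"
console.log((0.3).toFixed(20)); // "0.29999999999999998890"
```
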
You didn't explicitly ask for 100 digits. The `precision` documentation makes it clear that it is the maximum number of significant digits of the result of an operation; it does not apply to the value of a new Decimal from the constructor.
There is no potential problem as long as the…
Anyway, if I wanted this library to create a Decimal's value from the underlying binary value of a number, I wouldn't use…
I don't know what you mean by "bignum", and we have to be careful of just using an argument's string value, for example:

```js
x = [123];
Decimal(x) + ''; // '123'
```
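For what it's worth (my own aside), the array example works only because a one-element array's default string conversion happens to look numeric:

```javascript
console.log(String([123]));  // "123"
console.log(String([1, 2])); // "1,2", which is not a valid numeric string
```
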
This is a good start, but it doesn’t quite work for Infinity, NaN, or -0. Maybe a `Decimal.fromDouble` method would be a valuable addition to the library, because getting it right for every edge case has some subtleties. It also gets very big with extremely large or small exponents, e.g.…
My experience is that every other mathematical/numerical library that takes floating-point input uses the floating-point number passed in. I have never in my life encountered another library which first converts the floating-point number to a string and then parses the string as a new number. No first-time user of any library is going to be so “attentive” to the documentation that they will carefully parse every example hunting for clues about ways the internal implementation might violate their basic expectations, and end up perfectly inferring how the library works. It would be much clearer with an explicit comment pointing out that the constructor uses a number’s decimal string representation rather than its underlying binary value.
Yes, the result of…

I hadn’t noticed that this example demonstrated the constructor rounding number inputs; I skimmed past it because I would expect such a use to stick to string inputs to the constructor. Also, this example is explicitly supposed to be demonstrating that “Decimal instances are immutable in the sense that they are not changed by their methods”, rather than anything about what happens to numeric input to the constructor per se.
I thought the whole point of the library was to do decimal arithmetic with some specified k digits of precision, not to absolve users from understanding how computer arithmetic works.
The Decimal constructor did round the value, except instead of rounding to 100 digits, it rounded to ~15 digits. It would also probably be helpful for the documentation (starting with the README) to make it even more explicit that the Decimal constructor doesn’t use `Decimal.precision`.
The problem is that `Number.prototype.toString` is not fully specified across implementations. Such seemingly trivial cross-platform differences have caused hard-to-track (and sometimes practically significant) bugs before, when someone used a library or function they believed to be deterministic and completely specified, and it turned out there was a subtle unspecified difference they never considered.
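For what it's worth (my own aside), the spec does pin down one property in every implementation: whatever digit string `Number::toString` chooses must convert back to exactly the same double.

```javascript
// Round-trip property required by the Number::toString algorithm.
const samples = [0.1, Math.PI, 2 ** -1074, 1e21, -0.3];
const roundTrips = samples.every(x => Number(String(x)) === x);
console.log(roundTrips); // true
```
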
Sorry, I mean a bigint, a built-in object now included in the current version of all major browsers (https://caniuse.com/bigint):

```js
Decimal(1n)
// => Error: [DecimalError] Invalid argument: 1
```
Fair enough. And the README is explicit here: “...constructor function, Decimal, which expects a single argument that is a number, string or Decimal instance”. Adding quick support for ingesting bigints is probably still a good idea, even if it’s just checking for type 'bigint' and then returning a Decimal built from its string value.
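A minimal sketch of the suggested shim (hypothetical helper name, not decimal.js API): route bigints through their decimal string value before handing them to a constructor that accepts number, string, or Decimal.

```javascript
// Bigints convert exactly to a decimal string, so no precision is lost.
const toDecimalArg = v => (typeof v === 'bigint' ? v.toString() : v);
console.log(toDecimalArg(123n)); // "123"
console.log(toDecimalArg(1.5));  // 1.5
```
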
You missed… The following can be inserted into `Decimal.fromDouble`:

```js
if (typeof num != 'number' || !isFinite(num) || num === 0) return Decimal(num);
```
I have considered that possibility before, but I have not yet felt compelled to include it, particularly as the binary value is readily available from `toString(2)`.
It's worth noting that such values can be created using binary exponential notation, for example:

```js
Decimal('0b1p-1074').toNumber() === Number.MIN_VALUE; // true
```
Many such libraries represent values internally in a power-of-2 base which makes it simple and fast to convert the binary value of a number to the value stored. Here, values are represented in a power-of-ten base which makes it more complex and slower. Another reason why I am not keen to promote the use of this library for analysing binary floating point.
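To illustrate the extra work a power-of-ten representation implies (my own sketch, assuming IEEE-754 doubles; not decimal.js code): producing a double's exact decimal string needs only BigInt arithmetic, but it is noticeably more involved than copying a binary significand.

```javascript
// Exact decimal string of a finite double, via x = m * 2**e and
// m / 2**-e = (m * 5**-e) / 10**-e for negative e.
function exactDecimalString(x) {
  if (!Number.isFinite(x)) return String(x);
  if (x === 0) return Object.is(x, -0) ? '-0' : '0';
  const buf = new DataView(new ArrayBuffer(8));
  buf.setFloat64(0, Math.abs(x));
  const bits = buf.getBigUint64(0);
  const biasedExp = Number(bits >> 52n);
  let m = bits & 0xfffffffffffffn;
  let e;
  if (biasedExp === 0) { e = -1074; }            // subnormal
  else { m |= 1n << 52n; e = biasedExp - 1075; } // normal
  const sign = x < 0 ? '-' : '';
  if (e >= 0) return sign + (m << BigInt(e)).toString();
  // Scale to an integer, then place the decimal point.
  const digits = (m * 5n ** BigInt(-e)).toString().padStart(1 - e, '0');
  const cut = digits.length + e;
  const frac = digits.slice(cut).replace(/0+$/, '');
  return sign + digits.slice(0, cut) + (frac ? '.' + frac : '');
}
```
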
It can equally be considered the other way around. With, for example, `new Decimal(0.3)`, I think most users of this library would just expect the value `0.3`.
Yes, most users will actually test out the basic functionality for themselves rather than steaming ahead with their own cocksure assumptions. It is very discoverable how Decimal parses numbers, and also that it does not round them to `precision` significant digits.
Well, it would be:

```js
x = new Decimal(123.4567)
y = new Decimal('123456.7e-3')
z = new Decimal(x)
x.equals(y) && y.equals(z) && x.equals(z) // true
```

The fact that the numeric literal's last fraction digit is preserved exactly shows that the value comes from the number's string representation.

I am not sure, but I think there did use to be a note here that Decimal values were created from a number's `toString` value, as the BigNumber documentation has:

> When creating a BigNumber from a Number, note that a BigNumber is created from a Number's decimal `toString()` value, not from its underlying binary value.

```js
new BigNumber(Number.MAX_VALUE.toString(2), 2)
```

I think I removed it, or did not include it here, because I thought it would be more of a challenge to some users' preconceptions for them to discover it for themselves.
Here, the value of…
Yes, it's already an open issue: #181
Anyway, Jacob, thanks for the feedback. Your opinions have some merit and have been noted. The door is always open for a pull request, although it may not be dealt with immediately.
It doesn’t seem to matter how many digits of precision I request; the decimal number with the fewest digits that is within half an ulp of the given floating point number is used, rather than the decimal number which precisely represents the floating point number to the requested precision.
For example:
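(The original code sample did not survive here; presumably it constructed a Decimal from `Math.PI` with `precision: 100` set. The 16-digit result can be reproduced without decimal.js at all, since the constructor sees only the number's string value.)

```javascript
// Math.PI's shortest round-trip string: only 16 significant digits.
const s = String(Math.PI);
console.log(s);                         // "3.141592653589793"
console.log(s.replace('.', '').length); // 16
```
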
I only got 16 digits when I asked for up to 100. But JavaScript’s `Math.PI` is literally 884279719003555/2**48. If I am more careful (constructing the Decimal from the number’s binary value), I get an exact representation of the double-precision number `Math.PI`, with 49 digits, 33 more than the naive attempt.
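(Both claims can be checked in plain JavaScript, my addition: the fraction is exact, and the 49 digits fall out of x / 2**48 = x * 5**48 / 10**48.)

```javascript
// 884279719003555 is below 2**53, so this division is exact.
console.log(884279719003555 / 2 ** 48 === Math.PI); // true

// The exact digits of the double Math.PI, via BigInt.
const digits = (884279719003555n * 5n ** 48n).toString();
console.log(digits.length); // 49
console.log(digits[0] + '.' + digits.slice(1));
// "3.141592653589793115997963468544185161590576171875"
```
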
This matters for my particular use case, which is trying to use higher-precision arithmetic to double-check the results of some careful numerical algorithms. The slight perturbations here of up to half an ulp can make a big difference in edge cases.