Regression: BigDecimal is broken on 2.13.0, works on 2.12.8 #11590
I would say that it is broken in 2.12.8 because it doesn't take into account a math context with rounding precision and rules, which is
The test case also works with 2.11.12. So even if 2.11 and 2.12 are broken in the sense of the math context, this is a change in default behaviour and could break code with numerical calculations in subtle ways.
possibly related: scala/scala#6884,
@35VLG84 it is broken behaviour (in both the 2.11 and 2.12 versions) that was ignored until 2.13...
@plokhotnyuk What changes can a user make to their code to maintain the behaviour of 2.11/2.12 (even if it's technically broken)? Is there some import or something they can add at all use-sites of BigDecimal?
You just need to use
For clarity: that's an optional constructor parameter of

I'm happy to close this issue as "not a bug", having provided that detail for any users needing to mitigate the fix in 2.13.0.
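For anyone landing here looking for the mitigation, a minimal sketch of the kind of workaround being described, assuming the intent is to pass java.math.MathContext.UNLIMITED through BigDecimal's optional MathContext parameter so that arithmetic stays exact, as it effectively was on 2.12 (the values here are invented for illustration):

```scala
import java.math.MathContext

// Construct values with an UNLIMITED context so that (on 2.13) arithmetic on
// them is not rounded to the default 34-digit DECIMAL128 precision.
val a = BigDecimal("1.005", MathContext.UNLIMITED)
val b = BigDecimal("2.005", MathContext.UNLIMITED)

val sum = a + b  // 3.010, computed exactly
```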
There has been talk of redesigning it so that the
If we can find a set of symmetrizing operations for

So we can label this as "not a bug", I guess, but the entire design is kind of a bug. Everything relating to the current design forces tradeoffs that result in surprising behavior like the above.
Sounds like a good plan to me. Closing off this issue, meanwhile. Do you want to open that straight away as an issue here, or go via contributors.scala-lang.org first?
There is a second problem with
Which is probably caused by a wrong math context with

I can't stress enough that this change with 2.13 is highly surprising, even though the original implementation in 2.11 and 2.12 was broken.

P.S. The above code works with 2.12.8
Yeah, arguably the
The reason why you don't want to use MathContext for sum operations is that they will not change the number of decimals in the operation.

We were trying to upgrade the app and this was detected in several places. The suggested workarounds in this thread are not viable at all.

which trips up rounding to 2 decimals ;(
@zapov - I don't understand how you could be suffering the problems you state.

```scala
scala> val mc = new java.math.MathContext(40)
mc: java.math.MathContext = precision=40 roundingMode=HALF_UP

scala> val a = BigDecimal("0.004999999999999999999999999999981", mc)
a: scala.math.BigDecimal = 0.004999999999999999999999999999981

scala> val r = a.round(new java.math.MathContext(2))
r: scala.math.BigDecimal = 0.0050
```

This seems to round correctly. Can you be more precise about how that value is being (incorrectly) generated, and how your rounding fails?
They are the result of a different addition operation, due to the use of rounding errors in such operations.
Keep in mind that this denomination does not need to be 0.01

While I did fix this particular place by adding BigDecimal(0, UNLIMITED) at the beginning, BigDecimals are added and subtracted all over the codebase and it's not really realistic to go through each of them and check if they need special handling.

The 0.004999999999999999999999999999981 is the result of precision loss during addition, so while I could do rounding like you suggested, it would also incorrectly round numbers which really are less than 0.005

It seems to me that the original PR which changed this (scala/scala#6884)

I've tried reading through various explanations about the change and didn't really find one. Even if there is one (please do tell), how can you change such fundamental behavior in the language?
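For readers following along, a minimal sketch of the BigDecimal(0, UNLIMITED) seeding trick mentioned above, assuming (as the workaround implies) that the left-hand operand's MathContext governs each addition on 2.13; exactSum is a hypothetical helper name, not the project's actual code:

```scala
import java.math.MathContext

// Hypothetical helper: seed the fold with an UNLIMITED-context zero so that
// every subsequent + is carried out without rounding to the default
// 34-digit DECIMAL128 precision.
def exactSum(xs: Seq[BigDecimal]): BigDecimal =
  xs.foldLeft(BigDecimal(0, MathContext.UNLIMITED))(_ + _)

val total = exactSum(Seq(BigDecimal("0.1"), BigDecimal("0.2"), BigDecimal("0.3")))
```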
First of all, this isn't "fundamental behavior in the language". It's a library, and you can take it and rewrite it (call it

Because it really is just a library, not a language-level feature.

Secondly, the old behavior was problematic because the MathContext was breaking symmetry and associativity. When you only use

So the old way had inconsistent imprecision. The new way has consistent imprecision. Consistency is generally better.

Apparently the docs didn't get updated properly.

If you relied on the old inconsistencies--I guess it matters to you that 0.005 gets rounded up while 0.004999....981 is rounded down--then fork the library under a new name and you're good to go.
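To make the associativity point concrete, here is a small illustrative sketch: any fixed-precision decimal arithmetic that rounds after every operation behaves this way, and the precision-7 context below is chosen only to make the effect obvious, not to reflect any particular codebase:

```scala
val mc = new java.math.MathContext(7)

val a = BigDecimal("10000000000", mc)
val b = BigDecimal("-10000000000", mc)
val c = BigDecimal("0.0000000001", mc)

// With rounding applied to every intermediate result, the grouping matters:
val left  = (a + b) + c  // a + b is exactly 0, so the result is 1E-10
val right = a + (b + c)  // b + c rounds back to -1E+10, so the result is 0
```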
I would consider standard library types with aliases in Predef quite fundamental to the language, regardless of whether the language wants to consider everything a library.

If you want imprecise math, you use Double. BigDecimal was the only way to retain correct math for various operations. You can't just introduce UNLIMITED into the MC because all the other stuff will break (just for context, our code did not use an MC once).

The reason I pointed out the documentation was to show that BD behaved as it did on purpose. It was a well-defined behavior which had an inconsistent implementation (by carrying the context around).

Again, the change was done with someone saying

Please do ask yourself: if such a test existed prior to the change, would you do the change?
It's always a danger changing any aspect of the operation of library code, because someone may have (probably has) relied upon the particular behavior in the library. However, the original behavior was incredibly fragile, despite being documented--if you happen to create any of your values by multiplication, your precision will disappear.

So if you had such a change, where teensy values were getting added that were supposed to produce a value exactly on a rounding boundary, and the change made it round the wrong way, I personally would still advise both that (1) the library may as well change, and (2) the original code is suspect and should change.

That you even get a 10^-32-scale error in your code, and that you rely upon not having it, is a troubling feature of your code base. Where, exactly, does this error come from? It is similar to the default precision of BigDecimal anyway, which suggests that you are doing an inaccurate division somewhere, and then propagating this inaccuracy through several steps, where you have carefully conspired to have it exactly cancel out (because at this point you're exactly precise). Alternatively, you may be using the

```scala
scala> val inexact = BigDecimal.binary(17.33)
inexact: scala.math.BigDecimal = 17.32999999999999829469743417575955

scala> val also = new java.math.BigDecimal(17.33)
also: java.math.BigDecimal = 17.3299999999999982946974341757595539093017578125

scala> val better = BigDecimal(17.33)
better: scala.math.BigDecimal = 17.33
```

So I am sorry that the library change has made things more difficult for you, and I agree that it's not 100% clear that this was the right choice given that there is always the chance to break working code. But (1) you can fork the library and regain the old behavior (or some other behavior you like), and with grep and search-and-replace this is actually not a difficult thing to do (if you use libraries, adding implicit conversions both ways makes interop easy too), and

One advantage of having your own class is that you can forbid certain operations as inherently dangerous (e.g. division), and then the compiler will tell you everywhere that the dangerous operation is being used. For instance,
looks okay until you realize that only $17.34 is paid if you do it that way. All sorts of other things are slightly off, too, at that point...even under 2.12,

However, if division isn't allowed, then you can write sensible alternate rules about how to handle things like this.

```scala
import scala.math.BigDecimal.RoundingMode.HALF_EVEN

def distribute(
  amount: BigDecimal, n: Int, already: List[BigDecimal] = Nil
): List[BigDecimal] =
  if (n <= 1) amount :: already
  else {
    val part = (amount / n).setScale(2, HALF_EVEN)
    distribute(amount - part, n - 1, part :: already)
  }

val shares = distribute(bill, 6)
```

Anyway, I understand it's frustrating when working code stops working due to what ought to be an innocuous change. But from what you've described so far--maybe if you describe more I'll see that I'm mistaken--it seems to me that in the long run, you would benefit from a code base that doesn't rely upon this particular feature of BigDecimal. It would be more robust regardless.
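As a follow-up to the distribute sketch above, a quick check with a hypothetical bill (the 17.35 amount and six-way split are made-up values, not from the thread's codebase), showing that because each share is subtracted from the running remainder, the rounded shares always add back up to the original amount:

```scala
// Hypothetical bill; distribute is the helper defined in the comment above.
val bill   = BigDecimal("17.35")
val shares = distribute(bill, 6)
// five shares of 2.89 and one of 2.90

// At these magnitudes the additions are exact even under the default context,
// so the parts sum back to the bill with no lost cents.
assert(shares.sum == bill)
```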
Before I explain what the code is doing that resulted in the new problems, some disclaimers first:
Now to explain why the problem happens in our codebase.

For this to work, when you have a BigDecimal number you split it into a rounded number and a rounding error.

Also, we resolved this in our codebase by duplicating BigDecimal and reverting the + and - behavior to the correct one. I'm just leaving you my thoughts on the matter, as I'm sure there were others affected by this :(
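A minimal sketch of the "split into a rounded number and a rounding error" idea described above, assuming two-decimal rounding; roundAndCarry is a hypothetical name, not the project's actual code:

```scala
import scala.math.BigDecimal.RoundingMode.HALF_UP

// Split a value into its two-decimal rounded part and the residual error,
// so the error can be carried into the next calculation instead of being lost.
def roundAndCarry(x: BigDecimal): (BigDecimal, BigDecimal) = {
  val rounded = x.setScale(2, HALF_UP)
  (rounded, x - rounded)
}

val (amount, carry) = roundAndCarry(BigDecimal("12.3456"))
// amount = 12.35, carry = -0.0044
```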
The new implementation is more consistent than the old one. In this one, every operation obeys

It's also slightly less likely to cause program failure: if you have enormous + 1/enormous, and unbounded precision on addition, you allocate a spectacular amount of memory, probably completely by accident.

However, the new way is less accurate in some calculations, and apparently your codebase requires accuracy there. How it doesn't still have mistakes is something of a mystery to me, given how hard it is to mix precise and imprecise operations.

It also occurs to me that I overstated how easy a switch is, because equality is cooperative, and nothing knows how to cooperate with a new variant of

Anyway, I'm sorry that the change has caused you difficulty. I hope there aren't lingering issues with equality. And hopefully someday we'll have a better story than

As I said, I think the new way is more consistent and therefore overall better, but I'm not sure it was wise to change the behavior given that there were ways to take advantage of the old behavior.
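To illustrate the memory point in passing, a rough sketch (the exact rendering of the results may differ between versions): with unbounded precision the digit count of a sum is driven by the span between the largest and smallest exponents, whereas a bounded context caps it, so under 2.13's semantics the tiny term simply rounds away:

```scala
import java.math.MathContext

val big  = BigDecimal("1e30",  MathContext.UNLIMITED)
val tiny = BigDecimal("1e-30", MathContext.UNLIMITED)

// Unbounded addition keeps every digit between the two exponents: this result
// carries roughly 61 significant digits. Scale the exponents up far enough
// and the allocation becomes spectacular, probably by accident.
val exact = big + tiny

// With the default DECIMAL128 context (34 digits), the tiny term is rounded
// away and the result stays small.
val capped = BigDecimal("1e30") + BigDecimal("1e-30")
```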
I think this was 1) a good change, and 2) totally fair game to change in a major Scala version (2.13). However, scala/scala#6884 should have been listed in the 2.13.0 release notes. I will go rectify that right now.
Hello,
The following code works with Scala 2.12.8 but it doesn't work with 2.13.0.
Scala 2.12.8 is ok:
But Scala 2.13.0 breaks (it truncates the value and reduces the scale, hence the error):
It breaks the first time on the step

```scala
bd6 + bd7 + bd8 + bd9
```

because the resulting value has reduced scale and value.
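For anyone trying to reproduce the class of failure described here, a hypothetical sketch (bd6 through bd9 and their values are invented for illustration, not the reporter's actual data, and assume that string construction keeps all digits): summing values whose exact total needs more than the default 34 significant digits gets rounded on 2.13, so the result's scale shrinks, while 2.12 kept the sum exact:

```scala
// Hypothetical values: each has 36 significant digits, more than DECIMAL128's 34.
val bd6 = BigDecimal("11111111111111111111.1111111111111111")
val bd7 = BigDecimal("22222222222222222222.2222222222222222")
val bd8 = BigDecimal("33333333333333333333.3333333333333333")
val bd9 = BigDecimal("44444444444444444444.4444444444444444")

val total = bd6 + bd7 + bd8 + bd9
// 2.12: 111111111111111111111.1111111111111110 (exact, scale 16)
// 2.13: each addition is rounded to 34 significant digits, so the trailing
//       decimals are lost and the scale of the result drops.
```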