-
In the current testers, the MPFR precision is already set close to the minimum required. Testing functions such as sinpi requires a precision of more than 1000 bits because that many digits of pi are needed for the argument reduction. However, since recent versions of MPFR provide sinpi and related functions natively, a precision above 1000 bits is no longer necessary to test those functions. That said, only a few test items are computed at high precision, so modifying the code for those functions would not reduce the test time by much.
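To illustrate why argument reduction needs so many digits of pi, here is a hedged pure-Python sketch using the stdlib `decimal` module (the `pi` and `sin` recipes are adapted from the `decimal` documentation; this is illustrative only, not SLEEF or MPFR code). Computing sin(pi*x) by first multiplying by a 53-bit pi destroys the argument for large x, while reducing x mod 2 exactly and only then multiplying by a high-precision pi recovers it:

```python
import math
from decimal import Decimal, getcontext

def dec_pi():
    """pi in the current decimal context (recipe from the decimal docs)."""
    getcontext().prec += 2
    lasts, t, s, n, na, d, da = 0, Decimal(3), 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = t * n / d
        s += t
    getcontext().prec -= 2
    return +s

def dec_sin(x):
    """Taylor-series sin(x) in the current decimal context (decimal docs recipe)."""
    getcontext().prec += 2
    i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s

def sinpi(x, digits=60):
    """sin(pi*x): reduce x mod 2 exactly, then multiply by a precise pi."""
    getcontext().prec = digits
    r = Decimal(x) % 2        # exact for any double x: no digits of pi involved
    return dec_sin(r * dec_pi())

x = 2.0**50 + 0.5             # exactly representable double; sin(pi*x) is exactly 1
naive = math.sin(x * math.pi) # 53-bit pi: the reduced argument is badly wrong
accurate = float(sinpi(x))    # exact reduction plus precise pi: rounds to 1.0
```

Here `naive` lands well away from 1, purely from the error in `x * math.pi`, while `accurate` rounds to 1.0. A tester that builds a sinpi reference out of a plain sin has to carry roughly 1000 extra bits of pi to cover the whole double range, whereas MPFR's native sinpi performs this reduction internally.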
-
Hello,
There have been a number of inaccuracies reported in issues, some of them reporting errors slightly above 1 ULP on `*_u10` routines. In #570 we seem to be in a case where MPFR calculates an error of `1 + eps` with `eps < unit roundoff`, so naturally `eps` gets rounded away once we display the final error in double precision.
I can understand the wish to represent errors in the working precision, in this case double precision, as we have to set a limit on the final representation of errors. It seems like an acceptable compromise, although some users might expect ULP errors to be strictly below the threshold, in this case `error < 1 ULP`.
What I'm wondering, though, is whether we actually need the MPFR computation to be that precise.
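The display rounding described above can be reproduced in isolation with a short pure-Python sketch (stdlib `decimal` standing in for MPFR's high precision; illustrative only, not the tester's code):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60            # plenty of digits, standing in for MPFR

unit_roundoff = Decimal(2) ** -53 # unit roundoff of IEEE double precision
eps = unit_roundoff / 2           # any 0 < eps < unit roundoff behaves the same
exact_err = Decimal(1) + eps      # ULP error a high-precision checker may compute
reported = float(exact_err)       # the same error once displayed as a double
```

`exact_err` is strictly greater than 1, yet `reported` compares equal to 1.0: the `eps` part falls below half an ULP of 1.0 and is rounded away in the conversion.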
Other libraries like Arm Optimized Routines only use an MPFR extended precision of 96 bits, which does not seem to limit accuracy assessment. I can dig to see what other libraries use.
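For a feel of why roughly 100 bits can suffice: an ULP check only needs the reference to be somewhat more accurate than the result under test. A hedged pure-Python sketch (stdlib `decimal` standing in for MPFR, `sin` recipe adapted from the `decimal` docs; valid only for moderate arguments, since it does no argument reduction):

```python
import math
from decimal import Decimal, getcontext

def dec_sin(x):
    """Taylor-series sin(x) in the current decimal context (decimal docs recipe)."""
    getcontext().prec += 2
    i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s

def sin_ulp_error(x, digits=30):
    """ULP error of math.sin(x) against a ~30-digit (~100-bit) reference."""
    getcontext().prec = digits
    ref = dec_sin(Decimal(x))     # extended-precision reference value
    got = Decimal(math.sin(x))    # exact decimal value of the double result
    return float(abs(got - ref)) / math.ulp(math.sin(x))

err = sin_ulp_error(0.5)
```

On a typical libm `err` comes out well below 1, and raising `digits` further does not meaningfully change the measured error, which matches the observation about Arm Optimized Routines' 96-bit reference precision.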
So I have these 2/3 questions: