Currently, the precision of `BigFloat`s is determined by a global variable stored in an array `DEFAULT_PRECISION`, which is manipulated by `set_bigfloat_precision`.
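For concreteness, a minimal sketch of the current behaviour, in which precision lives in hidden global state rather than in the values themselves:

```julia
set_bigfloat_precision(100)   # mutates the hidden global default
a = BigFloat(3.1)             # silently created with 100 bits
set_bigfloat_precision(200)
b = BigFloat(3.1)             # created with 200 bits; `a` keeps its 100
```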
This is not very Julian. I have been thinking about two possibilities:
- The precision is given explicitly, e.g. as a second argument to the various `BigFloat` constructors and to the `big` function:

  ```julia
  a = BigFloat(3.1, 100)
  b = big(3.1, 200)
  ```

  Here, though, the precision is still hidden inside the object.
- An arguably more Julian approach, which I would favour, is for the precision to be a parameter of the type, `BigFloat{prec}`, so that we could write:

  ```julia
  a = BigFloat{100}(3.1)
  b = BigFloat{200}(3.1)
  ```

  This version would have the advantage that operations on `BigFloat`s could explicitly track the precision of the objects being operated on (which MPFR explicitly states that it does not do). E.g., `a + b` in this example should have only 100 bits. (If I understand correctly, MPFR allows any precision to be specified for the result, but bits beyond 100 will be incorrect.) A sketch of what this could look like follows the list.
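To make the second option concrete, here is a minimal sketch of precision-tracking promotion. Everything in it is hypothetical: `BigFloatP` merely stands in for the proposed `BigFloat{prec}` (the existing `BigFloat` type cannot be redefined here), and it is written in present-day Julia syntax:

```julia
# Hypothetical sketch only: `BigFloatP` stands in for the proposed
# `BigFloat{prec}`; none of these definitions exist in Base.
struct BigFloatP{prec} <: AbstractFloat
    val::BigFloat
    # Round the input to exactly `prec` bits on construction.
    BigFloatP{prec}(x::Real) where {prec} = new(BigFloat(x, precision = prec))
end

# Mixed-precision addition: only min(P, Q) bits of the result are
# trustworthy, so the result type records exactly that.
function Base.:+(x::BigFloatP{P}, y::BigFloatP{Q}) where {P,Q}
    # compute the sum, then let the constructor round it to min(P, Q) bits
    BigFloatP{min(P, Q)}(x.val + y.val)
end

a = BigFloatP{100}(3.1)
b = BigFloatP{200}(3.1)
precision((a + b).val)   # 100: the result is a BigFloatP{100}
```

The point is only that the promotion rule `min(P, Q)` would live in the type system, so mixed-precision results are never silently presented with untrustworthy bits; this sketches the semantics, not an efficient implementation.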
Is there a consensus about whether either of these would be useful, or am I missing a reason why this would not be possible?
c.c. @simonbyrne, @andrioni, @lbenet