linspace(...)[1] can differ from linspace start value #14420
If this is the code

```julia
getindex{T}(r::LinSpace{T}, i::Integer) = (checkbounds(r, i); unsafe_getindex(r, i))
unsafe_getindex{T}(r::LinSpace{T}, i::Integer) = convert(T, ((r.len-i)*r.start + (i-1)*r.stop)/r.divisor)
```

then changing the endpoint handling there would address this. If the current code is deemed accurate enough, another option is to store adjusted values.
I wonder whether it's possible to solve for a floating point `mult` such that the interpolation formula exactly reproduces the values at the end points. Then you'd get the best of both worlds (at the cost of some additional setup), though a multiply would be required to get each element.
I'm not sure of a good way to adjust `mult`, `start`, and `end` using that formula, though.
It's not what I expected, but I get the same results locally (I put the code into functions and primed the JIT to make sure). Even implementing the linearly interpolated version in terms of multiplication (as I was proposing above) seems a bit slower than your version with a branch. It's a bit surprising that the branch really doesn't hurt performance; I checked the assembly and the branch does seem to have ended up in the inner loop, so I guess it's just perfectly predictable. I have a hunch that the linearly interpolated version should be better behaved numerically due to potential cancellation error in computing the step.
I guess the JIT could generate a CMOV instead of a branch, but the branch does seem fast enough.
The only inaccuracy I found is with a very small start and large end, or vice versa:
The error is proportional to the smaller endpoint. For this particular error to crop up, all of the other elements will be much larger than the smaller endpoint, so the relative error is tiny.
Hmm, looks like this problem has quite some history already: #2333 #9637 and probably several other issues. I expect the aim of linspace should be to produce something as close as possible to the correctly rounded linear interpolation between start and end, hitting the end points exactly. Perhaps that's just me though. The single sided version does have a nasty failure case when the endpoints differ greatly in magnitude:

```julia
julia> x1,xN,N = -1f6, 0.1f0, 1e6
(-1.0f6,0.1f0,1.0e6)

julia> function lerp_onesided{T}(x1::T, xN::T, N, i)
           mult = (xN-x1)/(N-1)
           x1 + mult*(i-1)
       end
lerp_onesided (generic function with 1 method)

julia> (lerp_onesided(x1,xN,N,N) - xN)/xN
0.2499999802093956

julia> (linspace(x1,xN,N)[Int(N)] - xN)/xN
0.0f0
```
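For what it's worth, the same endpoint miss shows up in double precision. Here is a quick Python transliteration of the one-sided formula above (not Base code, just a sketch; IEEE-754 doubles behave the same as Julia's Float64 here):

```python
def lerp_onesided(x1, xN, N, i):
    # One-sided linear interpolation: accumulate a step away from x1 only.
    mult = (xN - x1) / (N - 1)
    return x1 + mult * (i - 1)

x1, xN, N = -1e6, 0.1, 10**6
# The first point is exact by construction...
print(lerp_onesided(x1, xN, N, 1) == x1)   # True
# ...but the last point misses xN: the cancellation in x1 + mult*(N-1)
# leaves a result on a far coarser grid than ulp(0.1).
print(lerp_onesided(x1, xN, N, N) == xN)   # False
```

The final addition `x1 + mult*(N-1)` is exact, but it lands on a multiple of `ulp(1e6)`, which can never coincide with the double closest to 0.1.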
With a simple use of `nextfloat`/`prevfloat` to adjust the stored endpoints:

```julia
function lerp_fix{T}(x1::T, xN::T, N, i)
    # Setup. Compute values at end points.
    mult = one(T)/T(N-1)
    a = T(N-1)*mult*x1
    b = T(N-1)*mult*xN
    # If they don't match, adjust so that linear interpolation formula exactly
    # hits the ends of the range.
    if a != x1
        x1 = a < x1 ? nextfloat(x1) : prevfloat(x1)
    end
    if b != xN
        xN = b < xN ? nextfloat(xN) : prevfloat(xN)
    end
    # The actual computation at index `i`
    T(N-i)*mult*x1 + T(i-1)*mult*xN
end
```

I haven't had time to test the above as extensively as I'd like yet, but it seemed to work when I tried it for a large number of log and uniformly distributed end points and range sizes.
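A rough Python transliteration of `lerp_fix`, using `math.nextafter` (Python 3.9+) in place of `nextfloat`/`prevfloat`, may be handy for experimenting outside Julia. This is a sketch for exploration, not the proposed Base code:

```python
import math

def lerp_fix(x1, xN, N, i):
    # Setup: compute the formula's values at the end points.
    mult = 1.0 / (N - 1)
    a = (N - 1) * mult * x1
    b = (N - 1) * mult * xN
    # If they don't match, nudge the stored endpoints by one ulp so the
    # interpolation formula hits the ends of the range exactly.
    if a != x1:
        x1 = math.nextafter(x1, math.inf if a < x1 else -math.inf)
    if b != xN:
        xN = math.nextafter(xN, math.inf if b < xN else -math.inf)
    # The actual computation at index i (two-sided weighted combination).
    return (N - i) * mult * x1 + (i - 1) * mult * xN
```

For example, `lerp_fix(0.0, 0.3, 4, 4)` returns `0.3` exactly, and intermediate points come from a weighted combination of both endpoints rather than an accumulated step.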
That seems good. I can't find a proof, but it does seem that the error in `x*(1.0 / x)` is at most 1 in the mantissa, so either `nextfloat` or `prevfloat` should be able to fix the endpoints. If there's not a strong argument that this always works, it might be good to check that the updated endpoints are correct at construction time.
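The claim that `x*(1.0/x)` stays within one ulp of 1 can at least be spot-checked empirically. A small Python sketch over random doubles (evidence, not a proof; the exponent range is an arbitrary choice that avoids overflow and subnormals):

```python
import math
import random

random.seed(1)
for _ in range(100_000):
    # Random double with a wide spread of exponents.
    x = math.ldexp(random.uniform(1.0, 2.0), random.randint(-500, 500))
    y = x * (1.0 / x)
    # One ulp above 1.0 is 2**-52; one ulp below is 2**-53.
    assert abs(y - 1.0) <= 2**-52
print("no violation found")
```

A standard rounding argument gives the same bound: `fl(1/x)` has relative error at most `2**-53`, so the exact product lies in `[1 - 2**-53, 1 + 2**-53]`, and the final rounding cannot move it past the neighboring floats of 1.0.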
I think a proof of this is required if it's going to go into julia Base. So far I've come to a similar conclusion - first prove that the error in the endpoint computation is at most 1ulp, then prove that adjusting the endpoints by 1ulp is enough to correct such an error. I haven't tried anything like this before, so I'm making slow progress. (Ok, I should probably just read the introduction in a good book on numerical analysis and it might be obvious. Suggestions welcome.) I have a few observations which might be useful.
I think there's a way to fix this and retain the current behavior.
@StefanKarpinski Isn't that a separate issue?
It's related because the two share the same underlying approach.
Btw, the potential solution that I have in mind for this doesn't require ditching the existing approach.
I'm not sure which metric is meant by "get all the intermediate points right". E.g. with @c42f's example above I get

```julia
immutable MyRange{T<:FloatingPoint}
    x1::T
    xn::T
    n::Int
end

Base.getindex(r::MyRange, i::Integer) = (r.n - i)/(r.n - 1)*r.x1 + (i - 1)/(r.n - 1)*r.xn
```

```julia
julia> x1,xN,N = -1f6, 0.1f0, 10^6

julia> fr1 = MyRange(x1,xN,N);

julia> fr2 = linspace(x1,xN,N);

julia> br1 = MyRange(BigFloat(-1.0e6),BigFloat(0.1f0),N);

julia> norm(Float64[fr1[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, Inf)
2.220446049250313e-16

julia> norm(Float64[fr1[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, 2)
9.650009737328148e-14

julia> norm(Float64[fr1[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, 0)
249953.0

julia> norm(Float64[fr2[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, Inf)
1.635118732634666e-7

julia> norm(Float64[fr2[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, 2)
4.3156720262505825e-5

julia> norm(Float64[fr2[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, 0)
999998.0
```
Passing these tests is what I mean.
Yes, but that seems like an arbitrary criterion. What are they testing, exactly?
...to be more specific: it seems wrong to me not to respect the actual floating point value that is passed to `linspace`.
I completely agree that this issue is a bug and should be fixed. But why is passing all of those tests I linked to arbitrary? Those test cases are tricky, but do you disagree that they are the desired behavior?
I'm not sure what the rule for the tests is. E.g. I'm puzzled by the test

```julia
[linspace(0.0,0.3,4);] == [0:3;]./10
```

The test seems to suggest that the result should be `float(0//10), float(1//10), float(2//10), float(3//10)`, but I think that is wrong. The range should be computed for the binary values actually passed in. I'm not sure if it applies to all the tests, but the test just mentioned seems to compute the range for the closest decimal representation and then round to binary; when our floating points are binary that seems wrong to me. I think the right behavior would be to use the binary floating point values that are passed to the constructor.
This is what matches our range behavior. Does that seem wrong to you as well? Or do you just think that we should accept that linspace and ranges are going to give different results? Note that there's nothing inherently decimal about what this algorithm does – it looks for rational "liftings" that work for both endpoints, does the computation in the lifted space with integers, and then projects back down.
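The lifting idea can be sketched outside Julia too. The snippet below is a hypothetical illustration using Python's `fractions`, not `Base.rat`; the `lift` helper and the `2**24` denominator cap are assumptions made for the sketch, not the actual Base implementation:

```python
from fractions import Fraction

def lift(x, max_den=2**24):
    # Find a small-denominator rational whose nearest double is exactly x.
    f = Fraction(x).limit_denominator(max_den)
    return f if float(f) == x else None

def linspace_lifted(start, stop, n):
    a, b = lift(start), lift(stop)
    if a is None or b is None:
        return None  # a real implementation would fall back to plain floats
    # Interpolate exactly in rational arithmetic; round each point only once.
    return [float(((n - 1 - i) * a + i * b) / (n - 1)) for i in range(n)]

print(linspace_lifted(0.0, 0.3, 4))  # [0.0, 0.1, 0.2, 0.3]
```

The key point is the exact-equality check in `lift`: the rational is only accepted if it reproduces the input float bit-for-bit, so no approximation of the user's endpoints occurs.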
Unfortunately, yes. I think I've now understood what is going on in the rational lifting. This approximation of the end points happens to produce something that is closer to the decimal representation that people are used to and what is used for I/O, because users most often type a rational number with a fairly small denominator. Sometimes, though, this approach can be pretty far off:

```julia
julia> Rational(Base.rat(0.33333333)...) - 0.33333333
3.333333331578814e-9
```

Effectively we are treating floating point numbers differently for ranges compared to any other floating point function. The fact that decimal numbers are used for I/O, but internally are stored in binary, has caused much confusion and will continue to do so, but I don't think we are simplifying things by obscuring this in the float ranges. Since a floating point number is in fact a rational number, we could still compute an exact rational representation, e.g.

```julia
julia> 0.3 |> t -> Rational(Int(2^52*significand(t)), 2^(52-exponent(t))) |>
       t -> float(collect(0:t/3:t))
4-element Array{Float64,1}:
 0.0
 0.1
 0.2
 0.3
```

but this would probably overflow in many cases. So what about float ranges constructed from a step argument? As argued by Guido van Rossum and also mentioned in the original float range issue (#2333), this might not be a good API for float ranges. So if we really want to keep float ranges constructed from a step size, I think the only good alternatives are either to continue explaining how binary floating points work (maybe with a bot) when people get something like

```julia
julia> v=.1:.1:.3
0.1:0.1:0.2
```

or to switch to decimal floating points by default, which will probably give fewer surprises, but certainly much slower code.
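The `v=.1:.1:.3` surprise is the familiar binary-representation issue. The same effect reproduces in any IEEE-754 double arithmetic, e.g. in Python:

```python
# Accumulating the step overshoots the requested stop, so a naive
# length computation concludes that 0.3 is unreachable and drops it.
print(0.1 + 0.1 + 0.1)         # 0.30000000000000004
print(0.1 + 0.1 + 0.1 <= 0.3)  # False
```

Neither 0.1 nor 0.3 is exactly representable in binary, and the accumulated sum of three rounded 0.1s lands just above the rounded 0.3.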
Your understanding is close but not quite correct. There is no approximation occurring; rather, given the range's components, rational values are searched for that reproduce them exactly. Note the exact equality – these are never approximated; the rational values have to give the float range components exactly. If such rational values are found, computations are done so that the result is the correctly rounded result of what you'd get from doing the computation with those exact rational values. The alternative to doing something like this for ranges is to insist that users accept raw binary floating point behavior. The fact that the endpoints for LinSpace are not exact is a bug and not related to any approximation.
For the record, I've managed to prove part of what's needed. In the meantime I need to read the existing algorithm in detail (it sounds really neat, but could sure use some comments!).
It looks to me like the existing expression can't be fixed up just by moving the endpoints. At an endpoint:

```julia
x = 0.10000000000000003
# Some candidate endpoints adjacent to x
x_ = [prevfloat(prevfloat(x)), prevfloat(x), x, nextfloat(x), nextfloat(nextfloat(x))]
M = 49.0

julia> (x_*M)/M - x
5-element Array{Float64,1}:
 -2.77556e-17
 -1.38778e-17
 -1.38778e-17
  1.38778e-17
  2.77556e-17
```

None of the candidate endpoints round-trip through the multiply and divide exactly.
On further thought, the above counter example only shows that a new endpoint can't be solved for when the divisor is fixed. If divisor is allowed to vary (as it does when attempting the rational lifting), it may still be possible to solve simultaneously for a noninteger divisor and new endpoints so that everything works.
I'm pretty sure that does work but as I said, I haven't proved it.
Fair enough. To summarize progress so far: with the existing system, if the rational lifting fails we fall back to using the floating point begin and end provided. What's clear is that this fallback will need to be replaced, because it doesn't always reproduce the endpoints. The example above shows that keeping the fallback value for divisor and modifying the endpoints by 1ulp also won't work. The question now is whether scaling the divisor while also scaling/modifying the endpoints will work.
I got the chance to read the existing code in more detail and think a little more about the interaction between the rational lifting and the fallback path. I agree with @andreasnoack that constructing float ranges from a step argument is generally a poor API. How about removing it? Rather than hack directly in Base I've put together some bits and pieces at https://github.com/c42f/LinSpaces2.jl. Some tests that this solves the current issue are included, but there's a long way to go to have the same level of consistency with FloatRange. I've also included some quick & dirty benchmarks which may also be relevant to #13401.
I chose this value because FloatRange applies to Float32, Float64, BigFloat, etc. You don't really want the interpretation of an endpoint to depend on the types given and Float32 is the smallest "computational" floating-point type we support. Also, realistically, if the rational approximation of a floating-point value has numerator or denominator larger than 16 million, that's probably not what was intended. |
It already depends on the types. The rational is computed partially using floating point arithmetic, and required to round correctly only at the epsilon of the floating point type you put in. I think this type dependence is an inherent aspect of the current algorithm, since the rationals are expanded only to within the precision of the epsilon of a particular floating point type. Here's the way I look at it: which of these two cases is most important depends on your prior belief about the distribution of real numbers which are coming into the constructor. To me, the current behavior of trying to round correctly but also make an approximation which minimizes the rational number description length is a rather clever sweet spot and I doubt I would have thought of it myself. It also introduces some constraints on the possible solutions to this bug which are very hard to meet :-)
After some more experimentation I must admit you're spot on with this. Even making careful deterministic adjustments didn't always work. Luckily, it seems that it is always possible to adjust the scaling multiplier so that both endpoints round-trip exactly, by searching:

```julia
m = 1.0
for iters = 1:10000
    m = rand()
    a = start*m
    c = stop*m
    s = n*m
    if a*n/s != start
        a = a*n/s > start ? prevfloat(a) : nextfloat(a)
    end
    if c*n/s != stop
        c = c*n/s > stop ? prevfloat(c) : nextfloat(c)
    end
    if a*n/s == start && c*n/s == stop
        return LinSpace3(a, c, len, s)
    end
end
```

It's not possible to make every multiplier work, but a suitable one always seems to turn up quickly. I'll think about how to make this deterministic and avoid iterating.
Just an update on this one - I've come back to looking at it and have tried several different tacks. The task as laid out above is to simultaneously solve for adjusted endpoints and divisor such that the interpolation formula reproduces the ends of the range exactly. I think this is doable with careful backward error analysis. I also tried using compensated floating point arithmetic to get very close to correct rounding across the whole range. It does work, but would require several times more floating point operations in `getindex`.
Implement StepRangeHiLo. Split out TwicePrecision utilities from basic Range functionality. Define AbstractTime numeric traits (needed for tests with new ranges on 32-bit CPUs). New range performance optimizations. Add StepRangeLen to stdlib docs [ci skip]. Misc cleanups for range code. Fix printing bug with range of non-scalars. Simpler implementation of unsafe_getindex for LinSpace. Conflicts: base/dates/types.jl, base/deprecated.jl, base/exports.jl, base/float.jl, base/operators.jl, base/range.jl, doc/src/stdlib/math.md, test/ranges.jl
Demo:

I think the error here is in `unsafe_getindex(::LinSpace, ::Integer)`, which assumes `b*a/b == a`, which unfortunately isn't true. Probably needless to say, this is a big problem for numerical code, which should be able to assume `LinSpace` will exactly preserve the start and end values.
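The `b*a/b == a` failure is easy to reproduce with the counterexample values from the thread. A quick Python check (IEEE-754 doubles, so the arithmetic matches Julia's Float64):

```python
x = 0.10000000000000003  # endpoint from the counterexample above
M = 49.0                 # plays the role of the len/divisor scaling
# Multiplying and dividing by the same value does not round-trip:
print((x * M) / M == x)  # False
print((x * M) / M - x)   # nonzero: about one ulp of 0.1
```

Each of the two operations rounds independently, and the two rounding errors need not cancel, which is exactly why `unsafe_getindex` can miss the stored endpoints.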