exp() accuracy is severely affected when using --math-mode=fast #41592

Closed
robsmith11 opened this issue Jul 15, 2021 · 4 comments

Comments

@robsmith11
Contributor

robsmith11 commented Jul 15, 2021

As discussed on the forum, the accuracy of exp() suffers when using fast-math because LLVM optimizes away a numerical trick used for rounding:
https://discourse.julialang.org/t/whats-going-on-with-exp-and-math-mode-fast/64619/21

While some inaccuracy is expected when using fast-math, it seems like this specific situation could be improved by instructing LLVM not to optimize away the numerical rounding trick.
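
For reference, here is the magic-round-constant trick in isolation (a minimal sketch; the helper name is illustrative, and the constant is the standard double-precision rounding constant 2^52 + 2^51):

# Adding 2^52 + 2^51 pushes the fractional bits of x off the end of
# the significand, so they are rounded away; subtracting the constant
# back leaves round(x). Under fast-math, LLVM may simplify
# (x + C) - C to just x, which is exactly what breaks exp() here.
const MAGIC = 6755399441055744.0  # 2^52 + 2^51

round_via_magic(x::Float64) = (x + MAGIC) - MAGIC

round_via_magic(2.3)  # 2.0 under strict IEEE semantics
round_via_magic(2.7)  # 3.0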

There's probably a better way to do this, but I was able to get exp() working with fast-math by replacing the subtraction:
N_float -= MAGIC_ROUND_CONST(T)
with
N_float = forcesub(N_float, MAGIC_ROUND_CONST(T))
where

function forcesub(a::Float64, b::Float64)
    # A bare IR fsub carries no fast-math flags, so LLVM cannot
    # reassociate or fold it away under --math-mode=fast.
    Base.llvmcall("%3 = fsub double %0, %1\n ret double %3",
                  Float64, Tuple{Float64,Float64}, a, b)
end

Before (with --math-mode=fast):

julia> exp(1.0)
2.7158546124258023

julia> exp(1e-3)
1.0

After (with --math-mode=fast):

julia> exp(1.0)
2.718281828459045

julia> exp(1e-3)
1.0010005001667084
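
For completeness, the analogous Float32 wrapper would presumably look like this (a sketch, not from the issue; the IR numbering follows llvmcall's convention that the arguments are %0 and %1 and the implicit entry block takes %2):

function forcesub(a::Float32, b::Float32)
    # Same idea as the Float64 version: a bare fsub with no
    # fast-math flags attached.
    Base.llvmcall("%3 = fsub float %0, %1\n ret float %3",
                  Float32, Tuple{Float32,Float32}, a, b)
end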
@antoine-levitt
Contributor

There are a couple of open issues about just getting rid of math-mode=fast; this issue is more likely to reinforce that position than anything else.

To save math-mode=fast, we'd need a better way of doing this type of hack. Something like a @nofastmath macro would be very useful for this: placing it in front of the few functions that rely on strict floating-point semantics could make math-mode=fast usable. There are some concerns about inference in #25028 that I don't understand, however.
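
For contrast, a minimal sketch of the existing opt-in model using the real Base.@fastmath macro; the proposed @nofastmath would be its inverse:

# Opt-in: only this expression gets fast-math flags, while the rest
# of the program keeps strict IEEE semantics.
f(x) = @fastmath (x + 1.0) - 1.0   # LLVM may reassociate this to x

# A @nofastmath would do the opposite: restore strict semantics
# inside code otherwise compiled with --math-mode=fast.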

@chriselrod
Contributor

chriselrod commented Jul 15, 2021

Not as bad as sinpi and cospi:

julia> x = randn(10)'
1×10 adjoint(::Vector{Float64}) with eltype Float64:
 0.411952  -1.20621  -0.780723  0.69826  -1.60801  -1.88011  0.363784  0.663141  -0.874339  0.613377

julia> sinpi.(x)
1×10 Matrix{Float64}:
 0.0  -0.0  -0.0  0.0  -0.0  -0.0  0.0  0.0  -0.0  0.0

julia> cospi.(x)
1×10 Matrix{Float64}:
 1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0

There's probably a better way to do this, but I was able to get exp() working with fast-math by replacing the subtraction:

That's a hack to round quickly over the range of numbers for which exp is valid. So as @oscardssmith noted, unsafe_trunc will work as well.
But I think the llvmcall approach is also reasonable.
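
A sketch of what the unsafe_trunc route might look like (the helper name and rounding bias are illustrative, not the actual Base code):

# Compute the nearest integer with unsafe_trunc instead of the magic
# constant; safe here because exp's reduced argument is always far
# inside the Int64 range.
function round_for_exp(x::Float64)
    n = unsafe_trunc(Int64, x + copysign(0.5, x))  # round half away from zero
    return n, Float64(n)
end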

@JeffBezanson
Member

There are some concerns about inference in #25028 that I don't understand, however.

I believe it's that the compiler might constant-fold a call to e.g. exp, and then starting Julia with a different fast-math setting invalidates that optimization, but we will not recompile the code.
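
A minimal illustration of that concern (a sketch):

# If this is compiled and constant-folded under one math mode, the
# cached result bakes in that mode's value of exp(1.0); starting
# Julia with a different --math-mode leaves the cache silently stale.
g() = exp(1.0)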

I'm not sure this issue is really valid. There is no "spec" for what constitutes an acceptable result under fast-math, and no way to implement such a spec either.

@c42f
Member

c42f commented Sep 30, 2022

Closed by #41638

@c42f closed this as completed Sep 30, 2022