
Add Julia ports of hyperbolic (including inverse) functions from openlibm. #24159

Merged: 28 commits merged into JuliaLang:master on Feb 14, 2018

Conversation

@pkofod (Contributor) commented Oct 15, 2017

Still need to fix the license stuff and add benchmarks to BaseBenchmarks.

Edit: since I added some of the inverse hyperbolics as well, their tests are still missing.

@ararslan added the "maths (Mathematical functions)" and "potential benchmark (Could make a good benchmark in BaseBenchmarks)" labels on Oct 15, 2017
# 2. Find the branch and the expression to calculate and return it
# a) x <= COSH_SMALL_X
# return T(1)
# b) 0 <= x <= ln2/2
Contributor

Maybe COSH_SMALL_X <= x <= ln2/2? ;-)

Contributor Author

That would make sense :)
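
For context, a rough sketch of the branch structure those comments describe might look like the following (this is not the PR's implementation; the small-x cutoff below is a placeholder value):

# Sketch of the cosh branch selection discussed above; COSH_SMALL_X_SKETCH is
# a placeholder cutoff, not the constant used in the PR.
const COSH_SMALL_X_SKETCH = 2.0^-27

function cosh_sketch(x::Float64)
    absx = abs(x)
    if absx <= COSH_SMALL_X_SKETCH
        return 1.0                     # a) tiny |x|: cosh(x) rounds to 1
    elseif absx <= log(2)/2
        t = expm1(absx)                # b) small |x|: expm1 keeps full accuracy
        return 1 + t*t/(2*(1 + t))     # cosh(x) = 1 + (e^|x| - 1)^2 / (2e^|x|)
    else
        t = exp(absx)                  # c) larger |x|: direct formula
        return (t + 1/t)/2
    end
end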

@giordano (Contributor)

If these functions are faster than the fastmath ones, you may also want to change them :-)

@pkofod (Contributor Author) commented Oct 16, 2017

If these functions are faster than the fastmath ones, you may also want to change them :-)

Yes I will! I know I missed it with some of the others. Let's benchmark first :)
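
For what it's worth, a quick local comparison can be run with BenchmarkTools along these lines (a hedged sketch; the direct openlibm binding below is an assumption for illustration, not code from this PR):

using BenchmarkTools

# Call openlibm's C cosh directly, to compare against the pure-Julia port.
cosh_libm(x::Float64) = ccall((:cosh, "libopenlibm"), Float64, (Float64,), x)

x = 0.7
@btime cosh($x)       # pure-Julia port from this PR
@btime cosh_libm($x)  # openlibm's C implementation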

# is preserved.
# ====================================================

_ldexp_exp(x::Float64, i::T) where T = ccall(("__ldexp_exp", libm), Float64, (Float64, T), x, i)
Contributor

Just a quick comment: this should just use the Julia ldexp function. If you don't need the error handling, you can write a _ldexp function and remove the error/exception handling from ldexp.

Contributor Author

Oh! I looked for a Julia binding/version, but I must have mistyped the name when searching.

Contributor

Is this worth changing now?

@pkofod (Contributor Author) Feb 14, 2018

Unfortunately, this function is actually not in Julia yet, but yes, it is worth changing, and it should be a simple function to port.

edit: to clarify, this requires porting 7-8 lines of code if I recall correctly. It should be in the k_exp.c file, and I might actually have a port locally already.

edit2: yeah, I had them locally. I'll make a PR tomorrow.

Contributor

is this not the function @musm was suggesting?

@pkofod (Contributor Author) Feb 14, 2018

no, that computes x*2^n; what is needed here is exp(x)*2^n

edit: as I said, I'll make a PR tomorrow, as I'm off to bed now, but it goes something like the following, although it will not run because I have not included a few of the helper functions in the gist.

https://gist.github.com/pkofod/75a237b43cbcb82ea26410aecd8dc4ca

edit2: I know the docstring is wrong for the second function... I'm still mid-PR-preparation, but I'm too tired to continue today.
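
To illustrate the idea (a minimal sketch, not the code from the gist; the argument reduction below is just one way to compute exp(x)*2^n without overflowing in the intermediate exp):

# Sketch of a pure-Julia __ldexp_exp: returns exp(x) * 2^n without letting
# exp(x) overflow, by folding part of the exponent into the 2^n scaling.
function _ldexp_exp_sketch(x::Float64, n::Integer)
    k = round(Int, x / log(2))    # choose k so that r = x - k*log(2) is small
    r = x - k * log(2)
    return ldexp(exp(r), n + k)   # exp(x) * 2^n == exp(r) * 2^(n + k)
end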

# acosh methods
ACOSH_LN2(T) = 6.93147180559945286227e-01
ACOSH_LN2(T) = 6.9314718246e-01
AH_LN2(T::Type{Float64}) = 6.93147180559945286227e-01
Member

No need for T

@@ -172,9 +176,41 @@ end
tanh(x::Real) = tanh(float(x))

# Inverse hyperbolic functions
# asinh methods
AH_LN2(T) = 6.93147180559945286227e-01
Member

Can remove?


H_LARGE_X(::Type{Float32}) = 88.72283f0
H_OVERFLOW_X(::Type{Float32}) = 89.415985f0
function sinh(x::T) where T <: Union{Float32, Float64}
@stevengj (Member) Oct 17, 2017

Wouldn't it be cleaner to use the syntax sinh(x::Union{Float32, Float64}) for functions like this? Oh, nevermind, you use T in the function body.

@pkofod (Contributor Author) Oct 17, 2017

Yes, I use these to pick the branching cutoffs. Do you think it would be cleaner to write sinh(x::Union{Float32, Float64}) and then just define H_LARGE_X(::Float32) = 88.72283f0, dispatching on the actual x instead of the type of x? It should compile down to the same thing, right?

Member

I think it's fine as-is.
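
For reference, the two spellings discussed above look roughly like this (a sketch; the threshold values are the ones appearing in this PR's diff and tests, and the simplified predicates are illustrative):

# Style used in the PR: thresholds dispatch on the type.
H_LARGE_X(::Type{Float32}) = 88.72283f0
H_LARGE_X(::Type{Float64}) = 709.7822265633563

is_large_a(x::T) where T <: Union{Float32, Float64} = x > H_LARGE_X(T)

# Alternative raised in the question: thresholds dispatch on the value itself.
h_large_x(x::Float32) = 88.72283f0
h_large_x(x::Float64) = 709.7822265633563

is_large_b(x::Union{Float32, Float64}) = x > h_large_x(x)

# Both versions specialize per concrete type at compile time, so they should
# lower to the same machine code.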

test/math.jl Outdated
by = tanh(big(x))
@test Float32((tanh(x) - by))/eps(abs(Float32(by))) <= 1
bym = tanh(big(-x))
@test Float32(abs(tanh(-x) - bym))/eps(abs(Float32(bym))) <= 1
Member

No tests for e.g. acosh?

In general, it would be good to make sure you have test coverage for all of the branches.

Contributor Author

Maybe I should have marked this as WIP. All of these will get tests and benchmarks; sorry for not making that more clear.

Contributor Author

The inverse functions were not there originally, but I decided to add them here and just haven't gotten the tests in yet.

@pkofod changed the title from "Add Julia ports of hyperbolic functions from openlibm." to "Add Julia ports of hyperbolic (including inverse) functions from openlibm." on Oct 17, 2017
@pkofod (Contributor Author) commented Nov 23, 2017

I'm not using __ldexp_exp from openlibm anymore; I will implement these in another PR.

All functions now have tests. Still need benchmarks and the license attribution.

@pkofod (Contributor Author) commented Nov 24, 2017

Benchmark PR: JuliaCI/BaseBenchmarks.jl#141

@StefanKarpinski (Member)

If either @simonbyrne or @stevengj has a look at this and approves, it's good to squash-merge!

test/math.jl Outdated
end
for x in (pcnfloat(2.7755602085408512e-17)..., pcnfloat(22.0)..., pcnfloat(709.7822265633563)...)
by = cosh(big(x))
@test Float64((cosh(x) - by))/eps(abs(Float64(by))) <= 1
@stevengj (Member) Feb 12, 2018

Would be clearer to use @test cosh(x) ≈ cosh(big(y)) rtol=eps(Float64) here and similarly in the other tests.

Contributor Author

big(y) is meant to be big(x), right?

test/math.jl Outdated
by = acosh(big(x))
@test Float64((acosh(x) - by))/eps(abs(Float64(by))) <= 1
end
for x in (nextfloat(1f0), pcnfloat(2f0)..., pcnfloat(2f0^28)...)
Member

Couldn't you merge these two loops by putting them into the for T loop as

for x in (nextfloat(T(1)), pcnfloat(T(2))..., pcnfloat(T(2^28))...)
    @test acosh(x) ≈ acosh(big(x)) rtol=eps(T)
end
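
(For readers following along: pcnfloat is a small test helper, presumably returning the value together with its neighbouring floats, along the lines below; the exact definition is an assumption, not quoted from the PR.)

# Assumed shape of the pcnfloat test helper: previous float, the value itself,
# and the next float, so each threshold is exercised on both sides.
pcnfloat(x) = (prevfloat(x), x, nextfloat(x))

pcnfloat(2.0)   # (1.9999999999999998, 2.0, 2.0000000000000004)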

test/math.jl Outdated
by = tanh(big(x))
@test Float64((tanh(x) - by))/eps(abs(Float64(by))) <= 1
bym = tanh(big(-x))
@test Float64(abs(tanh(-x) - bym))/eps(abs(Float64(bym))) <= 1
Member

@test tanh(x) ≈ tanh(big(x)) ≈ -tanh(x) rtol=eps(Float64)

Member

You could merge the for x loops into

for x in Iterators.flatten(pcnfloat.(T == Float64 ? [2.0^-28,1.0,22.0] : [2f0^-12,1f0,9f0]))
    @test tanh(x) ≈ tanh(big(x)) ≈ -tanh(x) rtol=eps(T)
end

acosh(x::Real) = acosh(float(x))

# atanh methods
@noinline atanh_domain_error(x) = throw(DomainError(x, "atanh(x) is only defined for x <= 1."))
Member

for |x| ≤ 1

asinh(x::Real) = asinh(float(x))

# acosh methods
@noinline acosh_domain_error(x) = throw(DomainError(x, "acosh(x) is only defined for x >= 1."))
Member

Unicode is a bit nicer

@pkofod (Contributor Author) commented Feb 12, 2018

Thanks, @stevengj! I hope I've addressed your comments.

test/math.jl Outdated
@test sinh(-T(1000)) === -T(Inf)
@test isnan_type(T, sinh(T(NaN)))
end
for x in (pcnfloat(2.0^-28)..., pcnfloat(22.0)..., pcnfloat(709.7822265633563)...)
Member

Seems like it would be better to use SINH_SMALL_X(T), H_MEDIUM_X(T), etcetera (importing those symbols into the test). That way it will unify the tests for both T, make it clearer where these thresholds come from, and ensure the tests are updated if the thresholds change (or if the sinh code is rewritten).
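
A sketch of what that could look like (the constant names follow the review comment and their location in Base.Math is an assumption; pcnfloat as sketched earlier):

using Test
# Assumed: the threshold constants live next to the sinh implementation.
import Base.Math: SINH_SMALL_X, H_MEDIUM_X, H_LARGE_X

pcnfloat(x) = (prevfloat(x), x, nextfloat(x))   # test helper, as sketched above

for T in (Float32, Float64)
    for x in Iterators.flatten(pcnfloat.([SINH_SMALL_X(T), H_MEDIUM_X(T), H_LARGE_X(T)]))
        @test sinh(x) ≈ sinh(big(x)) rtol=eps(T)
    end
end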

test/math.jl Outdated
@test isnan_type(T, sinh(T(NaN)))
end
for x in (pcnfloat(2.0^-28)..., pcnfloat(22.0)..., pcnfloat(709.7822265633563)...)
@test sinh(x) ≈ sinh(big(x)) rtol=eps(Float64)
Member

Seems simpler to do @test sinh(x) ≈ sinh(big(x)) ≈ -sinh(x) rtol=eps(T), but it looks like this syntax isn't supported by @test. 😢

Contributor Author

Yeah, you suggested that above and I tried it, but without luck, so I just split it into two tests.

@stevengj mentioned this pull request Feb 13, 2018
test/math.jl Outdated
@test asinh(nextfloat(zero(T))) === nextfloat(zero(T))
@test asinh(prevfloat(zero(T))) === prevfloat(zero(T))
@test isnan_type(T, asinh(T(NaN)))
for x in Iterators.flatten(pcnfloat.(T == Float64 ? [2.0^-28,2.0,2.0^28] : [2f0^-12,2f0,22f28]))
Member

I'm confused about this conditional. Looking back at the asinh code, it seems like it uses 2^-28 and 2^28 for both single and double precision. 2^-12 is only used for tanh.

Again, this would be clarified by using TANH_SMALL_X(T) etcetera here for precision-dependent thresholds.

Contributor Author

This is a good idea. Originally I was unsure whether there were going to be tests in separate commits before the actual change (just to make sure nothing changed), but I guess this will be squashed anyway, so I might as well just use the thresholds from the actual code.

test/math.jl Outdated
@@ -907,7 +901,7 @@ end
@test asinh(nextfloat(zero(T))) === nextfloat(zero(T))
@test asinh(prevfloat(zero(T))) === prevfloat(zero(T))
@test isnan_type(T, asinh(T(NaN)))
for x in Iterators.flatten(pcnfloat.(T == Float64 ? [2.0^-28,2.0,2.0^28] : [2f0^-12,2f0,22f28]))
for x in Iterators.flatten(pcnfloat.([T(2)^-28,T(2),T(2)]))
Member

The second T(2) doesn't look right. I assume you mean T(2)^28?

test/math.jl Outdated
@@ -919,7 +913,7 @@ end
@test_throws DomainError acosh(T(0.1))
@test acosh(one(T)) === zero(T)
@test isnan_type(T, acosh(T(NaN)))
for x in (nextfloat(T(1)), pcnfloat(T(2))..., pcnfloat(T(2^28))...)
for x in Iterators.flatten(pcnfloat.([T(1) T(2), T(2^28)]))
@stevengj (Member) Feb 13, 2018

Shouldn't it be Iterators.flatten([nextfloat(T(1)); pcnfloat.([T(2), T(2^28)])])? Or [nextfloat(T(1)), pcnfloat(T(2))..., pcnfloat(T(2^28))...]

Contributor Author

I seem to have pushed without this fix.

@pkofod (Contributor Author) commented Feb 13, 2018

@stevengj approved these changes

Thanks for the great comments, @stevengj! The tests are so much cleaner now.

@StefanKarpinski merged commit 76e5232 into JuliaLang:master on Feb 14, 2018
@pkofod deleted the hyperbolic branch on February 14, 2018 20:16
@KristofferC removed the "potential benchmark (Could make a good benchmark in BaseBenchmarks)" label on Oct 8, 2018
vchuravy pushed a commit to JuliaPackaging/LazyArtifacts.jl that referenced this pull request Oct 2, 2023