
Faster Rational-like type #11522

Open
timholy opened this issue Jun 1, 2015 · 47 comments
Labels
help wanted (Indicates that a maintainer wants help on an issue or pull request) · maths (Mathematical functions) · performance (Must go faster) · rationals (The Rational type and values thereof)

Comments

@timholy
Member

timholy commented Jun 1, 2015

Over in JuliaMath/Interpolations.jl#36 (comment) and JuliaMath/Interpolations.jl#37 it was discovered that doing computations with Rational is slow, because essentially every operation calls gcd and div. The advantage of calling gcd and div is that it makes the type much less vulnerable to overflow, and that is a Good Thing. But as we discovered, certain computations may not need that kind of care, so there may be room for a faster variant. Switching to a stripped-down variant provided an approximately 50-fold speed boost.

I suspect certain computations may demand an implementation that is as minimalistic as that Ratio type. There may also be an area of intermediate interest, where a Rational-like object is represented in terms of pre-factorized numbers, perhaps with numerator and denominator each being a Dict{Int,Int} mapping prime factors to their powers.
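For concreteness, here is a minimal sketch of such a stripped-down type (in current syntax; the name FastRatio and the exact method set are illustrative, in the spirit of Ratios.jl's SimpleRatio rather than its actual definition). There is no gcd anywhere, so each operation is a handful of integer multiplies and adds, at the price of overflowing far sooner than Rational:

struct FastRatio{T<:Integer} <: Real   # illustrative name
    num::T
    den::T
end

# No reduction on construction or in arithmetic:
Base.:+(x::FastRatio, y::FastRatio) = FastRatio(x.num*y.den + y.num*x.den, x.den*y.den)
Base.:-(x::FastRatio, y::FastRatio) = FastRatio(x.num*y.den - y.num*x.den, x.den*y.den)
Base.:*(x::FastRatio, y::FastRatio) = FastRatio(x.num*y.num, x.den*y.den)
Base.:*(x::FastRatio, y::Integer)   = FastRatio(x.num*y, x.den)

Base.float(x::FastRatio) = x.num / x.den   # the only place any division happens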

@timholy added the help wanted label Jun 1, 2015
@simonbyrne
Contributor

One option, suggested by @StefanKarpinski in #8672, is to get rid of the gcd call in the Rational constructor, just keeping it in //. This would allow elimination of at least one call to gcd in * and /.

We could take this even further and get rid of the coprime requirement altogether. We could keep the current operations (*, +, etc.) as they are, and define different symbols (e.g. \boxplus, etc.) as fast, non-cancelling operations.

@StefanKarpinski
Member

We could have a function (reduce is already taken, maybe coprime) that reduces a Rational to lowest terms. I think that for presentation doing the reduction still makes sense – people want to see their rational numbers in canonical form. And of course, when one asks for the numerator and denominator explicitly, one presumably wants it in lowest terms – otherwise the answer is ill defined.
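Something along these lines, for instance (coprime is Stefan's suggested name above, not an existing function; the sign convention matches Base's, keeping the denominator non-negative):

function coprime(num::T, den::T) where {T<:Integer}
    iszero(den) && return num, den          # leave ±n//0 alone
    g = gcd(num, den)                       # gcd is non-negative
    n, d = div(num, g), div(den, g)
    return flipsign(n, d), flipsign(d, d)   # move any sign onto the numerator
end

A Rational-returning version would simply rewrap the two fields, and show, numerator, and denominator could call it before presenting anything.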

@simonbyrne
Contributor

Another option (perhaps encompassing Stefan's proposal) would be to do fast, non-cancelling operations by default, and only fall back on cancelling operations when overflow is detected?

@simonbyrne
Contributor

@timholy Do you have some suggestions for useful benchmarks?

@timholy
Member Author

timholy commented Jun 1, 2015

I've posted https://github.com/timholy/Ratios.jl as a playground (and because I need this for Interpolations, and as @tlycken pointed out it's better not to bury it inside Interpolations). Feel free to play here or elsewhere.

@timholy
Member Author

timholy commented Jun 1, 2015

Operationally, I'd say anything fast enough so that it's not a bottleneck for Interpolations is currently the benchmark I care about 😄.

@IainNZ
Member

IainNZ commented Jun 1, 2015

> only fall back on cancelling operations when overflow is detected

This was the first thing that sprang to mind too. When coupled with simplification only when absolutely needed (e.g. display, or querying the numerator/denominator), it could be quite nice. I assume (LOL) that it'd be closer to Ratios.jl performance than the current Rationals, but... not sure. The Ratios code is so simple that it should be blisteringly fast (SIMD-able, even).

@JeffBezanson added the performance, rationals, and maths labels Jun 1, 2015
@simonbyrne
Contributor

I tested out the idea on the checked branch of Ratios.jl.

Unfortunately, the resulting performance is somewhat disappointing: using @timholy's test from JuliaMath/Interpolations.jl#37, it makes Interpolations slightly slower than Grid, though nowhere near as slow as using the Rational type (see here for the test script).

@simonbyrne
Contributor

I've also added a rational branch which simply aliases SimpleRatio to Rational; the rough timings:

  • master (unchecked): 47 ms
  • checked: 980 ms (~20x slower)
  • rational: 15 s (~320x slower)
  • Grid.jl (for reference): 660 ms (~14x slower)

Given that we still get an order of magnitude speedup, I think this is worth pursuing. We could also then add (possibly unexported) unchecked_... operations for examples such as this where the bounds are known to be safe.

@IainNZ
Member

IainNZ commented Jun 3, 2015

Pretty compelling to me. It looks like you are using exceptions, so is it plausible that an Int-specific version that checks for overflow without exceptions could get to something like 10x slower?

@timholy
Member Author

timholy commented Jun 3, 2015

+1 for the experiment, and the 10x speedup. Interpolations will still use the blisteringly-fast unchecked variants, but I agree this is quite promising.

@StefanKarpinski
Member

The main issue with the checked stuff is that we only expose it via exceptions, which are a total performance trap. We need to expose some way of doing operations and then checking the overflow bit.

@garrison
Member

garrison commented Jun 3, 2015

> And of course, when one asks for the numerator and denominator explicitly, one presumably wants it in lowest terms – otherwise the answer is ill defined.

There are currently packages where rat.num and rat.den are accessed directly (ContinuedFractions.jl, ValidatedNumerics.jl, perhaps others) instead of through the num and den functions. If this were to replace Rational, I would prefer to see these fields renamed so that 1. people are less likely to use them directly; and 2. existing code that uses them will break, giving the users a chance to make an explicit decision about whether they want the reduced numerator or whether the unreduced numerator will suffice. (Perhaps unreduced_num and unreduced_den would make better field names.)

EDIT: As an explicit example of potential subtle breakage, ValidatedNumerics takes a different code path based on whether iseven(r.den) is true. A different path could be taken here depending on whether the fraction is in reduced form or not.

@simonbyrne
Contributor

I don't really have time to play around with this much more at the moment, but I did arrive at this:

# Returns (overflowed::Bool, result::Int64) rather than throwing on overflow.
function null_checked_add(x::Int, y::Int)
    Base.llvmcall("""
    %3 = call { i64, i1 } @llvm.sadd.with.overflow.i64(i64 %0, i64 %1)
    %4 = extractvalue { i64, i1 } %3, 1
    %5 = zext i1 %4 to i8
    %6 = extractvalue { i64, i1 } %3, 0
    %7 = insertvalue { i8, i64 } undef, i8 %5, 0
    %8 = insertvalue { i8, i64 } %7, i64 %6, 1
    ret { i8, i64 } %8""",
    Tuple{Bool,Int64}, Tuple{Int64,Int64}, x, y)
end

This returns a (Bool, Int64) tuple, with the Bool indicating whether or not overflow occurred. It should be possible to wrap this in a Nullable, except for the fact that there is no Nullable constructor which takes 2 arguments.
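Assuming the definition above compiles, usage looks like this (the flag comes first, matching the { i8, i64 } layout):

overflowed, s = null_checked_add(typemax(Int), 1)
# overflowed == true: the exact sum does not fit in Int64
# s is the wrapped-around result, typemin(Int)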

@timholy
Member Author

timholy commented Jun 4, 2015

I was thinking along the same directions, minus all the llvmcall magic :-). Can you write that in Julia, or is that not possible?

@simonbyrne
Contributor

Not that I know of: the current checked_add instructions are defined in src/intrinsics.cpp, which includes the exception machinery.

@JeffBezanson
Member

This is a great use of llvmcall. We could adjust the intrinsics to return the overflow bit, and then throw the exception in a julia-level definition, but we don't want to add many more intrinsics.
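In other words, something like this sketch: one overflow-reporting primitive, with the throwing wrapper written at the Julia level (the names here are placeholders reusing the llvmcall helper above, not the actual intrinsic):

function my_checked_add(x::Int, y::Int)
    overflowed, s = null_checked_add(x, y)   # non-throwing primitive from above
    overflowed && throw(OverflowError("$x + $y overflows Int64"))
    return s
end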

@StefanKarpinski
Member

> We could adjust the intrinsics to return the overflow bit, and then throw the exception in a julia-level definition, but we don't want to add many more intrinsics.

+1 to this – I was thinking that as well.

@hayd
Member

hayd commented Jun 7, 2015

Is #11604 needed to use @llvm.smul.with.overflow.i64 (and friends)?

@simonbyrne
Contributor

> Is #11604 needed to use @llvm.smul.with.overflow.i64 (and friends)?

Apparently not: I assume because they've already been declared by intrinsics.cpp?

@hayd
Member

hayd commented Jun 7, 2015

sadd works. But when trying the above with smul:

julia> null_checked_mul(6, 7)
ERROR: error compiling null_checked_mul: Failed to parse LLVM Assembly:
julia: <string>:3:23: error: use of undefined value '@llvm.smul.with.overflow.i64'
%3 = call { i64, i1 } @llvm.smul.with.overflow.i64(i64 %0, i64 %1)

It may be that I'm doing something else wrong!

@simonbyrne
Contributor

Ah, sorry I missed that. Yes, you're right (though if you run it a second time it does work correctly).

@simonbyrne
Contributor

@JeffBezanson This is one thing I have often wondered: do we actually need most of the intrinsics? Would there be any disadvantage to using llvmcall for a lot of those (once #11604 is ironed out)?

@simonbyrne
Contributor

A rough plan for this issue:

@garrison
Member

@simonbyrne any thoughts on the issue I raised above? Renaming the num and den fields to something more obscure would at least explicitly (rather than subtly) break code relying on the current behavior.

Also, I assume calling num(frac) would reduce a fraction before returning the numerator. But then calling num(frac) repeatedly would be slower than it currently is, as it must check each time that the fraction is in reduced form.

@StefanKarpinski
Member

Could keep a flag of whether it's been reduced or not and have a reduce function that returns the same value in reduced form.
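A hypothetical sketch of that flag (all names made up; mutable so the reduction can be cached in place the first time it is requested):

mutable struct FlaggedRational{T<:Integer} <: Real
    num::T
    den::T
    reduced::Bool                  # true once num and den are known to be coprime
end

function lowterms!(x::FlaggedRational)
    if !x.reduced
        g = gcd(x.num, x.den)
        if !iszero(g)
            x.num = div(x.num, g)
            x.den = div(x.den, g)
        end
        x.reduced = true
    end
    return x
end

Base.numerator(x::FlaggedRational)   = lowterms!(x).num   # always reported in lowest terms
Base.denominator(x::FlaggedRational) = lowterms!(x).den

(A mutable struct gives up the isbits layout that makes the unchecked Ratio so fast, so this is only one possible way to realize the idea.)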

@simonbyrne
Contributor

Renaming the fields seems reasonable. The idea of a flag seems reasonable, though perhaps worth having some examples of where this might be a problem.

@StefanKarpinski
Member

Could use the sign of the denominator or something like that. We've also talked about having a separate powers of two field, which would give bigger range and make it possible to represent all floating-point values, which would be pretty useful.
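For reference, the separate powers-of-two idea might look something like this (a purely hypothetical layout, where the value is (num // den) * 2^twopow, so every finite Float64 is exactly representable):

struct Pow2Rational{T<:Integer} <: Real   # hypothetical
    num::T
    den::T
    twopow::Int                           # value == (num // den) * 2^twopow
end

Base.float(x::Pow2Rational) = ldexp(x.num / x.den, x.twopow)

# Every finite Float64 is an integer significand times a power of two,
# so it converts exactly (T must hold 53-bit integers, e.g. Int64):
function Pow2Rational{T}(x::Float64) where {T<:Integer}
    fr, ex = frexp(x)                     # x == fr * 2^ex, 0.5 <= abs(fr) < 1
    sig = T(ldexp(fr, 53))                # integer significand
    Pow2Rational{T}(sig, one(T), ex - 53)
end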

@simonbyrne
Contributor

A quick update: I've managed to get the llvm-checked operations working (on the llvm-checked branch), and it's down to 6x slower than completely unchecked ops.

@timholy
Member Author

timholy commented Feb 12, 2016

Really nice! That's a heck of an improvement from 320x slower! Sounds like Base material to me (assuming we aren't planning on moving Rationals out of Base).

@oscardssmith
Member

What happened with this? A 20x performance increase would be a bad thing to lose to the sands of time.

@simonbyrne
Contributor

Note that add/sub/mul_with_overflow in Base.Checked should now make this easier to implement if someone is interested.
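For example, a non-throwing add with a once-only reduce-and-retry fallback might look like this (CheckedRatio is a placeholder type; add_with_overflow and mul_with_overflow are the real Base.Checked functions, each returning the wrapping result plus an overflow flag):

using Base.Checked: add_with_overflow, mul_with_overflow

struct CheckedRatio{T<:Integer} <: Real   # placeholder, not a Base type
    num::T
    den::T
end

function Base.:+(x::CheckedRatio{T}, y::CheckedRatio{T}) where {T<:Integer}
    a, f1 = mul_with_overflow(x.num, y.den)
    b, f2 = mul_with_overflow(y.num, x.den)
    n, f3 = add_with_overflow(a, b)
    d, f4 = mul_with_overflow(x.den, y.den)
    if f1 | f2 | f3 | f4
        # overflow: reduce via exact Rational math and retry once
        # (this can still throw if even the reduced result does not fit in T)
        r = x.num // x.den + y.num // y.den
        return CheckedRatio{T}(numerator(r), denominator(r))
    end
    return CheckedRatio{T}(n, d)
end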

@JeffreySarnoff
Contributor

I spent the afternoon playing with this.

It seems to me that one may carry an unreduced rational if it becomes reduced on these occasions:

  • after it is entered, read, parsed, externally retrieved
  • before it is printed, shown, displayed, written, externally stored
  • after an arithmetic operation overflows, before one attempt at recalculation

For all other calculations, using unreduced rationals would be OK.
Next, we see that Rational{Int32} is not the target for this strategy.

T        floor(sqrt(typemax(T)))      floor(cbrt(typemax(T)))
Int16    181                          31
Int32    46_340                       1_290
Int64    3_037_000_499                2_097_152
Int128   13_043_817_825_332_783_104   5_541_191_377_756

I found this to be marginally faster than the current version:

immutable Rational{T<:Integer} <: Real
    num::T
    den::T

    function Rational(num::Integer, den::Integer)
        !iszero(den) && return new(num, den)                         # no gcd reduction here
        !iszero(num) && return new(flipsign(one(T), num), zero(T))   # normalize n//0 to ±1//0
        throw(ArgumentError("invalid rational: zero($T)//zero($T)"))
    end
end

Is it acceptable to use two Val{} types as a second parameter, encoding IS_REDUCED or MAY_REDUCE?

That is a way to work without a state field and let calculations with unreduced rationals go on unless there is overflow. The only other way that respects the type's size, as I read above, appropriates the denominator's sign bit for use as a state bit ( signbit(den) ? IS_REDUCED : MAY_REDUCE ). To date, Julia Base has stayed away from reclaiming an internal bit of a built-in numeric type (I have).
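As a sketch of what that Val{} encoding could look like (all names made up; whether it actually helps is exactly the question, see the replies below):

struct VRational{T<:Integer, S} <: Real   # S is Val{:IS_REDUCED} or Val{:MAY_REDUCE}
    num::T
    den::T
end

canonical(x::VRational{T, Val{:IS_REDUCED}}) where {T} = x

function canonical(x::VRational{T, Val{:MAY_REDUCE}}) where {T}   # assumes den != 0
    g = gcd(x.num, x.den)
    VRational{T, Val{:IS_REDUCED}}(div(x.num, g), div(x.den, g))
end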

@StefanKarpinski
Member

Nice work, @JeffreySarnoff. It would be great to have a faster rational type based on this approach. I'm not enthused about the type parameter indicating reduction status, but maybe it would be ok? At that point, we could actually just have reduced and unreduced rational types. I.e. this:

abstract Rational{T<:Integer} <: Real
immutable ReducedRational{T<:Integer} <: Rational{T} ... end
immutable UnreducedRational{T<:Integer} <: Rational{T} ... end

Then some operations would produce reduced rationals, while others would produce unreduced ones. Of course, the trouble is that you can't always predict statically when you'll get which. Which is why I don't think it really helps. Instead, I think having some sort of reduced flag to avoid repeated reduction would be the way to go.

@timholy
Member Author

timholy commented Jan 27, 2017

👍 to run-time checking of the flag (I'd bet money that using the type system for this would make things worse).

@JeffreySarnoff
Contributor

This is a proof of concept.

To keep type constancy, there is no widening.
If a calculation overflows, any unreduced inputs are reduced and the calculation is reattempted once.

With element types of Int64 or Int32 the speedups are utilitarian.
A test script giving relative performance is in the readme.

@Peiffap
Contributor

Peiffap commented May 23, 2019

Any progress on this? As @oscardssmith said, "A 20x performance increase would be a bad thing to lose to the sands of time."

@JeffreySarnoff
Contributor

Which one of these approaches should be taken to handle overflow? (Rationals tend to grow their significant digits.)

(a)  throw an OverflowError

(b)  substitute a nearby rational of the same eltype  for values < typemax(Rational{T})
     saturate                                         for values > typemax(Rational{T})

(c)  substitute BigInt // BigInt

@newptcai

I was trying to compute the 1000th harmonic number exactly. I have to use BigInt in this case. It seems to be a bit slow.

@JeffreySarnoff
Contributor

JeffreySarnoff commented Apr 14, 2020

try this for calculating harmonic numbers

using FastRationals, BenchmarkTools   # @belapsed comes from BenchmarkTools
n = 10_000
qs = [Rational{BigInt}(1,i) for i=1:n];
fastqs = [FastQBig(1,i) for i=1:n];
qs_time = @belapsed sum($qs);
fastqs_time = @belapsed sum($fastqs);
round(qs_time / fastqs_time, digits=2)

I get 20x (and 10x for n = 1_000).

@newptcai

> try this for calculating harmonic numbers […]

Indeed much faster. See benchmark here. Thanks.

https://newptcai.github.io/harmonic-number-and-zeta-functions-explained-in-julia-part-1.html

@JeffreySarnoff
Contributor

Great! @StefanKarpinski

@lesobrod

Hi! I'm trying to use the FastRationals package for long computations involving harmonic numbers.
Here's a branch without FastRationals:
https://github.com/lesobrod/egyptian-fractions/tree/main
and with it:
https://github.com/lesobrod/egyptian-fractions/tree/fastRationals
The essence of the task is easy to understand from the README.
Unfortunately, calculations with the package are much slower than without it.
I'd appreciate it if someone could check whether I'm using FastRationals correctly.

@JeffreySarnoff
Contributor

Are you following the guidelines? Can you use Q64 while staying around the tabulated sweet spot? If so, that is worth doing.

What is the result when you run this:

using FastRationals, BenchmarkTools   # @belapsed comes from BenchmarkTools

n = 10_000
qs = [Rational{BigInt}(1,i) for i=1:n];
fastqs = [FastQBig(1,i) for i=1:n];
qs_time = @belapsed sum($qs);
fastqs_time = @belapsed sum($fastqs);
round(qs_time / fastqs_time, digits=2)

@lesobrod

I've found that results are very unstable (o_O)
[attached screenshot]

@JeffreySarnoff
Contributor

I have no idea why your results are unstable.

julia> n = 10_000
julia> qs = [Rational{BigInt}(1,i) for i=1:n]; fastqs = [FastQBig(1,i) for i=1:n];
julia> qs_time = @belapsed sum($qs);  fastqs_time = @belapsed sum($fastqs);
julia> round(qs_time / fastqs_time, digits=2)

repeating that five times, I see these results (showing a slowdown!)
(0.62, 0.62, 0.61, 0.62, 0.61)
-- clearly, something has changed in Julia that has altered processing of FastQBigs
(this package has not been edited materially since 2018; bugfix for exp in 2020)

using FastQ64 tells another story (n <= 46, as FastQ64 overflows this calc at n=47)

julia> n = 10
julia> qs = [Rational{BigInt}(1,i) for i=1:n]; fastqs = [FastQ64(1,i) for i=1:n];
julia> qs_time = @belapsed sum($qs);  fastqs_time = @belapsed sum($fastqs);
julia> round(qs_time / fastqs_time, digits=2)
141.84

julia> n = 30 ... round(qs_time / fastqs_time, digits=2)
30.45

julia> n = 40 ... round(qs_time / fastqs_time, digits=2)
19.18

julia> n = 46 ... round(qs_time / fastqs_time, digits=2)
8.45

@JeffreySarnoff
Contributor

I placed a large CAUTION about FastQBig at the top of the readme.
