
use in-place inv, since lufact already made a copy #22767

Merged · 1 commit merged into JuliaLang:master on Jul 20, 2017

Conversation

@fredrikekre (Member) commented Jul 11, 2017

Basically cuts allocated memory in half for generic inv calls.

| Matrix size | master | PR |
|---|---|---|
| 10 | 1.750 μs (9 allocations: 7.28 KiB) | 1.688 μs (7 allocations: 6.25 KiB) |
| 100 | 165.047 μs (11 allocations: 208.38 KiB) | 156.498 μs (8 allocations: 129.30 KiB) |
| 500 | 4.250 ms (11 allocations: 4.07 MiB) | 3.976 ms (8 allocations: 2.16 MiB) |
| 1000 | 27.447 ms (11 allocations: 15.76 MiB) | 26.299 ms (8 allocations: 8.13 MiB) |
| 2000 | 210.670 ms (11 allocations: 62.04 MiB) | 204.308 ms (8 allocations: 31.51 MiB) |
| 4000 | 1.539 s (13 allocations: 246.16 MiB) | 1.482 s (9 allocations: 124.05 MiB) |
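For context, the halving comes from where the copy happens: `lufact(A)` already copies `A` into the factorization, so the generic path can invert that buffer in place instead of allocating a second result matrix. A minimal before/after sketch, assuming Julia 0.6-era names (`lufact`, `inv!`); this is an illustration, not the literal PR diff:

```julia
# master: lufact copies A, and inv(F) then allocates a fresh result matrix
inv_master(A::StridedMatrix) = inv(lufact(A))

# this PR: lufact already made a copy, so invert that buffer in place
inv_pr(A::StridedMatrix) = inv!(lufact(A))
```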

@ararslan ararslan requested a review from andreasnoack July 12, 2017 00:03
@ararslan ararslan added linear algebra Linear algebra performance Must go faster labels Jul 12, 2017
@Sacha0 (Member) left a comment:

Nice catch! :)

@fredrikekre (Member, Author)

Failure was because LU{T} where T <: Union{BigFloat, Rational} does not support inv!, so I added a fallback for such cases.

@tkelman (Contributor) commented Jul 12, 2017

I don't think that fallback is right, since

```julia
inv!(F);
F
```

won't have the desired effect for some types.
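The objection rests on the bang convention: `inv!` is expected to mutate its argument, so callers may rely on `F` itself holding the inverse afterwards; a fallback that only returns a fresh inverse silently breaks that. A hedged illustration of the expected usage (hypothetical snippet, 0.6-era names):

```julia
F = lufact(A)
Ainv = inv!(F)  # the return value is the inverse either way...
F               # ...but only a truly in-place method leaves F updated;
                # a non-mutating fallback leaves the original factors here
```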

@fredrikekre
Copy link
Member Author

You mean because you expected the matrix wrapped in the factorization to be the inverse? That's true...
For this PR it is only needed to have an in-place for LU, so I can add that instead? One that updates the correct matrix.

@fredrikekre (Member, Author)

Pushed an updated version.

@fredrikekre (Member, Author)

Timeout on Travis i686.

Review thread on these lines of the diff:

```julia
inv!(LU(copy(A.factors), copy(A.ipiv), copy(A.info)))
inv!(A::LU{T,<:StridedMatrix}) where {T} =
    @assertnonsingular A_ldiv_B!(A.factors, copy(A), eye(T, size(A,1))) A.info
inv(A::LU{<:Any,<:StridedMatrix}) = inv!(copy(A))
```
(Contributor)

so does this copy twice?

(Member, Author)

Yeah, for the non-BlasFloat case.

(Contributor)

there's a copy(A) on this line, then another copy(A) in the inv!(A::LU{T,<:StridedMatrix}) that it calls, right?

(Member, Author)

I guess we cannot use the inv(A) = inv!(copy(A)) pattern for the non-BlasFloat case then. Instead:

```julia
inv(A::LU) = A_ldiv_B!(A, eye(eltype(A), size(A, 1)))
```

But we do still need to make the copy in the fallback inv!, so that we can overwrite the stored matrix and use it as the left-hand side in the solve at the same time.
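Putting the resolution together: the fallback inv! must copy because A.factors is both the system being solved with and the destination being overwritten, while the generic inv solves against an identity directly and so avoids the second copy of the inv!(copy(A)) pattern. A sketch under those assumptions (0.6-era API, simplified from the quoted diff):

```julia
# fallback inv!: copy(A) keeps an intact factorization to solve with,
# while A.factors is overwritten with the inverse
inv!(A::LU{T,<:StridedMatrix}) where {T} =
    A_ldiv_B!(A.factors, copy(A), eye(T, size(A, 1)))

# generic inv: solve against the identity directly; only the identity
# matrix is overwritten, and A itself is left untouched
inv(A::LU) = A_ldiv_B!(A, eye(eltype(A), size(A, 1)))
```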

@fredrikekre (Member, Author)

Rebased, good to go?

@ararslan (Member)

Yes indeed. I'll let you do the honors. 😄

@fredrikekre fredrikekre merged commit 8cd3c3f into JuliaLang:master Jul 20, 2017
@fredrikekre fredrikekre deleted the fe/inplace-inv branch July 20, 2017 08:48
@fredrikekre (Member, Author)

Thanks! It was scary, but I managed :)
