Add left division operator #82
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master      #82      +/-   ##
==========================================
+ Coverage   95.65%   95.85%   +0.20%
==========================================
  Files           5        5
  Lines         322      314       -8
==========================================
- Hits          308      301       -7
+ Misses         14       13       -1
```

Continue to review full report at Codecov.
@dkarrasch in #39 (comment), what did you mean by checking that the blocks are diagonal?
Co-authored-by: Miha Zgubic <mzgubic@users.noreply.github.com>
Maybe let's just add a test for nonsquare matrices and we are good to go :)
Oh, and CI and version bump!
Co-authored-by: Miha Zgubic <mzgubic@users.noreply.github.com>
Hmm, what's up with CI?

The new test I added has issues, visibly 😅

Oh, that's because I am using
Thanks! One main question about the error case here
```julia
result = similar(vm)
for block in blocks(B)
    nrow = size(block, 1)
    result[row_i:(row_i + nrow - 1), :] = block \ vm[row_i:(row_i + nrow - 1), :]
```
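For context, the loop above can be sketched as a self-contained function. This is only an illustration, not the PR's actual code: the name `blockwise_ldiv` and the plain vector-of-blocks input are stand-ins for the package's `BlockDiagonal` type and its `blocks` accessor.

```julia
using LinearAlgebra

# Hypothetical standalone sketch: solve A \ vm for a block-diagonal A by
# solving each diagonal block against its own band of rows of vm.
function blockwise_ldiv(blks::Vector{<:AbstractMatrix}, vm::AbstractMatrix)
    result = similar(vm, float(promote_type(eltype(first(blks)), eltype(vm))))
    row_i = 1
    for block in blks
        rows = row_i:(row_i + size(block, 1) - 1)
        # Each block touches only its own rows, so the solves are independent.
        result[rows, :] = block \ vm[rows, :]
        row_i = last(rows) + 1
    end
    return result
end
```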
i've not thought too much, but perhaps you know: is there any rewriting of this that would allow us to access `vm` col-wise rather than row-wise?

also, does using views of `vm` help performance here at all?
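A view-based variant could look like the following. This is a sketch only, with the same caveats as above: `blockwise_ldiv_views` and the vector-of-blocks input are illustrative, not the PR's code. The point is that `view` avoids copying each row slice of `vm` before the solve:

```julia
using LinearAlgebra

function blockwise_ldiv_views(blks::Vector{<:AbstractMatrix}, vm::AbstractMatrix)
    result = similar(vm)
    row_i = 1
    for block in blks
        rows = row_i:(row_i + size(block, 1) - 1)
        # `view` returns a lazy SubArray, so vm[rows, :] is not copied;
        # the factorization inside `\` still allocates its own workspace.
        result[rows, :] = block \ view(vm, rows, :)
        row_i = last(rows) + 1
    end
    return result
end
```

The copy saved per block is `nrow × size(vm, 2)` elements, which is why the allocation counts differ in the benchmarks below while the times barely move.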
I don't think accessing `vm` col-wise is possible for this problem. `views` could help, I suppose. I am going to run some benchmarks and see.
Views do not seem to help that much in this simple example:

```julia
julia> using BenchmarkTools

julia> N = 400

julia> A = BlockDiagonal([rand(N, N) for _ in 1:5])

julia> x = rand(5N, 100)

julia> A \ x

julia> @btime $A \ $x  # No views
  7.085 ms (37 allocations: 10.70 MiB)

julia> @btime $A \ $x  # With views
  7.403 ms (27 allocations: 9.17 MiB)
```
i think the reduced allocations suggest we should use views.

e.g. on a matrix of the same size, but more blocks:

```julia
julia> M = 40

julia> B = BlockDiagonal([rand(M, M) for _ in 1:50])

julia> x = rand(5N, 100)

julia> @btime $B \ $x  # No views
  2.155 ms (302 allocations: 5.22 MiB)

julia> @btime $B \ $x  # With views
  2.264 ms (202 allocations: 3.69 MiB)
```

(i know the times are ~equal, but i usually trust allocations as at least as good a guide in practice)
thanks again!
This follows from the discussion in #39; it is the same implementation presented in the issue.
It is currently done with in-place allocation, but I could also change it if you think another approach would be better.