fix corner cases of binary arithmetic ops between sparse vectors and scalars (#21515) #22715
Conversation
@nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
Hm, I thought these specializations were put in for performance reasons, but they do get the corner cases wrong. Not covered by benchmarks, or is generic broadcast just as good now?
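For context, a minimal illustration of the kind of corner case at stake (this example is mine, not from the PR): a specialization that only scales the stored entries implicitly assumes `0 * a == 0`, which fails for non-finite scalars.

```julia
using SparseArrays  # part of Base in 0.6; a stdlib in later Julia

x = sparsevec([1], [1.0], 3)  # one stored entry; indices 2 and 3 are structural zeros

Vector(x) * Inf  # dense semantics: [Inf, NaN, NaN], since 0 * Inf == NaN
x * Inf          # a stored-values-only specialization would instead return a sparse
                 # vector with Inf at index 1 and (incorrect) zeros elsewhere
```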
It probably isn't covered. I see a performance hit:

```julia
julia> using BenchmarkTools

julia> mulsp(x, a) = Base.Broadcast.broadcast_c(*, AbstractSparseArray, x, a)
mulsp (generic function with 1 method)

julia> x = sprand(10^6, 0.1);

julia> a = 2.3;

julia> @btime $x*$a;
  111.876 μs (5 allocations: 1.53 MiB)

julia> @btime mulsp($x,$a);
  204.978 μs (6 allocations: 1.53 MiB)
```

This is probably also within the scope of https://github.com/JuliaLang/julia/issues/22733. I guess the specialized […]
IIRC, you remember correctly: these specializations (and perhaps a few others like them?) were retained for performance.
Echoing […]
👍 Hopefully we can continue to whittle away the remaining performance gap! :)
Other comments? Or in shape to merge? :)
Bump – no comments, so I assume this is good to go as soon as CI approves.
Only half of this (removing the broadcast methods) is uncontroversial, as mentioned in #22715 (comment). The other part depends on #22733.
Having the result of […]
The reason is that […]
Cheers, I will strip the changes to the non-broadcast arithmetic specializations from this PR.
I think we'll have to keep the current behavior for […]
What symbolic computation? That's not how our APIs work, with a handful of exceptions (not related to these specific functions) in internals that call CHOLMOD or UMFPACK but aren't public. If we had split symbolic-numeric APIs in more places, then the numeric step would be responsible for verifying that the input data satisfies the assumptions the symbolic step used. But that's not how these work.
The ability to perform structural/symbolic computations (here, for example, pattern intersection or union calculation) certainly has value and merits consideration. Such functionality presently exists implicitly in some operations, if somewhat ad hoc and inconsistently. Better (and explicit) support for such functionality would be lovely!

On the one hand, improving/extending the existing implicit support would constrain/compromise the semantics, implementation, and performance both of a variety of operations over sparse/structured objects (for example [most?] arithmetic operations, and associated higher-order functions) and of the embedded, implicit structural/symbolic functionality. This thread is a case in point.

On the other hand, providing separate, dedicated functionality for structural/symbolic computations would obviate the above constraints/compromises. Additionally, such separation should lead to greater clarity and consistency in semantics, documentation, and code, and potentially better reusability and composability. Hence in part the usual separation of such functionality, I wager.

As such, providing separate, dedicated functionality for structural/symbolic computations strikes me as a better approach than continuing to yoke other operations with that responsibility. Thoughts? Best! :)
We can use a symbolic/numerical decoupling in implementations without exposing a symbolic API. So, roughly speaking, there are three solutions for each function: no decoupling (which is how most of the sparse broadcasting works), implicit decoupling (which is how most of the sparse linear algebra works), or explicit decoupling, where we expose a symbolic API. #22733 is the place for discussing pros and cons. For now, the status is that broadcasting and linear algebra have different behavior. Hence, […]
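To make the options above concrete, here is a minimal hypothetical sketch of the "explicit decoupling" variant for sparse vector addition; the names `union_pattern` and `add_on_pattern` are invented for illustration, not an existing or proposed API.

```julia
using SparseArrays

# Symbolic stage: compute only the result pattern of x .+ y (a pattern-union calculation).
function union_pattern(x::SparseVector, y::SparseVector)
    length(x) == length(y) || throw(DimensionMismatch())
    sort!(union(x.nzind, y.nzind))
end

# Numeric stage: fill in values over a precomputed pattern. The caller is responsible
# for ensuring the pattern covers all nonzeros of the result.
function add_on_pattern(x::SparseVector, y::SparseVector, pattern::Vector{Int})
    sparsevec(pattern, [x[i] + y[i] for i in pattern], length(x))
end

x = sparsevec([1, 3], [1.0, 2.0], 4)
y = sparsevec([2, 3], [5.0, 7.0], 4)
add_on_pattern(x, y, union_pattern(x, y))  # same values as Vector(x) + Vector(y)
```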
Doesn't correctness matter more than imposing an artificial discrepancy between linear algebra functions and broadcasting functions when the dimensions are such that they should behave identically? Even if there were implicit numeric/symbolic calculation stages in the internal implementations of sparse linear algebra functions, it would be fairly straightforward to branch early and call the numeric-symbolic implementation when its assumptions are satisfied by the input data, and call more general consistent-with-dense fallbacks when the assumptions aren't satisfied.
And part of the difference is a correctness bug that's being fixed here.
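A minimal sketch of the early-branch idea from the comment above, using scalar multiplication as the example; `scale_sparse` is a hypothetical name, not an actual method in Base.

```julia
using SparseArrays

# Hypothetical early branch: take the fast stored-values-only path when its implicit
# assumption (0 * a == 0) holds, and fall back to dense-consistent broadcast otherwise.
function scale_sparse(x::SparseVector, a::Number)
    if isfinite(a)  # fast-path assumption holds (ignoring signed-zero subtleties)
        SparseVector(length(x), copy(x.nzind), x.nzval .* a)
    else            # a is Inf or NaN, so 0 * a != 0; broadcast stores every entry
        x .* a
    end
end

scale_sparse(sprand(10, 0.3), 2.3)   # fast path
scale_sparse(sprand(10, 0.3), Inf)   # dense-consistent fallback
```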
Sparse […]
Force-pushed fix corner cases of binary arithmetic ops between sparse vectors and scalars (#21515) from 038ee59 to 5891d4a.
I stripped the removal of the contentious non-broadcast arithmetic specializations. Thoughts? Thanks!
Thanks all! :)
This pull request fixes a few corner cases of binary arithmetic operations between sparse vectors and scalars. Ref. https://github.com/JuliaLang/julia/issues/21515#issuecomment-313868609 and downstream discussion. (Backport candidate? Tentatively added to the 0.6.x milestone.) Best!