Deprecate scale #15258
Conversation
```diff
@@ -1,2 +1,2 @@
-OPENBLAS_BRANCH=v0.2.15
-OPENBLAS_SHA1=53e849f4fcae4363a64576de00e982722c7304f9
+OPENBLAS_BRANCH=develop
```
don't hide this in an unrelated pr
Oops. Thanks.
Sneaky sneaky.
If ``A`` is a matrix and ``b`` is a vector, then ``scale(A,b)`` scales each column ``i`` of ``A`` by ``b[i]`` (similar to ``A*diagm(b)``\ ), while ``scale(b,A)`` scales each row ``i`` of ``A`` by ``b[i]`` (similar to ``diagm(b)*A``\ ), returning a new array.

Note: for large ``A``\ , ``scale`` can be much faster than ``A .* b`` or ``b .* A``\ , due to the use of BLAS.
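For reference, a minimal sketch of what the deprecated calls map to (assuming current Julia, where `Diagonal` lives in the `LinearAlgebra` stdlib; at the time of this PR it was exported from Base, so no import was needed):

```julia
using LinearAlgebra  # provides Diagonal on Julia ≥ 0.7; in Base on older versions

A = [1.0 2.0; 3.0 4.0]
b = [10.0, 100.0]

# Column scaling, formerly scale(A, b): column i of A is multiplied by b[i].
A * Diagonal(b)   # [10.0 200.0; 30.0 400.0]
A .* b'           # broadcasting equivalent, with b used as a row vector

# Row scaling, formerly scale(b, A): row i of A is multiplied by b[i].
Diagonal(b) * A   # [10.0 20.0; 300.0 400.0]
b .* A            # broadcasting equivalent
```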
so is this not true any more?
The deprecation suggestion is to use `Diagonal(b)*A`, which calls BLAS. Actually, the difference is not really because of BLAS, but broadcasting overhead. With a nested loop, the code vectorizes and is as fast as BLAS on my machine.

We could move the "Note" to an entry for `*(Diagonal,Matrix)` or `broadcast`, but I'm not sure it will be easy to find in any of those places.
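To illustrate the "nested loop" being compared against here, a hypothetical hand-written column-scaling kernel might look like the sketch below (`scale_columns!` is an illustrative name, not code from this PR):

```julia
# Illustrative only: explicit loops involve no broadcasting machinery, and the
# inner loop over each column is a SIMD-friendly kernel the compiler can vectorize.
function scale_columns!(A::Matrix{Float64}, b::Vector{Float64})
    size(A, 2) == length(b) || throw(DimensionMismatch("length(b) must equal size(A, 2)"))
    @inbounds for j in 1:size(A, 2)
        bj = b[j]
        @simd for i in 1:size(A, 1)
            A[i, j] *= bj
        end
    end
    return A
end

scale_columns!(rand(1000, 1000), rand(1000))  # same result as the old scale(A, b) for these arguments
```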
In favor of `Diagonal(x)*A` or `α*A`. Fixes JuliaLang/LinearAlgebra.jl#248.
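As a rough sketch of how such a deprecation is usually expressed (via Base's `@deprecate` macro; the signatures below are assumptions, not the literal diff of this PR):

```julia
# Hypothetical deprecation definitions; the actual methods in the PR may differ.
@deprecate scale(A::AbstractMatrix, b::AbstractVector) A * Diagonal(b)
@deprecate scale(b::AbstractVector, A::AbstractMatrix) Diagonal(b) * A
@deprecate scale(α::Number, A::AbstractArray)          α * A
@deprecate scale(A::AbstractArray, α::Number)          A * α
```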
Update: Note that the potential small slowdown of `Diagonal(x)*A` instead of `scale(x,A)` should be eliminated thanks to #15259.