Allow `@simd` to use vectorised math operations when available #15265
Comments
No, what would really be cool is getting the same feature with this :-) `foo(X) = @simd sum(log(x) for x in X)` (Anyway, I guess having the longer form work is a step towards this.)
I have a C++ library, Vecmathlib (https://bitbucket.org/eschnett/vecmathlib/wiki/Home), that goes very much in this direction. It is implemented in C++ using various vector intrinsics; a long-standing plan of mine is to rewrite it in Julia, targeting LLVM directly. This would essentially provide a new implementation of these functions.
I'm going to close this as a non-actionable feature wish. With #15244, I think the content here should be implementable in a package.
I think #15244 is unrelated, since it addresses tuple vectorization. What we want for this request is for Julia's transcendentals to be lowered to LLVM intrinsics, with vectorization of those targeting whatever vector library is available. That seems like pure LLVM work. The Apple Accelerate patch seems like a good reference.
This feature request is actionable. The transformations required for this are equivalent to those performed by …
It must be springtime – the weather is nice and Jameson is on his annual issue-closing spree!
The closing worked to make me pay attention :-) |
I was imagining a …
I'm worried about code bloat (and compilation time) from inlining stuff as complicated as …
As a rough idea, I was thinking that when the vectorizer sees a call to …
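To make that rough idea concrete, here is a hand-written sketch of the transformation such a vectorizer pass would perform. `vlog4` is a hypothetical stand-in (not a real library entry point) for a vector-libm routine such as one from VML or Accelerate:

```julia
# Hypothetical stand-in for a vector-libm routine computing log on 4 lanes.
# A real pass would emit a call to an actual SIMD library function here.
vlog4(x::NTuple{4,Float64}) = map(log, x)

function batched_log!(X::Vector{Float64})
    n = length(X)
    i = 1
    # Main loop: replace 4 scalar log calls with one "vector" call.
    while i + 3 <= n
        X[i], X[i+1], X[i+2], X[i+3] = vlog4((X[i], X[i+1], X[i+2], X[i+3]))
        i += 4
    end
    # Scalar remainder loop for the leftover elements.
    while i <= n
        X[i] = log(X[i])
        i += 1
    end
    return X
end
```

The point of the sketch is only the call-site rewrite: the compiler would do this automatically, choosing the lane width and library routine from the target.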
That's what the table … I think a similar discussion came up years ago, and there was some stumbling block (on Windows) where, if we mapped transcendentals to LLVM intrinsics, we were stuck with them being mapped to the system libm in the backend, but I don't remember the details.
That would be awesome, though I don't know how easy that would be.
I think that this is still a problem.
Is it reasonable to close this issue given all the progress in the SIMD ecosystem? cc @chriselrod
IIUC this is still an open issue that is only solved in special cases.
I don't think it's solved.
I think applying the vector function ABI to call sites when using …
It would be nice if we could write something like
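(The code snippet here was lost in extraction; a plausible sketch of the kind of loop meant, assuming an elementwise `log` under `@simd`:)

```julia
# Elementwise log over an array, written as a @simd loop in the hope
# that the compiler vectorizes the log calls as well as the loop.
function foo!(X)
    @simd for i in eachindex(X)
        @inbounds X[i] = log(X[i])
    end
    return X
end
```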
and have it automatically use a vector math function such as `_mm_log_ps` from MKL VML, or its Accelerate or Yeppp equivalents (we do have packages for those libraries, but they only expose the direct array operations, which require memory allocation). It seems that LLVM has some optimisations for Accelerate, such that the following works on master on a Mac:
but ideally this shouldn't require custom LLVM, and should work across multiple libraries. Perhaps we can take advantage of developments in #15244 to get this to work more generally.
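A note on the allocation point above: whole-array operations return a freshly allocated output on every call, whereas a loop or fused in-place broadcast reuses an existing buffer (a plain-Julia sketch, not using any of the wrapper packages):

```julia
X = [1.0, exp(1.0), exp(2.0)]

# Allocating form: creates a new output array on every call.
Y = log.(X)

# Non-allocating form: fused broadcast writes into an existing buffer.
Z = similar(X)
Z .= log.(X)
```

The wrapper packages for VML/Accelerate/Yeppp expose only the first style, which is why a vectorised scalar `log` inside a loop would be preferable.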
cc: @SimonDanisch, @eschnett, @ArchRobison