Support for subarrays in linalg #589
Comments
As you've noticed before, we're really hesitant to extend everything to AnyXArray. In the meantime, a more conservative union that doesn't blow up method complexity (driving up the time to insert methods into the method table), increase the risk of ambiguities, or include the borderline undefined behavior that Adapt.jl's WrappedArray does, would probably be acceptable too. I guess something sitting in between DenseXArray/StridedXArray and AnyXArray. Is there any precedent for this in the Julia array ecosystem (I haven't encountered the IndexXArray you're suggesting)? My only reservation is that the conservativeness of such a union is completely arbitrary; we can index every type of array in a kernel, so the only reason not to have it include every array wrapper is to avoid running into the above issues. So where do you draw the line?
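For concreteness, a hedged sketch of what such a middle-ground union could look like; the name `ViewOrTransposeGPUArray` and the exact member list are purely illustrative, not an existing GPUArrays.jl definition:

```julia
using LinearAlgebra
using GPUArrays: AbstractGPUArray

# Illustrative only: a union sitting between the dense/strided unions
# and AnyGPUArray, covering plain GPU arrays, their adjoints/transposes,
# and (possibly non-contiguous) views of either. Name and members are
# assumptions for the sake of discussion.
const ViewOrTransposeGPUArray{T} = Union{
    AbstractGPUArray{T},
    Adjoint{T, <:AbstractGPUArray},
    Transpose{T, <:AbstractGPUArray},
    SubArray{T, <:Any, <:AbstractGPUArray},
    SubArray{T, <:Any, <:Adjoint{T, <:AbstractGPUArray}},
    SubArray{T, <:Any, <:Transpose{T, <:AbstractGPUArray}},
}

# Methods in src/host/linalg.jl could then dispatch on this union
# instead of AbstractGPUArray, e.g.:
#   LinearAlgebra.triu!(A::ViewOrTransposeGPUArray) = ...
```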
Thanks for the link to AbstractWrappedArray, and I appreciate the complexity of that! I found IndexGPUArray here: GPUArrays.jl/src/host/indexing.jl, line 140 at commit e8e9b03.
Yeah, that's to indicate which arrays can be used as indices, which is unrelated to arrays being indexed within kernels (something that's generally possible with any array). I wonder if at this point we shouldn't consider getting rid of Adapt.jl's horrible WrappedArray union.
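To illustrate the distinction, a minimal sketch (assuming CUDA.jl for concreteness; the kernel is illustrative only):

```julia
using CUDA

A = CUDA.rand(Float32, 10)
inds = CuArray([1, 3, 5])

# Host-side gather: `inds` is an array *used as an index*, which is
# the case the IndexGPUArray union governs.
B = A[inds]

# Indexing an array *within* a kernel, by contrast, works for
# basically any device-converted array:
function copy_kernel!(dst, src)
    i = threadIdx().x
    @inbounds dst[i] = src[i]
    return
end
dst = similar(B)
@cuda threads=length(B) copy_kernel!(dst, B)
```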
Hello,
I would like to use the triu! and transpose! functions on a non-contiguous view (e.g. view(a', 1:2:6, 4:2:8)). Is there a way to make this possible (ideally for all functions in src/host/linalg.jl, and for copyto! in src/host/abstractarray.jl) without severely increasing runtimes/compile times due to multiple-dispatch overhead?
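For reference, a minimal reproducer (assuming CUDA.jl as the backend, and the behavior at the time of this issue):

```julia
using CUDA, LinearAlgebra

a = CUDA.rand(Float32, 8, 6)
v = view(a', 1:2:6, 4:2:8)   # 3×3 non-contiguous SubArray of an Adjoint

# `v` matches neither AbstractGPUArray nor the strided unions that
# src/host/linalg.jl dispatches on, so this hits the generic
# LinearAlgebra fallback and fails once scalar indexing is disallowed:
triu!(v)
```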
Earlier discussions on this topic:
#452
#458
JuliaGPU/CUDA.jl#1778
JuliaGPU/CUDA.jl#2078
Perhaps some type of union of subarrays, transposes, and abstract arrays (to avoid switching to AnyGPUArrays; also, AnyGPUArrays does not include transposes)?
Edit: I just saw IndexGPUArray might be an option, if it were expanded with
SubArray{T, <:Any, <:LinearAlgebra.Adjoint{T, <:AbstractGPUArray}}
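For illustration, the expanded union might look roughly like the following; the existing members are paraphrased from src/host/indexing.jl at the linked commit, so check that file for the actual definition:

```julia
using LinearAlgebra
using GPUArrays: AbstractGPUArray

const IndexGPUArray{T} = Union{
    AbstractGPUArray{T},
    SubArray{T, <:Any, <:AbstractGPUArray},
    LinearAlgebra.Adjoint{T, <:AbstractGPUArray},
    # proposed addition from this issue:
    SubArray{T, <:Any, <:LinearAlgebra.Adjoint{T, <:AbstractGPUArray}},
}
```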
Let me know your thoughts and happy to draft a PR
@maleadt @vchuravy