Creating an MArray performs a dynamic allocation #340
Until this bug is fixed, is there an alternative way I can use to get an array of registers on the device?
There's no special support; just use regular Julia constructs. For example, an NTuple with some convenience functions for creating a new tuple with modified items (i.e., implementing mutability).
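A minimal sketch of that NTuple approach, assuming nothing beyond base Julia (the `setindex` helper below is a made-up name, not a CUDAnative API):

```
# Hypothetical helper: build a new NTuple with element i replaced by v,
# instead of mutating an MArray in place.
@inline setindex(t::NTuple{N,T}, v, i::Integer) where {N,T} =
    ntuple(j -> j == i ? convert(T, v) : t[j], Val(N))

function knl!()
    r_a = ntuple(_ -> 0.0f0, Val(4))  # fixed-size tuple, should stay in registers
    for k in 1:4
        r_a = setindex(r_a, 0, k)     # "mutation" by rebuilding the tuple
    end
    nothing
end
```

Launched with `@cuda`, this should avoid the heap allocation, since the tuple is an immutable value that the compiler can keep in registers.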
wrt. the illegal memory access:
And concerning the GC allocation: we used to run the GC lowering pass and fail if there were actual GC allocations remaining after that, whereas nowadays we lower eagerly. Not sure how to proceed here. Either we do some similar escape analysis during our GC lowering pass, or we start relying on the Julia GC lowering pass again, but then it needs to be adapted not to emit platform-specific IR as it does now (this is something @jonathanvdc is probably going to have a look at soon).
Creating an MArray results in a dynamic allocation instead of an alloca.
343: Check for OOM when doing malloc. r=maleadt a=maleadt

Addresses #340

Crashes on `-g2` with the following MWE though:

```
using CUDAnative, StaticArrays

function knl!()
    r_a = MArray{Tuple{4}, Float32}(undef)
    for k in 1:4
        @inbounds r_a[k] = 0
    end
    nothing
end

@cuda threads=(5, 5) blocks=4000 knl!()

using CUDAdrv
CUDAdrv.synchronize()
```

Co-authored-by: Tim Besard <tim.besard@gmail.com>
Thanks for the clarification and for looking into this! I am excited to get back to optimizing my kernels once this issue is resolved.
Except that it doesn't; that's of course the job of
Thanks for all your help on this issue so far. I just tried to see whether #349 with Julia 1.2.0-DEV.388 (from the nightly builds section of the Julia website) fixes this issue, and I now get a different error message. Is there something I am doing wrong? The MWE is in the following file:

```
using StaticArrays
using CUDAdrv
using CUDAnative

function knl!()
    r_a = MArray{Tuple{4}, Float32}(undef)
    for k in 1:4
        r_a[k] = 0
    end
    nothing
end

@cuda threads=(5, 5) blocks=4000 knl!()
```

When I run the MWE I see:
I'm seeing the same behavior, while
Not sure what's up; I'll try to have a look tomorrow.
Thanks!
Could you try again?
On the latest branch the error messages have gone away!
Fails sometimes with:

```
ERROR: LoadError: CUDA error: an illegal memory access was encountered (code #700, ERROR_ILLEGAL_ADDRESS)
```
There are two things going on here that I found odd/interesting. First of all, the write to an illegal address; but secondly: I could have sworn that we used to be able to turn this into an alloca. On CUDAnative v0.10.1 we got rid of this entire allocation.

cc: @lcw who encountered this
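A hedged sketch of how one could check what the MArray lowers to, assuming CUDAnative's `@device_code_llvm` reflection macro (not something posted in the thread):

```
using CUDAnative, StaticArrays

function knl!()
    r_a = MArray{Tuple{4}, Float32}(undef)
    for k in 1:4
        @inbounds r_a[k] = 0
    end
    nothing
end

# Dump the device-side LLVM IR for the kernel launch. If the MArray is
# optimized away, the IR should contain an alloca (or nothing at all);
# the behavior reported here shows up as a call into the GPU
# malloc/GC-allocation runtime instead.
@device_code_llvm @cuda threads=(5, 5) blocks=4000 knl!()
```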