
CuArray broadcast error #697

Closed · Moelf opened this issue Mar 25, 2019 · 5 comments
Moelf (Contributor) commented Mar 25, 2019

Package versions:

(v1.1) pkg> st
    Status `~/.julia/environments/v1.1/Project.toml`
  [fbb218c0] BSON v0.2.2
  [6e4b80f9] BenchmarkTools v0.4.2
  [336ed68f] CSV v0.4.3
  [3a865a2d] CuArrays v1.0.1
  [a93c6f00] DataFrames v0.17.1
  [31c24e10] Distributions v0.17.0
  [587475ba] Flux v0.8.1

minimal sample:

using Flux, Flux.Data.MNIST, Statistics
using Flux: onehotbatch, onecold, crossentropy, throttle
using Base.Iterators: repeated
using CuArrays

imgs = MNIST.images()
X = hcat(float.(reshape.(imgs, :))...) |> gpu

labels = MNIST.labels()
Y = onehotbatch(labels, 0:9) |> gpu
m = Chain(
  Dense(28^2, 32, relu),
  Dense(32, 10),
  softmax) |> gpu
accuracy(x, y) = mean(onecold(m(x)) .== onecold(y))

@show accuracy(X,Y)
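For reference, the failing broadcast can likely be reproduced without Flux at all. This is my own minimal sketch, not from the issue, assuming CuArrays v1.0.1 and a working CUDA setup; the array values are arbitrary:

```julia
using CuArrays

# A GPU array broadcast against a plain CPU Array: the CPU Array ends up
# inside the Broadcasted argument handed to the kernel, which is then not
# isbits, producing the same KernelError as below.
a = CuArray([1, 2, 3])  # device array
b = [1, 2, 4]           # host Array
a .== b                 # expected to fail with "passing and using non-bitstype argument"
```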

Error message:

ERROR: LoadError: GPU compilation of #23(CuArrays.CuKernelState, CUDAnative.CuDeviceArray{Bool,1,CUDAnative.AS.Global}, Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(==),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Int64,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{Array{Int64,1},Tuple{Bool},Tuple{Int64}}}}) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(==),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Int64,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{Array{Int64,1},Tuple{Bool},Tuple{Int64}}}}.
That type is not isbits, and such arguments are only allowed when they are unused by the kernel.

Stacktrace:
 [1] check_invocation(::CUDAnative.CompilerContext, ::LLVM.Function) at /home/akako/.julia/packages/CUDAnative/PFgO3/src/compiler/validation.jl:35
 [2] compile(::CUDAnative.CompilerContext) at /home/akako/.julia/packages/CUDAnative/PFgO3/src/compiler/driver.jl:94
 [3] #compile#109(::Bool, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::VersionNumber, ::Any, ::Any) at /home/akako/.julia/packages/CUDAnative/PFgO3/src/compiler/driver.jl:45
 [4] compile at /home/akako/.julia/packages/CUDAnative/PFgO3/src/compiler/driver.jl:43 [inlined]
 [5] #compile#108(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::CUDAdrv.CuDevice, ::Function, ::Any) at /home/akako/.julia/packages/CUDAnative/PFgO3/src/compiler/driver.jl:18
 [6] compile at /home/akako/.julia/packages/CUDAnative/PFgO3/src/compiler/driver.jl:16 [inlined]
 [7] macro expansion at /home/akako/.julia/packages/CUDAnative/PFgO3/src/execution.jl:269 [inlined]
 [8] #cufunction#123(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(CUDAnative.cufunction), ::getfield(GPUArrays, Symbol("##23#24")), ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Bool,1,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(==),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Int64,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{Array{Int64,1},Tuple{Bool},Tuple{Int64}}}}}}) at /home/akako/.julia/packages/CUDAnative/PFgO3/src/execution.jl:240
 [9] cufunction(::Function, ::Type) at /home/akako/.julia/packages/CUDAnative/PFgO3/src/execution.jl:240
 [10] macro expansion at /home/akako/.julia/packages/CUDAnative/PFgO3/src/execution.jl:208 [inlined]
 [11] macro expansion at ./gcutils.jl:87 [inlined]
 [12] macro expansion at /home/akako/.julia/packages/CUDAnative/PFgO3/src/execution.jl:205 [inlined]
 [13] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Bool,1}, ::Tuple{CuArray{Bool,1},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(==),Tuple{Base.Broadcast.Extruded{CuArray{Int64,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{Array{Int64,1},Tuple{Bool},Tuple{Int64}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /home/akako/.julia/packages/CuArrays/qZCAt/src/gpuarray_interface.jl:59
 [14] gpu_call(::Function, ::CuArray{Bool,1}, ::Tuple{CuArray{Bool,1},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(==),Tuple{Base.Broadcast.Extruded{CuArray{Int64,1},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{Array{Int64,1},Tuple{Bool},Tuple{Int64}}}}}, ::Int64) at /home/akako/.julia/packages/GPUArrays/t8tJB/src/abstract_gpu_interface.jl:151
 [15] gpu_call at /home/akako/.julia/packages/GPUArrays/t8tJB/src/abstract_gpu_interface.jl:128 [inlined]
 [16] copyto! at /home/akako/.julia/packages/GPUArrays/t8tJB/src/broadcast.jl:48 [inlined]
 [17] copyto! at ./broadcast.jl:797 [inlined]
 [18] copy(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Tuple{Base.OneTo{Int64}},typeof(==),Tuple{CuArray{Int64,1},Array{Int64,1}}}) at ./broadcast.jl:773
 [19] materialize(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(==),Tuple{CuArray{Int64,1},Array{Int64,1}}}) at ./broadcast.jl:753
 [20] accuracy(::CuArray{Float32,2}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}) at /home/akako/Documents/Kaggle/model-zoo/vision/mnist/mlp.jl:15
 [21] top-level scope at show.jl:555
 [22] include at ./boot.jl:326 [inlined]
 [23] include_relative(::Module, ::String) at ./loading.jl:1038
 [24] include(::Module, ::String) at ./sysimg.jl:29
 [25] exec_options(::Base.JLOptions) at ./client.jl:267
 [26] _start() at ./client.jl:436
in expression starting at /home/akako/Documents/Kaggle/model-zoo/vision/mnist/mlp.jl:17

[Process exited 1]
Moelf (Author) commented Mar 25, 2019

current workaround:

accuracy(x, y) = mean((onecold(m(x)) |> cpu) .== (onecold(y) |> cpu))

vchuravy commented

What is the type of `onecold(y)` and of `onecold(m(x))`?

The error says that you have a `Base.Broadcast.Extruded{Array{Int64,1}...`, so some part of your program is working with CPU arrays.
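To make the "not isbits" part concrete (my own illustration, not part of the thread): a GPU kernel argument must be a plain bits type, i.e. contain no pointers into CPU memory, and an ordinary `Array` fails that test:

```julia
# isbitstype reports whether a type is plain bits (no heap pointers),
# which is what CUDA kernel arguments must be.
isbitstype(Int64)           # true: raw bits, safe to pass to a kernel
isbitstype(Array{Int64,1})  # false: an Array carries a pointer to host memory
```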

KristofferC (Contributor) commented

Ref #582, #660, #612.

DhairyaLGandhi (Member) commented

#612 works for OneHot* cases.

Moelf (Author) commented Mar 27, 2019

> Ref #582, #660, #612.

Right, I filed this issue as well because the same code ran fine on an older version of Flux.

I guess I'll just have to wait for the fix to be merged.

Moelf closed this as completed Mar 27, 2019