
Typical accuracy function using onecold with a OneHotMatrix fails to compile on GPU #582

Closed

KristofferC opened this issue Jan 29, 2019 · 3 comments

@KristofferC (Contributor) commented Jan 29, 2019

The following crashes when being compiled for the GPU:

julia> x = rand(Float32, 10, 3) |> gpu;

julia> y = Flux.onehotbatch(1:3, 1:10) |> gpu;

julia> accuracy(x, y) = Flux.onecold(x) .== Flux.onecold(y);

julia> accuracy(x, y)
ERROR: GPU compilation of #23(CuArrays.CuKernelState, CUDAnative.CuDeviceArray{Bool,1,CUDAnative.AS.Global}, Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(==),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Int64,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{Array{Int64,1},Tuple{Bool},Tuple{Int64}}}}) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64}},typeof(==),Tuple{Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Int64,1,CUDAnative.AS.Global},Tuple{Bool},Tuple{Int64}},Base.Broadcast.Extruded{Array{Int64,1},Tuple{Bool},Tuple{Int64}}}}.
That type is not isbits, and such arguments are only allowed when they are unused by the kernel.
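
Reading the error, the broadcast mixes a CUDAnative.CuDeviceArray{Int64} with a plain Array{Int64,1}: onecold on the GPU-resident OneHotMatrix apparently returns a CPU Array, and a closure capturing a non-isbits Array cannot be passed to a CUDA kernel. A workaround sketch (not from the report itself, assuming Flux's cpu transfer helper and the Statistics stdlib) is to do the comparison entirely on the CPU:

# Workaround sketch: move both arrays to the CPU before calling onecold,
# so the broadcast never mixes a CuArray with a plain Array.
using Statistics
accuracy_cpu(x, y) = mean(Flux.onecold(cpu(x)) .== Flux.onecold(cpu(y)))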
@zsz00 (Contributor) commented Feb 1, 2019

I hit the same problem: running the mnist.jl demo on the GPU works on Flux 0.7.1 but errors on 0.7.2.

Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E3-1230 v5 @ 3.40GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.1 (ORCJIT, skylake)

CUDA 10 + cuDNN 7.4
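
Since the regression appeared between Flux 0.7.1 and 0.7.2, a hedged stopgap (not suggested in the thread) is to pin Flux back to the last working release until the fix lands:

# Hypothetical stopgap, assuming the Pkg API shipped with Julia 1.1:
using Pkg
Pkg.add(PackageSpec(name = "Flux", version = "0.7.1"))
Pkg.pin("Flux")   # keep the resolver from upgrading it again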

@KristofferC (Contributor, Author) commented

The PR got reverted, so it would be good to reopen.

@darsnack (Member) commented Feb 12, 2021

This is fixed by the latest changes to the one-hot code (see #1448):

julia> x = rand(Float32, 10, 3) |> gpu;

julia> y = Flux.onehotbatch(1:3, 1:10) |> gpu;

julia> accuracy(x, y) = Flux.onecold(x) .== Flux.onecold(y);

julia> accuracy(x, y)
3-element CuArray{Bool,1}:
 1
 0
 0
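
As a usage note (a sketch built on the comment above, not part of it): the elementwise comparison now stays on the GPU and returns a CuArray{Bool}, so a scalar accuracy is just the mean of that array, which reduces on-device:

# Typical scalar metric on top of the now-working comparison;
# Statistics.mean reduces the CuArray{Bool} on the GPU.
using Statistics
accuracy(x, y) = mean(Flux.onecold(x) .== Flux.onecold(y))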
