simplify test machinery #2498
Conversation
@pxl-th I'm seeing something odd with convolutions on AMDGPU:

```julia
using Flux, AMDGPU, Zygote

m = Conv((2, 2), 3 => 4)
x = rand(Float32, 5, 5, 3, 2)
y, g = Zygote.withgradient(x -> sum(m(x)), x)

dev = gpu_device(force=true)
m_gpu = m |> dev
x_gpu = x |> dev
y_gpu, g_gpu = Zygote.withgradient(x -> sum(m_gpu(x)), x_gpu)

@assert isapprox(y_gpu, y; atol=1e-4)
@assert isapprox(Array(g_gpu[1]), g[1]; atol=1e-4)
```
@CarloLucibello something broke CPU-GPU transfer for convolutions. This is no longer executed: Flux.jl/ext/FluxAMDGPUExt/functor.jl, line 89 at 09a16ee
Ah right, that is when using
I investigated a bit, and hooking into MLDataDevices' data-transfer mechanism is not straightforward. So I propose we merge this PR as is and follow up with a fix later. The problem is not introduced by this PR in any case; it was pre-existing and is now just exposed. Can I get an approval?
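For reference, a minimal sketch of what such a hook might look like, under two assumptions about code not shown here: (a) MLDataDevices walks models with `Functors.fmap` and consults an overridable `MLDataDevices.isleaf` to decide where recursion stops, and (b) the skipped FluxAMDGPUExt code was flipping convolution kernels for MIOpen. This is a sketch, not the actual fix:

```julia
# Sketch only, not the actual fix. Assumes MLDataDevices stops recursion at
# types for which MLDataDevices.isleaf is true and then hands the whole leaf
# to Adapt, and that MIOpen wants spatially flipped kernels relative to
# NNlib's CPU layout (as the old FluxAMDGPUExt code did).
using Adapt, Flux, MLDataDevices

# Treat Conv layers as leaves so the adaptor sees them whole.
MLDataDevices.isleaf(::Flux.Conv) = true

function Adapt.adapt_structure(to, m::Flux.Conv)
    # A real fix would dispatch only on the AMDGPU adaptor; `to` is left
    # generic here for brevity.
    Flux.Conv(adapt(to, reverse(m.weight; dims=(1, 2))), adapt(to, m.bias), m.σ;
              stride=m.stride, pad=m.pad, dilation=m.dilation, groups=m.groups)
end
```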
Yes, I can take a look at it as well tomorrow; we can fix this in a separate PR.
I think the solution involves modifying this line
This PR introduces a single function `test_gradients` that can be used across all possible differentiation scenarios (on CPU and any GPU backend, on layers or simple functions, ...).
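As a rough illustration of the idea (not the PR's actual implementation), such a helper might compare device results against a CPU reference along these lines; the name `test_gradients_sketch` is hypothetical, and `gpu_device` comes from MLDataDevices as in the reproduction above:

```julia
# Illustrative sketch of a unified gradient test; the real test_gradients in
# this PR may differ. Compares the value and the input-gradients computed on
# the current device against a CPU reference.
using Flux, Zygote, Test

function test_gradients_sketch(f, xs::AbstractArray...; atol=1e-4, rtol=1e-4)
    # CPU reference: value and gradients w.r.t. the inputs.
    y, gs = Zygote.withgradient((args...) -> sum(f(args...)), xs...)

    # Same computation after moving the function/layer and inputs to the
    # default device (a no-op when no GPU is available).
    dev = gpu_device()
    f_dev = f |> dev
    xs_dev = map(dev, xs)
    y_dev, gs_dev = Zygote.withgradient((args...) -> sum(f_dev(args...)), xs_dev...)

    @test isapprox(y_dev, y; atol, rtol)
    for (g_dev, g) in zip(gs_dev, gs)
        @test isapprox(Array(g_dev), g; atol, rtol)
    end
end

# Usage, mirroring the reproduction above:
test_gradients_sketch(Conv((2, 2), 3 => 4), rand(Float32, 5, 5, 3, 2))
```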