
Significant time spent moving medium-size arrays to GPU, type instability #2414

Closed
BioTurboNick opened this issue Mar 27, 2024 · 10 comments

@BioTurboNick (Contributor) commented Mar 27, 2024

There are occasions where `@profview` shows a seemingly inordinate amount of time spent moving data to the GPU given the array size, and possibly excessive GPU memory usage? I'm not sure what I should be expecting. Could it be related to having two array outputs rather than one? I've also seen type instability reported through `@code_warntype` and Cthulhu that I'm not sure how to resolve.

Using a toy example to show the effect:

```julia
]activate --temp
]add cuDNN, CUDA, Flux

using CUDA, cuDNN, Flux, Statistics

struct Split{T1 <: Dense, T2 <: Dense}
    s1::T1
    s2::T2
    max_sources::Int
end

function Split(feature_dim::Int, max_sources::Int)
    Split(
            Dense(feature_dim => 2 * max_sources),
            Dense(feature_dim => max_sources),
            max_sources)
end

Flux.@layer Split

(m::Split)(input) = reshape(m.s1(input), 2, m.max_sources, :), m.s2(input)

function imagegen_test(batch_size)
    return randn(Float32, 2048, batch_size), (randn(Float32, 2, 20, batch_size), randn(Float32, 20, batch_size))
end

function test()
    training_batch_size = 256
    iters_per_eval = 64
    network = Split(2048, 128) |> gpu
    optimizer = Flux.Optimise.Adam(1E-4)
    opt_state = Flux.setup(optimizer, network)
    kernel_sigmas_gpu = Float32[64.0, 320.0, 640.0, 1920.0] |> gpu
    for i in 1:iters_per_eval
        training_data = imagegen_test(training_batch_size) |> gpu
        Flux.train!(network, (training_data,), opt_state) do m, x, y
            θ_pred, intensity_pred = m(x)
            loss_func(θ_pred, intensity_pred, y..., kernel_sigmas_gpu)
        end
    end

    return nothing
end

pairwise_cityblock(c) =
    dropdims(sum((Flux.unsqueeze(c, 2) .- Flux.unsqueeze(c, 3)) .|> abs, dims = 1), dims = 1)

function kernel_loss(K, predicted_weights, target_weights)
    weights = [predicted_weights; -target_weights]
    embedding_loss = batched_vec(Flux.unsqueeze(weights, 1), batched_vec(K, weights))
    return dropdims(embedding_loss, dims = 1)
end

function multiscale_l1_laplacian_loss(θ_predicted, w_predicted, θ_target, w_target, inv_scale_factors)
    D = pairwise_cityblock([θ_predicted θ_target])
    losses = kernel_loss.(eachslice(exp.(-D ./ reshape(inv_scale_factors, 1, 1, 1, :)), dims = 4), Ref(w_predicted), Ref(w_target))
    return sum(losses)
end

function loss_func(x1, y1, x2, y2, kernel_sigmas)
    mean(multiscale_l1_laplacian_loss(x1, y1, x2, y2, kernel_sigmas))
end

test()
@profview test()
```
@ToucheSir (Member)

Do you have a MWE which only captures the CPU <-> GPU data movement? 95% of the code here is unrelated, so it'd be tricky to determine what the culprit might be.

@BioTurboNick (Contributor, Author) commented Mar 28, 2024

I trimmed a bit, but I'm not sure I can minimize it much further? This is already quite reduced from the real network. If I replace the body of `loss_func` with a call to `randn()`, the amount of time that the line `training_data = imagegen_test(training_batch_size) |> gpu` takes in `test()` drops from a profiler count of 2700 to 84.
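That is, a stand-in along these lines (illustrative only):

```julia
# stand-in loss: returns a scalar with no real GPU work, to isolate the transfer cost
loss_func(x1, y1, x2, y2, kernel_sigmas) = randn()
```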

@ToucheSir (Member)

In that case, it's possible that the allocation or copying required for `gpu` is having to wait on a backlog of other CUDA functions or kernels. You could look into preallocating GPU buffers for the input data, using `CUDA.unsafe_free!`, or wrapping your batches in https://cuda.juliagpu.org/stable/usage/memory/#Batching-iterator. Otherwise, the only other suggestions I can make without a deep dive into the example code would be the usual GPU memory-handling tricks (e.g. `CUDA.reclaim()`) and trying to reduce memory allocations in your loss function's call stack.
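For example, something roughly like this for the loop above (untested; the exact `unsafe_free!` calls depend on how the batch is structured):

```julia
for i in 1:iters_per_eval
    training_data = imagegen_test(training_batch_size) |> gpu
    Flux.train!(network, (training_data,), opt_state) do m, x, y
        θ_pred, intensity_pred = m(x)
        loss_func(θ_pred, intensity_pred, y..., kernel_sigmas_gpu)
    end
    # hand the batch's buffers back to the CUDA memory pool once the step is done
    x, (y1, y2) = training_data
    foreach(CUDA.unsafe_free!, (x, y1, y2))
    # reclaim is relatively expensive, so only run it occasionally
    i % 16 == 0 && CUDA.reclaim()
end
```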

@BioTurboNick (Contributor, Author)

Hmm okay, thanks. Putting `CUDA.reclaim()` at the end of each iteration of the training loop shrinks the time spent on that line, but the savings are almost entirely replaced by time spent in `reclaim()`. `unsafe_free!` on the training data has no effect.

I must not be understanding something about GPUs here. The total size of the training data generated in each iteration of this example is 2 MB. The weights and biases of the network are 3 MB. The optimizer state is 6 MB. How am I saturating 16 GB of VRAM and 16 GB of shared memory?

@ToucheSir (Member)

`reclaim` is quite expensive, so I'd only recommend running it every few iterations unless you're hitting OOMs (which is clearly not a problem here). Can you check whether `gpu` is taking less time (not samples) now? If it's still taking around the same amount of time, then the problem may lie elsewhere. If it's taking significantly less time, we can get more into what might be happening (e.g. how the Julia GC sucks for GPU code because it doesn't know about GPU memory pressure).
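For instance, wrapping the transfer in `@timed` inside the loop (sketch):

```julia
# report wall-clock time for the transfer instead of profiler sample counts
stats = @timed begin
    training_data = imagegen_test(training_batch_size) |> gpu
end
println("gpu transfer: ", stats.time, " s (", stats.gctime, " s of that in GC)")
```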

@BioTurboNick (Contributor, Author) commented Mar 29, 2024

Watching Task Manager's record of GPU memory, `reclaim` doesn't seem to do much until memory is full: if `reclaim` theoretically runs 6 times, I only see 2-3 drops in GPU memory usage.

Using `@timed` on that line and `@time` on the whole call:

| reclaim frequency | `gpu` line time | total time |
| --- | --- | --- |
| every iteration | 0.002 s | 35.1 s |
| skip 1 | 0.01 s | 21.5 s |
| none | 0.1 s | 28.5 s |

EDIT: And when I check what `reclaim` returns, it always returns `nothing`, which according to the docstring means it didn't do anything?

@BioTurboNick (Contributor, Author) commented Mar 29, 2024

If I run the GC every other iteration instead of reclaim:

0.001 s or 0.05 s for the line (alternating; longer on the iterations where the GC runs) and 9 s total, though 41% of that is GC.

If I run the GC every iteration, 0.002 s for the line consistently, but 11 s total and 67% GC time.

EDIT: Ah, sweet spot... `GC.gc(false)` every other iteration: 0.001 s for the line and 0.8 s total, with 3% GC time. And `CUDA.reclaim()` at the end of the run drops GPU memory to minimal.
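Roughly, the loop now looks like this (sketch of the change, using the same names as the example above):

```julia
for i in 1:iters_per_eval
    training_data = imagegen_test(training_batch_size) |> gpu
    Flux.train!(network, (training_data,), opt_state) do m, x, y
        θ_pred, intensity_pred = m(x)
        loss_func(θ_pred, intensity_pred, y..., kernel_sigmas_gpu)
    end
    # an incremental (non-full) collection every other iteration keeps memory in check
    # without paying for a full GC sweep each time
    iseven(i) && GC.gc(false)
end
CUDA.reclaim()  # one reclaim at the end drops GPU memory back to minimal
```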

@ToucheSir (Member)

Yes, that sounds about right. I neglected to mention `GC.gc` because the linked CUDA.jl docs cover it, but it's usually required before calling `reclaim`. Looks like this is a classic case of the GC not playing nice with allocation-heavy GPU code, then.

@BioTurboNick (Contributor, Author)

Thanks for your help! I'm new to GPU work so I was primarily relying on Flux docs. I'll see if there's something to note there. Meanwhile, I saw there's initial work being done over on CUDA.jl to run the GC heuristically, which would be nice.

@CarloLucibello (Member)

Closing as addressed by #2416; feel free to reopen if needed.
