help with parallel computing application #517
Replies: 2 comments 1 reply
-
Hi @dukandender ! In order to help you determine how best to use Dagger to parallelize your evaluations, it would be helpful to know what kind of function you're evaluating, especially if you want to run it on GPUs. I would personally recommend a simpler approach than using DTables. As an example of an inner loop:

```julia
N_evals = 100
lck = Threads.ReentrantLock()
current_best_params = ...
current_best_fitness = ...

function update_and_get_best!(result)
    lock(lck) do
        # `global` is required to assign to the top-level bindings from here
        global current_best_fitness, current_best_params
        if best_fitness(result) < current_best_fitness
            current_best_fitness = best_fitness(result)
            current_best_params = best_candidate(result)
        end
        # Now could be a good time to checkpoint your current best result!
        # Consider incrementing a `Ref{Int}` counter here to determine when to checkpoint
        return (current_best_fitness, copy(current_best_params))
    end
end

function run_sim!()
    local_best_fitness = ...
    local_best_candidate = ...
    for eval_idx in 1:N_evals
        result = Dagger.@spawn myfunc(local_best_candidate)
        wait(Dagger.@spawn scope=Dagger.scope(worker=myid()) update_and_get_best!(result))
    end
end

@sync for _ in 1:Threads.nthreads()
    Threads.@spawn run_sim!()
end
# Final result is in `current_best_*`
```
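For reference, here is a self-contained, runnable reduction of that lock-protected pattern using plain `Threads.@spawn`, with a toy `evaluate` standing in for the real expensive `myfunc` (all names in this sketch are illustrative, not from the thread):

```julia
using Base.Threads

const lck = ReentrantLock()
best_fit = Inf
best_pars = Float64[]

# Toy stand-in for the expensive evaluation (hypothetical)
evaluate(params) = sum(abs2, params)

function update_best!(fitness, params)
    lock(lck) do
        # `global` is needed to assign to the top-level bindings
        global best_fit, best_pars
        if fitness < best_fit
            best_fit = fitness
            best_pars = copy(params)
        end
    end
end

@sync for _ in 1:nthreads()
    Threads.@spawn for _ in 1:100
        params = randn(4)
        update_best!(evaluate(params), params)
    end
end
# After @sync returns, the winner is in `best_fit` / `best_pars`
```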
-
@dukandender I do optimization and got thinking about your problem statement. Typically in optimization you have a global state and need to iterate over many small jobs that rely on that global state; then the state needs to be updated and used in subsequent runs of those jobs. In such an optimization setting, another approach would be to use green threads to dispatch remote jobs, but manipulate the global state in a single thread. The code could look more or less like this:
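A minimal stdlib-only sketch of that pattern (all names here are illustrative; the `@async` tasks stand in for remote jobs that in real use would be dispatched via `Distributed.remotecall` or `Dagger.@spawn`, and the propose/evaluate steps are toy placeholders):

```julia
function coordinator(n_jobs)
    results = Channel{Tuple{Float64,Vector{Float64}}}(n_jobs)
    # Green threads dispatch the jobs and report back through the channel
    for i in 1:n_jobs
        @async begin
            candidate = rand(3)             # hypothetical "propose" step
            fitness = sum(abs2, candidate)  # hypothetical "evaluate" step
            put!(results, (fitness, candidate))
        end
    end
    # Global state is read and updated only in this single task,
    # so no lock is needed
    best_fitness = Inf
    best_candidate = Float64[]
    for _ in 1:n_jobs
        fitness, candidate = take!(results)
        if fitness < best_fitness
            best_fitness = fitness
            best_candidate = candidate
        end
    end
    return best_fitness, best_candidate
end

best_f, best_c = coordinator(8)
```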
Does this template look useful for your purpose?
-
Hello, I am performing research in which I need to scale up my slow function evaluations massively. I am performing optimization with BlackBoxOptim.jl and would like to parallelize it with Dagger and DTables as follows:
For my current setup, probabilistic descent works very well and seemingly always decreases the loss on a 20-second cycle. However, each drop is extremely small, so by parallelizing, this optimization will hopefully happen much, much more quickly. To restate my aim, I would like to parallelize my optimizer by having workers search the space concurrently, while properly handling saving of the best pairs when they come up.
I am new to Dagger, DTables, and Distributed, so any help would be greatly appreciated. If anyone is able to help me conceptualize this in an MWE, I can probably adapt it for my application. For now, here is how I want to call the optimizations, but if a function is not the best way to allow for parallel computing, please let me know. Thank you for reading, and let me know if anything needs to be clarified.
```julia
function optimize_worker(initial_params, duration)
    res = bboptimize(
        loss, initial_params;
        NumDimensions=length(initial_params),
        MaxTime=duration,
        SearchRange=(-17, 17),
        TraceMode=:silent,
        PopulationSize=5000,
        Method=:probabilistic_descent,
        lambda=100,
    )
    # Use local names that don't shadow BlackBoxOptim's
    # `best_candidate`/`best_fitness` accessor functions
    params = best_candidate(res)
    fitness = best_fitness(res)
    return (best_params=params, best_fitness=fitness)
end
```
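One way such a worker could then be fanned out over processes is with `pmap` from the `Distributed` stdlib. This is only a sketch: `fake_optimize_worker` is a toy stand-in for `optimize_worker` so it runs without BlackBoxOptim, and the starting points and duration are made up:

```julia
using Distributed
# addprocs(4)                      # uncomment to add worker processes
# @everywhere using BlackBoxOptim  # then define `loss`/`optimize_worker` on every worker

# Toy stand-in for `optimize_worker` (hypothetical)
fake_optimize_worker(initial_params, duration) =
    (best_params = initial_params, best_fitness = sum(abs2, initial_params))

starts = [randn(4) for _ in 1:4]                     # one starting point per parallel run
results = pmap(p -> fake_optimize_worker(p, 20.0), starts)
overall_best = argmin(r -> r.best_fitness, results)  # keep the overall winner
```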