Scale by number of cores or amount of memory #2208
Comments
I'm working on a first example for the first proposed solution.
A couple of thoughts on this. First, there tend to be physical limits on clusters that will at some point make scaling this way impractical. For instance, asking for 100 cores is (unless I'm mistaken) not going to allocate them all on the same CPU. Depending on the application this could be problematic. How do we handle cases where users intended to have cores on the same CPU? This is more likely to be a problem when it comes to memory, but the same question stands there. Second, some clusters have different options in terms of the nodes that could be chosen. How does the job scheduler decide which nodes are sufficient to add up to the user's requirements? Relatedly, how does a user specify a constraint such as preferring larger portions of single nodes (e.g. more CPU and more memory on each node that has workers)?
I'm not sure I understand what you mean. Are you talking about scaling with ...?
As this functionality would be common across the downstream projects, it would make more sense to me to put this in the upstream project. Nice proposal though!
If I'm not mistaken, we are in the upstream project, aren't we?
@guillaumeeb yes, but Matt asked if it should remain here or move downstream.
@jacobtomlinson sorry about my remark, I did not understand what you meant! Totally agree with you.
Just wondering if we need the concept of different pools of workers, where each pool can have different worker specs. For this to work well there would probably have to be a way to specify both "run this job/task only on the specified pools" and "prefer to run this job/task on these pools, but use other pools if the specified ones are busy". Each pool type could then have its own min/max workers, and the scheduler could scale the workers associated with each pool up or down depending on the number of jobs associated with each.
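As a purely illustrative sketch of this pool idea (the `WorkerPool` class and the `allowed_pools` / `preferred_pools` placement hints below are hypothetical names, not an existing dask API):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class WorkerPool:
    # Hypothetical description of one pool of homogeneous workers
    name: str
    cores: int
    memory: str
    min_workers: int = 0
    max_workers: Optional[int] = None


# A cluster could hold several pools, each with its own worker spec and bounds
pools = [
    WorkerPool("default", cores=4, memory="16 GB", min_workers=1, max_workers=100),
    WorkerPool("gpu", cores=8, memory="64 GB", max_workers=10),
]

# Per-task placement could distinguish a hard requirement ("only these pools")
# from a soft preference ("these pools first, others if they are busy")
placement = {
    "preprocess":  {"allowed_pools": ["default"]},   # hard constraint
    "train_model": {"preferred_pools": ["gpu"]},     # soft preference
}
```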
Just to mention that a first (simple) PR for this issue is available in #2209, in case people here missed it!

@dhirschfeld the concept of different pools seems interesting for scaling with specific worker profiles (GPU, big-memory nodes...). We could have something like this for dask-jobqueue:

```python
cluster.add_pool(processes=1, cores=1, memory='16GB', queue='qgpu', pool_name='GPU', walltime='02:00:00')
cluster.scale(10, pool='GPU')
cluster.scale(100)  # default pool
```

Maybe this belongs more to #2118? But both issues are linked.
When creating a cluster object we currently scale by number of workers, e.g. `cluster.scale(10)`, where `10` is the number of workers we want to have. However, it is common for users to think about clusters in terms of number of cores or amount of memory, rather than in terms of number of dask workers.
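For instance, one could imagine hypothetical calls along these lines (the `cores=` and `memory=` keyword arguments are part of the proposal, not an existing API):

```python
# Hypothetical resource-based scaling calls (proposed, not existing API)
cluster.scale(cores=100)
cluster.scale(memory='1 TB')
```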
What is the best way to achieve this uniformly across the dask deployment projects? I currently see two approaches, though there are probably more that others might see.

1. Establish a convention where clusters define information about the workers they will produce, something like the sketch after this list. The core `Cluster.scale` method would then translate the request into a number of workers and call the subclass's `scale` method appropriately.
2. Let the downstream classes handle this themselves, but ask them all to handle it uniformly. This places more burden onto downstream implementations, but also gives them more freedom to select worker types as they see fit based on their capabilities.
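A rough, hedged sketch of option 1 (the `worker_cores` / `worker_memory` attributes and the `scale_workers` hook are illustrative names, not an established interface):

```python
import math

from dask.utils import parse_bytes


class Cluster:
    # Per-worker resources each subclass would declare (illustrative names)
    worker_cores = 1
    worker_memory = "4 GB"

    def scale(self, n=0, cores=None, memory=None):
        # Translate a cores/memory request into a worker count using the
        # per-worker information declared by the subclass
        if cores is not None:
            n = max(n, math.ceil(cores / self.worker_cores))
        if memory is not None:
            n = max(n, math.ceil(parse_bytes(memory) / parse_bytes(self.worker_memory)))
        self.scale_workers(n)

    def scale_workers(self, n):
        # Subclass-specific scaling (hypothetical hook name)
        raise NotImplementedError


class KubeCluster(Cluster):
    # Example downstream class declaring its worker size
    worker_cores = 4
    worker_memory = "16 GB"

    def scale_workers(self, n):
        print(f"scaling to {n} workers")


# e.g. KubeCluster().scale(cores=100) would request ceil(100 / 4) == 25 workers
```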
cc dask-jobqueue, dask-yarn, dask-kubernetes