Learning from ipyparallel #49
Comments
It's a good point.
Also, just for fun: ipyparallel is also a reasonable way to deploy dask itself:
http://ipyparallel.readthedocs.io/en/latest/api/ipyparallel.html#ipyparallel.Client.become_dask
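For reference, a minimal sketch of that deployment path, assuming an IPython cluster is already running (e.g. started with `ipcluster start -n 4`) and that dask.distributed is installed alongside ipyparallel:

```python
import ipyparallel as ipp

# Connect to the already-running IPython cluster (default profile assumed).
rc = ipp.Client()

# Turn the IPython engines into a dask scheduler/workers; this returns a
# distributed.Client connected to the resulting cluster.
dask_client = rc.become_dask()

# From here on, ordinary dask.distributed usage applies.
future = dask_client.submit(lambda x: x + 1, 41)
print(future.result())  # 42
```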
On Tue, May 1, 2018 at 12:06 PM, jakirkham wrote:
ipyparallel (https://github.com/ipython/ipyparallel), formerly part of the Jupyter Notebook, provides similar functionality to dask distributed and dask-jobqueue in that it allows users to start up a cluster and submit work to it. The model of ipyparallel is a bit different from that of dask distributed, but that doesn't really concern us here.
What is interesting is that ipyparallel has a trove of knowledge about starting jobs on various common HPC schedulers. This knowledge is largely baked into one file (https://github.com/ipython/ipyparallel/blob/6.1.1/ipyparallel/apps/launcher.py). For HPC schedulers already in dask-jobqueue, it's worth comparing notes with ipyparallel and seeing what can be learned on this front. As for the schedulers not present in dask-jobqueue, it's worth taking a look at ipyparallel's implementations and seeing what can be gleaned from them and how they might be used here. It's probably also worth looking at how things have been refactored in ipyparallel to see if there are any useful strategies for modeling HPC schedulers generally.
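To make the "trove of knowledge in one file" point concrete: the recurring strategy is to pair a per-scheduler batch-script template with a submit command and fill the template in at launch time. The sketch below is a hypothetical distillation of that pattern; the class and attribute names are invented for illustration and are not the actual API of ipyparallel or dask-jobqueue:

```python
import subprocess
import tempfile


class BatchLauncher:
    """Generic pattern: a scheduler-specific script template plus a submit command."""

    submit_command = None   # e.g. ["sbatch"], ["qsub"], or ["bsub"]
    script_template = None  # scheduler directives followed by the worker command

    def launch(self, **params):
        # Render the batch script and hand it to the scheduler's submit command.
        script = self.script_template.format(**params)
        with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
            f.write(script)
            path = f.name
        return subprocess.run(self.submit_command + [path],
                              capture_output=True, text=True)


class SlurmLauncher(BatchLauncher):
    submit_command = ["sbatch"]
    script_template = """#!/bin/bash
#SBATCH -J {name}
#SBATCH -n {cores}
#SBATCH -t {walltime}
{worker_command}
"""
```

Per-scheduler quirks (for example, LSF's bsub reading the script on stdin rather than as an argument) then live in the subclasses, which is roughly how launcher.py keeps its PBS, SGE, and LSF variants, among others, side by side in a single module.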
Thanks for sharing. I had a quick skim. There are some remarkable similarities between the two projects, particularly at the level of abstraction between the different scheduling systems. It may be worth using this as a starting point when developing the LSF cluster (#4).
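On the LSF side specifically, the batch script such a cluster class would need to generate and feed to bsub looks roughly like the rendering below; the directive values and the dask-worker invocation are illustrative placeholders rather than what the eventual implementation will use:

```python
# Hypothetical illustration of the LSF job script an LSF cluster class
# might render; bsub typically reads this script on stdin.
lsf_template = """#!/bin/bash
#BSUB -J dask-worker
#BSUB -q {queue}
#BSUB -n {cores}
#BSUB -W {walltime}
#BSUB -o dask_worker_%J.out
dask-worker {scheduler_address} --nthreads {cores}
"""

print(lsf_template.format(queue="normal", cores=4, walltime="01:00",
                          scheduler_address="tcp://scheduler-host:8786"))
```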
Wow, big source file! Scary at first, but there's a lot of information in there; probably worth taking a second look. Thanks for sharing.
Closing this for now; we may want to look at it again for some other scheduler implementation. It was useful for LSF. We may also want to look at this for #133, along with https://github.com/jupyterhub/batchspawner.