dask-cuda should automatically register GPU memory resource tracking #36
Comments
I agree that this would be great to have. It would also be interesting to think about what other dashboards we might provide for users about their GPUs, and about dashboards that might be useful outside the context of Dask. I know that some folks have been interested in this generally, and I'd be happy to help anyone who wanted to push on this effort longer term. Long term, we might want to use a library like pynvml to do this with a little less overhead. Those are both long-term comments, though; I have no strong objection to using the current approach in the short term.
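A minimal sketch of the pynvml idea mentioned above, assuming the pynvml package is installed and at least one NVIDIA GPU is visible (the device index is a placeholder):

```python
# Query GPU memory with pynvml instead of shelling out to nvidia-smi.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # device index 0 is an assumption
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU 0: {mem.used / 2**30:.2f} GiB used of {mem.total / 2**30:.2f} GiB")
finally:
    pynvml.nvmlShutdown()
```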
One other thing that might be interesting when thinking about memory specifically is leveraging Python's tracemalloc. This would generally give us a way of tracking memory allocations, and that input could be fed into other things like dashboards, or used by scripts, libraries, and other user applications. To use this we would need to register those allocations ourselves; it could be something that RMM does at the Python interface level. Alternatively, several different libraries could do this and we could filter out their individual contributions to memory usage.
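For the host side, a rough sketch of what tracemalloc-based tracking looks like; GPU allocations would still need to be registered explicitly (for example by RMM), which is not shown here:

```python
# Track host-side allocations with tracemalloc; these numbers could feed a
# dashboard the same way registered GPU allocations would.
import tracemalloc

tracemalloc.start()

buffers = [bytearray(1024 * 1024) for _ in range(10)]  # allocate ~10 MiB so there is something to see

current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 2**20:.1f} MiB, peak: {peak / 2**20:.1f} MiB")

for stat in tracemalloc.take_snapshot().statistics("lineno")[:3]:
    print(stat)  # top allocation sites by line number

tracemalloc.stop()
```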
As dask_cudf usage has grown, one-off scripts for monitoring GPU memory are starting to proliferate. Is there someone who can work on a first version of built-in GPU memory monitoring?
I'm actually a little bit ahead of you on this one :) dask/distributed#2932

If you're looking for dashboards today, then @rjzamora's work here is pretty slick: https://github.com/rjzamora/jupyterlab-bokeh-server/tree/pynvml
Randy, is this what you meant? Or were you looking more for spill-to-disk kinds of things? Or, more broadly, why do you want us to track GPU memory?
It's a start. When debugging workflows, there's often a lengthy pause while someone driving a notebook jumps back to (multiple) terminals to check GPU memory usage via nvidia-smi. Having the dashboard show total GPU memory used by persisted DataFrames would be a great first step in alleviating that pain.

A future improvement would be surfacing peak memory used, since DataFrame operations often cause significant, albeit temporary, spikes in GPU memory usage. Beyond that, having access to this type of data might let us determine whether a given workflow could safely benefit from using multiple threads or processes per GPU worker.
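As an interim workaround for the terminal-hopping described above, here is a sketch of polling per-worker GPU memory from the notebook with client.run, assuming pynvml is installed on every worker (the scheduler address and device index are placeholders):

```python
# Poll per-worker GPU memory from the notebook rather than from terminals.
from dask.distributed import Client


def gpu_memory_used():
    import pynvml  # imported on the worker

    pynvml.nvmlInit()
    mem = pynvml.nvmlDeviceGetMemoryInfo(pynvml.nvmlDeviceGetHandleByIndex(0))
    return {"used_gib": mem.used / 2**30, "total_gib": mem.total / 2**30}


client = Client("tcp://scheduler-address:8786")  # placeholder address
print(client.run(gpu_memory_used))  # maps each worker address to its report
```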
If you're on a single GPU, then you probably want the solution in https://github.com/rjzamora/jupyterlab-bokeh-server/tree/pynvml. This has been done for a while; we just need to package it up (which requires people comfortable with both Python and JS, which is somewhat rare). @jacobtomlinson seemed interested in doing this. Jacob, if this is easy for you and not too much of a distraction, could you prioritize it?
Regardless, I imagine I'll have the Dask version done in a week or two. It could be done sooner if this is a high priority for you, Randy. I get the sense that it's only a mild annoyance for now and not a burning fire, but I could be wrong.
You read me well =) I'm glad to know that there's been significant progress in the meantime.
There is an initial pair of plots in dask/distributed#2944. They won't be very discoverable until dask/dask-labextension#75 is resolved, but you can navigate to them directly. We might ask someone like @rjzamora to expand on that work, but I still think that, if you're on a single node, you're probably better off with his existing project.
I think https://github.com/rapidsai/jupyterlab-nvdashboard addresses most of the requests (if not all). Regardless, I think that's now a more appropriate project for future feature requests than dask-cuda, so I'm closing this, but feel free to reopen should something in dask-cuda still be necessary.
Hi all, it looks like this functionality has been out for a while. One of our users needs to get GPU metrics from dask-cuda-workers. In this setup, the scheduler is running on one node and the dask-cuda-workers on different nodes. The question here is what

On the other hand, I also tried the dask-labextension, but it only shows GPU metrics from the machine running the scheduler.

These are the Dask-related packages in use:

and Jupyter
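One way to check what the remote dask-cuda-workers are already reporting is to inspect the worker metrics held by the scheduler. A minimal sketch, assuming a reachable scheduler (the address is a placeholder, and the presence and naming of the "gpu" metrics entry depends on the distributed and pynvml versions installed):

```python
# Inspect the metrics that workers already report to the scheduler.
from dask.distributed import Client

client = Client("tcp://scheduler-address:8786")  # placeholder address

for addr, worker in client.scheduler_info()["workers"].items():
    # The "gpu" entry is typically only present when the workers can import
    # pynvml; exact key names may differ across distributed versions.
    print(addr, worker.get("metrics", {}).get("gpu"))
```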
I'm not sure about any of those questions. @quasiben, @jakirkham, are you familiar with those?
Could we please move this over to a new issue?

Edit: I'd also recommend including a screenshot if you can. That should make it a bit clearer what's going on 🙂
It would be nice for dask-cuda workers to automatically add GPU memory tracking like this.
Nicer still would be for the ${dask_scheduler_ip}:8787/status dashboard to show GPU memory bytes stored in addition to host memory bytes stored.