
Conversation

@JohannesGaessler
Collaborator

Currently, when trying to fit a model onto multiple GPUs, the error message only tells you that you are running out of memory, but not on which GPU, which is inconvenient. More generally, knowing which device is causing issues is useful debugging information. This PR makes the CUDA error messages mention the device on which the error occurred.
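
For illustration, a minimal sketch of the kind of check this enables, assuming a `CUDA_CHECK`-style wrapper macro; the macro name, message format, and exit behavior here are illustrative rather than the exact code in the PR:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical error-checking macro: on failure, query the current
// device with cudaGetDevice() and include it in the error message.
#define CUDA_CHECK(call)                                                  \
    do {                                                                  \
        cudaError_t err_ = (call);                                        \
        if (err_ != cudaSuccess) {                                        \
            int dev_ = -1;                                                \
            cudaGetDevice(&dev_);                                         \
            fprintf(stderr, "CUDA error %d on device %d at %s:%d: %s\n",  \
                    (int) err_, dev_, __FILE__, __LINE__,                 \
                    cudaGetErrorString(err_));                            \
            exit(1);                                                      \
        }                                                                 \
    } while (0)

int main() {
    int device_count = 0;
    CUDA_CHECK(cudaGetDeviceCount(&device_count));

    // Deliberately oversized allocation to provoke an out-of-memory
    // error; the message now says which device it occurred on.
    void * ptr = nullptr;
    CUDA_CHECK(cudaMalloc(&ptr, (size_t) 1 << 62));
    return 0;
}
```

With a check like this, an out-of-memory failure prints the id of the device it was observed on instead of only the error string.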

@cebtenzzre
Collaborator

This would be much more convenient than having to watch nvidia-smi closely to find out which GPU ran low first.

@slaren
Member


Since errors may be caused by a previous asynchronous call, it might be more accurate to only report which device is current, without making any claims about which device actually generated the error.
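
To see why, here is a hypothetical sketch (assuming a machine with at least two GPUs; `faulty_kernel` and the error it triggers are invented for the example) of how an asynchronous failure on one device can surface on a later call made while another device is current:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Deliberately broken kernel: writing through a host pointer from device
// code triggers an asynchronous illegal-address error.
__global__ void faulty_kernel(int * out) {
    *out = 42;
}

int main() {
    int host_value = 0;

    cudaSetDevice(0);
    // The launch itself returns cudaSuccess; the illegal access happens
    // later, asynchronously, on device 0.
    faulty_kernel<<<1, 1>>>(&host_value);

    cudaSetDevice(1); // the current device is now 1

    // The deferred error is surfaced by this later, unrelated call,
    // while cudaGetDevice() reports device 1 rather than the device
    // that actually faulted.
    cudaError_t err = cudaDeviceSynchronize();
    int dev = -1;
    cudaGetDevice(&dev);
    fprintf(stderr, "error '%s' observed while device %d is current\n",
            cudaGetErrorString(err), dev);
    return 0;
}
```

Reporting the current device is therefore a statement about where the error was observed, not necessarily where it originated.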

@JohannesGaessler JohannesGaessler merged commit 8a4ca9a into ggml-org:master Sep 11, 2023
