PyTorch CUDA allocator optimization for dynamic batch shape dataloading in ASR #9061
Conversation
Beautiful. I appreciate that you print a warning when someone has already set the environment variable since this is a global configuration.
However, that is not the only way that someone may have set these options (that is, someone could have called _set_allocator_settings() on their own). Of course, this is not really expected here. It would be appreciated if you could look into whether there is a more robust way to check whether this option is already set than inspecting the environment variable.
Also, it's not clear to me how this setting might interact with a non-default allocator (for example, the RMM torch allocator provided by nvidia). Presumably setting this config has no effect in this case.
Approving anyway since I trust you to adequately investigate whether any of my concerns above are real concerns.
I grepped through pytorch 2.3 code. Unfortunately it looks like only
Good point, I didn't know about RMM. It turns out it's available in our containers, so I just tested it out on a 1-GPU training run. It seems to "just work" (although I had to decrease the batch size to avoid CUDA OOM after ~100 steps), so I assume these options are discarded for custom allocators.
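For reference, a minimal sketch of plugging in the RMM torch allocator mentioned above, assuming a recent RMM release that exposes `rmm.allocators.torch`; this only mirrors the kind of quick test described here and is not part of this PR (the pool size is an arbitrary example):

```python
import rmm
import torch
from rmm.allocators.torch import rmm_torch_allocator

# Initialize RMM with a pooling device memory resource (1 GiB initial pool as an example).
rmm.reinitialize(pool_allocator=True, initial_pool_size=2**30)

# Route PyTorch CUDA allocations through RMM. This must happen before any CUDA
# memory is allocated. With a custom allocator installed, PYTORCH_CUDA_ALLOC_CONF
# options such as expandable_segments apply only to the native caching allocator,
# so they would effectively be ignored.
torch.cuda.memory.change_current_allocator(rmm_torch_allocator)

x = torch.randn(1024, 1024, device="cuda")  # allocated via RMM
```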
PyTorch CUDA allocator optimization for dynamic batch shape dataloading in ASR (NVIDIA#9061)
* Option to auto-set expandable_segments in PyTorch CUDA allocator
* warning
* set opts after parsing config
Signed-off-by: Piotr Żelasko <petezor@gmail.com>
What does this PR do?
I was profiling a particularly unlucky run that had dynamic batch shapes and operated close to the maximum GPU RAM. The profile revealed that memory was being re-allocated for every mini-batch, adding about 30% overhead to training. This can be resolved gracefully by turning on the `expandable_segments` option in the PyTorch CUDA allocator, which extends existing allocations as needed instead of reallocating them, removing this significant overhead. In this PR I'm proposing to automatically set this option during dataloader instantiation. It can be disabled via configuration.
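A minimal sketch of the check-and-set behavior described above, not the exact code in this PR: the helper name is made up, but `torch.cuda.memory._set_allocator_settings` is the (private) PyTorch entry point mentioned in the review thread and accepts the same option string as `PYTORCH_CUDA_ALLOC_CONF`.

```python
import logging
import os

import torch


def maybe_enable_expandable_segments(enabled: bool = True) -> None:
    """Enable expandable_segments in the CUDA caching allocator unless the user already configured it."""
    if not enabled or not torch.cuda.is_available():
        return
    if os.environ.get("PYTORCH_CUDA_ALLOC_CONF"):
        # Respect an existing user-provided allocator configuration.
        logging.warning(
            "PYTORCH_CUDA_ALLOC_CONF is already set; not overriding it with expandable_segments:True."
        )
        return
    # Private PyTorch API; takes the same syntax as PYTORCH_CUDA_ALLOC_CONF.
    torch.cuda.memory._set_allocator_settings("expandable_segments:True")
```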
For documentation purposes, the profile before the change (red blocks in the CUDA API timelines indicate malloc/free):
and the profile after the fix:
The blue bars at the top of the profiles (CUDA HW kernel utilization timelines) are more condensed in the new profile, indicating improved GPU utilization.
Collection: ASR
Changelog
Usage
# Add a code snippet demonstrating how to use this
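The usage snippet was left empty in the template; a hedged example of the knobs involved is below. `PYTORCH_CUDA_ALLOC_CONF` is the standard PyTorch allocator interface, while the config key shown for opting out is hypothetical, since the exact name is not quoted in this thread:

```python
import os

# Set the allocator config yourself (before CUDA is initialized) if you want full control;
# the dataloader will detect this, print a warning, and leave your setting untouched.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# Hypothetical config override to disable the automatic setting entirely
# (the real key lives in the dataloader config added by this PR; the name may differ):
#   model.train_ds.cuda_expandable_segments=false
```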
Jenkins CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
There's no need to comment `jenkins` on the PR to trigger Jenkins CI. The GitHub Actions CI will run automatically when the PR is opened.
To run CI on an untrusted fork, a NeMo user with write access must click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines contain specific people who can review PRs to various areas.
Additional Information