[TPU] Update dynamo dump file name in compilation test #19108
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Hello @lsy323, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello! Gemini here, providing a summary of this pull request to help everyone get up to speed quickly.
This pull request, authored by @lsy323, addresses a breaking change in the tests/tpu/test_compilation.py test file. The test relies on finding specific dump files generated by torch.compile to verify the compiled FX graphs. However, recent PyTorch nightly builds have changed the naming convention for these dump files.
The PR updates the test to look for the new file name pattern (__compiled_fn*Forward_graph*.py) instead of the old one (__compiled_fn*Captured*.py), ensuring the test can correctly locate and analyze the compiled graph dumps generated by the latest PyTorch versions. Additionally, a breakpoint() was added, likely for debugging purposes during development.
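For illustration, here is a minimal sketch of the updated lookup; the helper name is hypothetical, and only the glob patterns come from this PR:

```python
import glob
import os


def find_forward_graph_dumps(temp_dir: str) -> list[str]:
    """Hypothetical helper: locate torch.compile FX graph dumps in temp_dir."""
    # Recent PyTorch nightlies name the dump files
    # "__compiled_fn*Forward_graph*.py"; older builds used
    # "__compiled_fn*Captured*.py".
    return sorted(
        glob.glob(os.path.join(temp_dir, "__compiled_fn*Forward_graph*.py")))
```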
Highlights
- Test Fix: Updates the `tests/tpu/test_compilation.py` test to adapt to a change in `torch.compile`'s dump file naming convention in recent PyTorch nightlies.
- File Pattern Update: Changes the glob pattern used to find compiled graph dump files from `__compiled_fn*Captured*.py` to `__compiled_fn*Forward_graph*.py`.
- Debugging Aid: Adds a `breakpoint()` within the `test_compilation` function for easier debugging.
Changelog
- tests/tpu/test_compilation.py
  - Updated the glob pattern on line 69 to search for `__compiled_fn*Forward_graph*.py` files.
  - Added a `breakpoint()` on line 75.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Test fails, name changed,
Glob pattern now updated,
Green checkmark returns.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩
Code Review
The pull request updates the test to check for the correct FX graph dump file name. The change seems straightforward and necessary due to updates in PyTorch nightly. However, I have a few suggestions to improve the code's clarity and robustness.
Summary of Findings
- Unintentional breakpoint: The `breakpoint()` call should be removed before merging to prevent accidental pausing of test execution.
- Lack of context for filename change: Adding a comment explaining the reason for the filename change would improve code maintainability.
Merge Readiness
The pull request addresses an important issue with the test, but the leftover breakpoint() call is a significant concern. I recommend removing the breakpoint and adding a clarifying comment before merging. I am unable to approve this pull request, and recommend that others review and approve this code before merging. At a minimum, the critical and high severity comments should be addressed before merging.
```diff
 compiled_fns = sorted(glob.glob(
-    os.path.join(temp_dir, "__compiled_fn*Captured*.py")),
+    os.path.join(temp_dir, "__compiled_fn*Forward_graph*.py")),
```
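With the reviewer's suggestions applied (no stray `breakpoint()`, and a comment documenting the rename), the updated lines might read as in this sketch:

```python
# PyTorch nightlies renamed the dynamo dump from
# "__compiled_fn*Captured*.py" to "__compiled_fn*Forward_graph*.py",
# so glob for the new file name.
compiled_fns = sorted(glob.glob(
    os.path.join(temp_dir, "__compiled_fn*Forward_graph*.py")))
```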
Thanks gemini, added a comment
LGTM, thanks for fixing this!
Thanks!
I hit a disk-space error. Let me skip this for now, until we have a larger disk for the TPU CI Runner.
Let me merge #19115 with this PR. These two PRs should fix all the issues now.
This pull request has merge conflicts that must be resolved before it can be merged.
Hi @sarckk, thank you for supporting cross-layer KV sharing for the TPU backend. I found the tests are failing on TPU CI in the original PR (CI log). TPU CI has been soft-failing, so it shows the green check mark even if it fails (I know, it's confusing!). I will disable it in this PR to bring TPU CI back to green. Thanks!
@lsy323 thanks for the heads up! Sorry for breaking it, I'll put up a fix soon.
Hi @sarckk, thank you for taking a look! If you don't have a TPU VM on hand, let me know; I'd be happy to test it out for you. The error I got locally is as follows. It seems there are two issues:
`tests/tpu/test_compilation.py` checks the FX graph dump from `torch.compile`. It used to find dump files named like `__compiled_fn_15.Captured_Graph.0.py`, but with the latest PyTorch nightly this dump is no longer generated. Instead, there are dumps like `__compiled_fn_13.Forward_graph.0.py`, which contain the captured compiled code for the `forward` function of the compiled `nn.Module`. Let's check for `__compiled_fn_*.Forward_graph.*.py` in the tests instead.
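For context, here is a hedged, self-contained sketch of how such dumps can be produced and located. It assumes depyf's `prepare_debug` context manager for dumping `torch.compile` artifacts; the model, input, and helper names are illustrative, and the exact setup in the TPU test may differ:

```python
# Hedged sketch: produce torch.compile dumps with depyf, then glob for the
# new "Forward_graph" file name that recent PyTorch nightlies emit.
import glob
import os
import tempfile

import depyf
import torch


def collect_forward_graph_sources(model: torch.nn.Module,
                                  example_input: torch.Tensor) -> list[str]:
    with tempfile.TemporaryDirectory() as temp_dir:
        # depyf dumps torch.compile artifacts (including per-function
        # graph files) into the given directory.
        with depyf.prepare_debug(temp_dir):
            compiled = torch.compile(model)
            compiled(example_input)
        # Recent nightlies dump the forward graph as
        # "__compiled_fn*Forward_graph*.py" (previously "*Captured*").
        dump_files = sorted(
            glob.glob(os.path.join(temp_dir,
                                   "__compiled_fn*Forward_graph*.py")))
        # Read contents while the temporary directory still exists.
        return [open(path).read() for path in dump_files]
```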