Unit Test for run_dp_sharded_vision_model #19103
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Hello @cryptopic, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team,
Gemini here, providing a summary for this pull request authored by cryptopic.
The primary goal of this PR is to introduce a unit test for the run_dp_sharded_vision_model function. This function is designed to execute a vision model in a data-parallel sharded fashion across multiple GPUs. The new test aims to verify that the output produced by this sharded execution is identical (or very close) to the output obtained by running the same model directly on the full input batch without sharding.
The test is implemented in tests/multimodal/test_utils.py and uses a simple torch.nn.Module (SimpleLinearModel) for demonstration. It leverages pytest's parametrization and a multi-GPU test utility to run the test across 2 GPUs with varying batch sizes (1, 4, and 5) to ensure correctness under different input conditions.
Highlights
- New Unit Test: Adds a dedicated unit test (`test_run_dp_sharded_vision_model`) for the `run_dp_sharded_vision_model` function.
- Correctness Verification: The test verifies that the output of the sharded vision model execution matches the output of direct model execution on the full batch.
- Multi-GPU Testing: The test is configured to run on 2 GPUs using `pytest` fixtures and `torch.multiprocessing`.
- Batch Size Coverage: Includes test cases for batch sizes 1, 4, and 5 to check behavior with single, small, and odd-sized batches.
- Simple Test Model: Introduces a basic `SimpleLinearModel` class for use in the test.
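For reference, a minimal stand-in "vision model" along the lines of `SimpleLinearModel` might look like the sketch below. This is an illustration only; the exact dimensions and class body in the PR may differ:

```python
import torch
import torch.nn as nn


class SimpleLinearModel(nn.Module):
    """A minimal stand-in vision model: flattens each image and applies
    a single linear layer. (Hypothetical sketch of the PR's test model.)"""

    def __init__(self, input_dim: int = 3 * 224 * 224, output_dim: int = 32):
        super().__init__()
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) -> (batch, output_dim)
        return self.linear(x.flatten(start_dim=1))
```

A model this small keeps the multi-GPU test fast while still exercising the real sharding and gather logic.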
Changelog
- tests/multimodal/test_utils.py
  - Added necessary imports for distributed testing (`torch.distributed`, `torch.multiprocessing`, `vllm.distributed` utilities).
  - Introduced `SimpleLinearModel`, a basic `torch.nn.Module` for testing vision model execution.
  - Added the `test_run_dp_sharded_vision_model` function, decorated with `@multi_gpu_test(num_gpus=2)` and `@pytest.mark.parametrize` for batch sizes.
  - Implemented the core test logic in `run_dp_sharded_vision_model_vs_direct`, which sets up the distributed environment, runs the model directly and sharded, and asserts the equality of outputs using `torch.allclose`.
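The odd batch size (5) matters because with 2 GPUs the batch must be padded before it can be split evenly across ranks. The per-rank sharding logic can be sketched roughly as below; `shard_for_rank` is a hypothetical helper for illustration, not the actual vLLM implementation:

```python
import math
import torch


def shard_for_rank(x: torch.Tensor, rank: int, world_size: int) -> torch.Tensor:
    """Pad the batch to a multiple of world_size, then return this rank's slice."""
    per_rank = math.ceil(x.shape[0] / world_size)
    pad = per_rank * world_size - x.shape[0]
    if pad:
        # zero-pad the batch dimension so every rank gets an equal-sized shard
        x = torch.cat([x, x.new_zeros((pad, *x.shape[1:]))])
    return x[rank * per_rank:(rank + 1) * per_rank]
```

With batch size 5 and 2 ranks, each rank receives 3 rows, the last of which on rank 1 is padding that must be dropped after the gather, which is exactly the edge case the odd-sized batch exercises.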
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please flag any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Test runs on two GPUs,
Sharded vision, outputs true.
Batch sizes vary,
Results don't tarry,
Code correctness sees us through.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces a unit test for run_dp_sharded_vision_model, which is a valuable addition for ensuring the correctness of this distributed utility. The test is well-designed, covering various batch sizes and correctly comparing the sharded execution against a direct execution.
The use of torch.multiprocessing.spawn and the setup of the distributed environment within the test processes are standard and look correct. The SimpleLinearModel is a good choice for a lightweight vision model for this test.
I have one suggestion regarding the hardcoded MASTER_PORT for increased robustness in diverse testing environments.
Summary of Findings
- Hardcoded `MASTER_PORT`: The test uses a hardcoded `MASTER_PORT` (`'12345'`) for initializing the distributed environment. This could lead to port conflicts in CI setups where multiple tests run concurrently. It's recommended to use a dynamically assigned port, for example via `vllm.utils.get_open_port()`.
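The suggested fix can be sketched as follows. Here `get_open_port` is implemented locally with the standard library to mirror what `vllm.utils.get_open_port` presumably does (an assumption about its behavior): bind to port 0 so the OS assigns a free port.

```python
import os
import socket


def get_open_port() -> int:
    # Bind to port 0 so the OS picks an unused port; this mirrors the
    # behavior assumed of vllm.utils.get_open_port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


# Use a dynamic port instead of a hardcoded '12345'
os.environ["MASTER_PORT"] = str(get_open_port())
```

Because the port is chosen at runtime, concurrently running tests no longer race for the same hardcoded port.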
Merge Readiness
The PR is in good shape and the test logic is sound. Addressing the point about the hardcoded MASTER_PORT would enhance the robustness of the test. Once that's considered or addressed, this PR should be ready for merging. I am not authorized to approve pull requests, so please ensure other reviewers take a look before merging.
Summary: Add unit test for run_dp_sharded_vision_model, following up on vllm-project#18368

Test: `pytest tests/multimodal/test_utils.py -k "test_run_dp_sharded_vision_model"` → `= 3 passed, 44 deselected, 5 warnings in 37.76s =`

Signed-off-by: Siqi Yan <siqi@meta.com>
Thanks for adding this unit test!
tests/multimodal/test_utils.py (outdated diff)
```python
vision_model = SimpleLinearModel()

# Run the model directly on the full input
with torch.no_grad():
```
nit: use `torch.inference_mode()` here instead of `torch.no_grad()`.
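For context on the nit: `torch.inference_mode()` disables autograd tracking more aggressively than `torch.no_grad()` and can be slightly faster for pure inference. A minimal illustration:

```python
import torch

model = torch.nn.Linear(8, 4)
x = torch.randn(2, 8)

# inference_mode() skips autograd bookkeeping entirely; tensors created
# inside it are "inference tensors" and cannot be used in autograd later.
with torch.inference_mode():
    out = model(x)
```

For a correctness test that never calls `backward()`, the two are interchangeable, which is why this is only a nit.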
```python
# Run the model through the sharded function
with torch.no_grad():
    sharded_output = run_dp_sharded_vision_model(image_input, vision_model)
```
we should assert `get_tensor_model_parallel_world_size() == world_size`
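A guard along these lines would catch a mismatched distributed setup before the sharded path runs. The sketch below uses plain `torch.distributed` on the `gloo` backend with a single process, purely for illustration; the real test uses vLLM's tensor-parallel helpers and spawns one process per GPU:

```python
import os
import torch.distributed as dist

# Hypothetical single-process setup; the real test spawns one process per GPU.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29555")
expected_world_size = 1

dist.init_process_group(backend="gloo", rank=0, world_size=expected_world_size)

# Fail fast if the process group does not match what the test expects.
actual_world_size = dist.get_world_size()
assert actual_world_size == expected_world_size

dist.destroy_process_group()
```

Without such an assertion, a misconfigured process group could silently make the sharded run degenerate to a single-rank run and the test would pass vacuously.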
Overall, looks good. Please address the comment before landing.
Signed-off-by: Siqi Yan <siqi@meta.com>
Purpose
Add unit test for run_dp_sharded_vision_model, following up on #18368
Test Plan
pytest tests/multimodal/test_utils.py -k "test_run_dp_sharded_vision_model"
Test Result
= 3 passed, 44 deselected, 5 warnings in 37.76s =
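At its core, the test asserts that the sharded path and the direct path agree via `torch.allclose`. With a single rank the sharded call reduces to a plain forward pass, so the heart of the comparison can be sketched (without the distributed setup, and with a stand-in linear model) as:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)   # stand-in for the test's SimpleLinearModel
image_input = torch.randn(5, 16)

with torch.inference_mode():
    direct_output = model(image_input)
    # Stand-in for run_dp_sharded_vision_model(image_input, model);
    # with a single rank the sharded execution is just a direct forward pass.
    sharded_output = model(image_input)

assert torch.allclose(direct_output, sharded_output, atol=1e-5)
```

The real test runs the two paths in spawned per-GPU processes, where floating-point reductions can introduce tiny differences, hence `torch.allclose` rather than exact equality.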