Conversation

@cryptopic (Contributor) commented Jun 3, 2025

Purpose

Add unit test for run_dp_sharded_vision_model, following up on #18368

Test Plan

pytest tests/multimodal/test_utils.py -k "test_run_dp_sharded_vision_model"

Test Result

========= 3 passed, 44 deselected, 5 warnings in 37.76s =========

github-actions bot commented Jun 3, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

gemini-code-assist bot (Contributor) left a comment

Hello @cryptopic, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team,

Gemini here, providing a summary for this pull request authored by cryptopic.

The primary goal of this PR is to introduce a unit test for the run_dp_sharded_vision_model function. This function is designed to execute a vision model in a data-parallel sharded fashion across multiple GPUs. The new test aims to verify that the output produced by this sharded execution is identical (or very close) to the output obtained by running the same model directly on the full input batch without sharding.

The test is implemented in tests/multimodal/test_utils.py and uses a simple torch.nn.Module (SimpleLinearModel) for demonstration. It leverages pytest's parametrization and a multi-GPU test utility to run the test across 2 GPUs with varying batch sizes (1, 4, and 5) to ensure correctness under different input conditions.

Highlights

  • New Unit Test: Adds a dedicated unit test (test_run_dp_sharded_vision_model) for the run_dp_sharded_vision_model function.
  • Correctness Verification: The test verifies that the output of the sharded vision model execution matches the output of direct model execution on the full batch.
  • Multi-GPU Testing: The test is configured to run on 2 GPUs using pytest fixtures and torch.multiprocessing.
  • Batch Size Coverage: Includes test cases for different batch sizes (1, 4, 5) to check behavior with single, small, and odd-sized batches.
  • Simple Test Model: Introduces a basic SimpleLinearModel class for use in the test.
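
An aside on the odd-sized case: with batch size 5 on 2 GPUs, a data-parallel shard cannot split the batch evenly. A common approach is to pad the batch to a multiple of the world size; the sketch below is a generic illustration of that idea, not necessarily what run_dp_sharded_vision_model does internally.

```python
import torch


def shard_for_rank(x: torch.Tensor, rank: int, world_size: int) -> torch.Tensor:
    # Pad the batch to a multiple of world_size, then take this rank's
    # contiguous slice; after gathering outputs across ranks, the rows
    # corresponding to padding are dropped again.
    per_rank = (x.shape[0] + world_size - 1) // world_size  # ceil division
    pad = per_rank * world_size - x.shape[0]
    if pad:
        x = torch.cat([x, x.new_zeros(pad, *x.shape[1:])])
    return x[rank * per_rank:(rank + 1) * per_rank]
```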

Changelog

  • tests/multimodal/test_utils.py
    • Added necessary imports for distributed testing (torch.distributed, torch.multiprocessing, vllm.distributed utilities).
    • Introduced SimpleLinearModel, a basic torch.nn.Module for testing vision model execution.
    • Added the test_run_dp_sharded_vision_model function, decorated with @multi_gpu_test(num_gpus=2) and @pytest.mark.parametrize for batch sizes.
    • Implemented the core test logic in run_dp_sharded_vision_model_vs_direct, which sets up the distributed environment, runs the model directly and sharded, and asserts the equality of outputs using torch.allclose.
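
For reviewers who want the gist without opening the diff, here is a rough sketch of the shape of such a test. Import paths, tensor dimensions, and the tolerance are illustrative guesses based on this summary, not a copy of the PR's code:

```python
import pytest
import torch
import torch.nn as nn

# Assumed import path; the function under test is exercised from
# tests/multimodal/test_utils.py.
from vllm.multimodal.utils import run_dp_sharded_vision_model


class SimpleLinearModel(nn.Module):
    """Lightweight stand-in for a vision model."""

    def __init__(self, input_dim: int = 3 * 224 * 224, output_dim: int = 32):
        super().__init__()
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Flatten all non-batch dimensions before the projection.
        return self.linear(x.flatten(start_dim=1))


def run_dp_sharded_vision_model_vs_direct(rank: int, world_size: int,
                                          batch_size: int):
    # Distributed/TP initialization elided; see the port discussion below.
    torch.manual_seed(0)
    image_input = torch.randn(batch_size, 3, 224, 224)
    vision_model = SimpleLinearModel()

    with torch.no_grad():
        direct_output = vision_model(image_input)
        sharded_output = run_dp_sharded_vision_model(image_input, vision_model)

    # The sharded path should reproduce the direct result up to
    # floating-point tolerance.
    assert torch.allclose(direct_output, sharded_output, atol=1e-5)


# @multi_gpu_test(num_gpus=2)  # vLLM test helper; import omitted here
@pytest.mark.parametrize("batch_size", [1, 4, 5])
def test_run_dp_sharded_vision_model(batch_size: int):
    world_size = 2
    torch.multiprocessing.spawn(run_dp_sharded_vision_model_vs_direct,
                                args=(world_size, batch_size),
                                nprocs=world_size)
```
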
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


Test runs on two GPUs,
Sharded vision, outputs true.
Batch sizes vary,
Results don't tarry,
Code correctness sees us through.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify mergify bot added the multi-modality Related to multi-modality (#4194) label Jun 3, 2025
gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces a unit test for run_dp_sharded_vision_model, which is a valuable addition for ensuring the correctness of this distributed utility. The test is well-designed, covering various batch sizes and correctly comparing the sharded execution against a direct execution.

The use of torch.multiprocessing.spawn and the setup of the distributed environment within the test processes are standard and look correct. The SimpleLinearModel is a good choice for a lightweight vision model for this test.

I have one suggestion regarding the hardcoded MASTER_PORT for increased robustness in diverse testing environments.

Summary of Findings

  • Hardcoded MASTER_PORT: The test uses a hardcoded MASTER_PORT ('12345') for initializing the distributed environment. This could potentially lead to port conflicts in more complex CI setups where multiple tests might run concurrently. It's recommended to use a dynamically assigned port, for example, by using vllm.utils.get_open_port().
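
A minimal sketch of that suggestion, assuming a worker-side setup helper (setup_distributed is hypothetical; vllm.utils.get_open_port is the helper the finding names):

```python
import os

import torch.distributed as dist
from vllm.utils import get_open_port


def setup_distributed(rank: int, world_size: int, port: int) -> None:
    # Use a dynamically assigned free port instead of a hardcoded '12345'
    # so concurrent CI jobs on the same host cannot collide.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = str(port)
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)


# Parent process: pick the port once, then pass it to every spawned worker.
# port = get_open_port()
# torch.multiprocessing.spawn(worker_fn, args=(world_size, port),
#                             nprocs=world_size)
```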

Merge Readiness

The PR is in good shape and the test logic is sound. Addressing the point about the hardcoded MASTER_PORT would enhance the robustness of the test. Once that's considered or addressed, this PR should be ready for merging. I am not authorized to approve pull requests, so please ensure other reviewers take a look before merging.

Summary:
Add unit test for run_dp_sharded_vision_model, following up on vllm-project#18368

  pytest tests/multimodal/test_utils.py -k "test_run_dp_sharded_vision_model"

========= 3 passed, 44 deselected, 5 warnings in 37.76s =========

Signed-off-by: Siqi Yan <siqi@meta.com>
@ywang96 (Member) left a comment

Thanks for adding this unit test!

@ywang96 ywang96 added the ready ONLY add when PR is ready to merge/full CI is needed label Jun 4, 2025
vision_model = SimpleLinearModel()

# Run the model directly on the full input
with torch.no_grad():
Collaborator commented:

nit: inference_mode():
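
For context, the swap the nit suggests (torch.inference_mode is a standard PyTorch context manager; the variable names are reused from the hunk above):

```python
# inference_mode() disables autograd bookkeeping more thoroughly than
# no_grad() (it also skips view/version-counter tracking), making it the
# preferred context for inference-only test code.
with torch.inference_mode():
    direct_output = vision_model(image_input)
```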


# Run the model through the sharded function
with torch.no_grad():
    sharded_output = run_dp_sharded_vision_model(image_input, vision_model)
Collaborator commented:

we should assert get_tensor_model_parallel_world_size() == world_size
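
A sketch of that suggested guard, assuming the helper is imported from the vllm.distributed utilities already used by the test module:

```python
from vllm.distributed import get_tensor_model_parallel_world_size

# Fail fast if the distributed environment came up with a different
# number of ranks than the test expects.
assert get_tensor_model_parallel_world_size() == world_size
```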

@houseroad (Collaborator) left a comment

Overall, looks good. Please address the comment before landing.

Signed-off-by: Siqi Yan <siqi@meta.com>
@DarkLight1337 DarkLight1337 merged commit f168b85 into vllm-project:main Jun 6, 2025
44 checks passed