Conversation

Collaborator

@Potabk Potabk commented Mar 28, 2025

What this PR does / why we need it?

  • Add a new runner to the continuous integration system and keep the original CI runner until the new runner runs stably
  • Add distributed test cases

Does this PR introduce any user-facing change?

No

How was this patch tested?

CI passed

@Potabk Potabk force-pushed the ci branch 2 times, most recently from e8aa80c to dd0a9c2 Compare March 31, 2025 01:34
@Potabk Potabk force-pushed the ci branch 4 times, most recently from 1599f82 to ebdad10 Compare April 1, 2025 02:18
@Yikun Yikun changed the title [CI]Add new runner [CI] Add new runner and enable QwQ multinpu test Apr 1, 2025
tensor_parallel_size=4,
distributed_executor_backend=distributed_executor_backend,
) as vllm_model:
vllm_model.generate_greedy(example_prompts, max_tokens)
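For context, the fragment above sits inside a distributed test roughly like the following self-contained sketch. `VllmRunner` here is a stub standing in for vllm-ascend's test wrapper around `vllm.LLM`; the model name, backend value, and stubbed output are assumptions for illustration, not the actual test code.

```python
# Minimal stand-in for the multi-NPU test fragment under review.
class VllmRunner:
    def __init__(self, model, tensor_parallel_size=1,
                 distributed_executor_backend=None):
        self.model = model
        self.tensor_parallel_size = tensor_parallel_size
        self.distributed_executor_backend = distributed_executor_backend

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # The real runner would shut down workers and free NPU memory here.
        return False

    def generate_greedy(self, prompts, max_tokens):
        # Stubbed output: one (prompt, completion) pair per prompt; the
        # real method runs greedy decoding across the tensor-parallel ranks.
        return [(p, f"[greedy, tp={self.tensor_parallel_size}]") for p in prompts]

example_prompts = ["Hello, my name is"]
max_tokens = 5
with VllmRunner("Qwen/QwQ-32B",
                tensor_parallel_size=4,
                distributed_executor_backend="mp") as vllm_model:
    outputs = vllm_model.generate_greedy(example_prompts, max_tokens)
print(outputs)
```

The context-manager shape matters for CI: it guarantees the distributed workers are torn down even when an assertion fails mid-test, so one failing case does not wedge the NPUs for the rest of the job.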
Collaborator

From our last offline discussion, I remember we also wanted to add transformers as a comparison baseline. Does it work (and work stably)?

https://github.com/vllm-project/vllm/blob/c7e63aa4d84de4f0b076d2974d30cd1cd34a4191/tests/basic_correctness/test_basic_correctness.py#L152

Collaborator Author

Will finish this in the next pull request.

fi
pip install /root/.cache/pta/torch_npu-2.5.1.dev20250320-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
- name: Run vllm-project/vllm-ascend test
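For context, the step being discussed sits in a GitHub Actions workflow roughly like this sketch; the step names and test path are illustrative assumptions, not the actual file contents.

```yaml
# Hypothetical sketch of the workflow fragment under review.
- name: Install torch_npu
  run: |
    pip install /root/.cache/pta/torch_npu-2.5.1.dev20250320-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
- name: Run vllm-project/vllm-ascend test
  run: |
    pytest -sv tests/
```

Installing the prebuilt torch_npu wheel from a local cache rather than an index keeps the runner's test step fast and pins the exact nightly build the tests were validated against.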
Collaborator

V1 engine should be tested as well.

Collaborator Author

I don't think we should test V1 like this: V1 doesn't yet fully support some features. In addition, we should mock VLLM_USE_V1 in the related test modules where needed.

Collaborator

Why? We only test V1 for the vllm-ascend tests, not the vllm tests.

Collaborator Author

I mean we will construct a pytest fixture to decide which cases need to be tested on V1.

@Potabk Potabk force-pushed the ci branch 4 times, most recently from 7b0213b to 59578d7 Compare April 7, 2025 06:40
env:
VLLM_USE_V1: 1
VLLM_WORKER_MULTIPROC_METHOD: spawn
VLLM_USE_V1: 0
Collaborator

Why was the V1 test removed?

Collaborator Author

I added it here: `@pytest.mark.parametrize("use_v1", ["1", "0"])`.
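The pattern being proposed can be sketched as follows. The helper `engine_version` is a hypothetical stand-in for the code that reads the flag; in the real suite, parametrization plus pytest's `monkeypatch` fixture would toggle vllm's engine selection per test case.

```python
import os

def engine_version() -> str:
    # Hypothetical helper: reads the VLLM_USE_V1 toggle the same way
    # engine-selection code would.
    return "V1" if os.environ.get("VLLM_USE_V1") == "1" else "V0"

# In a pytest module the same coverage would look roughly like:
#
#   import pytest
#
#   @pytest.mark.parametrize("use_v1", ["1", "0"])
#   def test_models_distributed(use_v1, monkeypatch):
#       monkeypatch.setenv("VLLM_USE_V1", use_v1)
#       ...  # run the model and compare outputs
#
# Plain-Python equivalent of what the parametrization iterates over:
results = {}
for use_v1 in ("1", "0"):
    os.environ["VLLM_USE_V1"] = use_v1
    results[use_v1] = engine_version()
print(results)
```

Using `monkeypatch.setenv` instead of assigning to `os.environ` directly has the advantage that the variable is restored after each case, so V0 and V1 runs cannot leak state into each other.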

Potabk added 6 commits April 8, 2025 01:26
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Potabk added 2 commits April 8, 2025 01:26
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
@Potabk Potabk force-pushed the ci branch 2 times, most recently from 5605150 to 83df77b Compare April 8, 2025 03:23
Signed-off-by: wangli <wangli858794774@gmail.com>
Potabk added 4 commits April 8, 2025 03:39
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
@@ -1,5 +1,8 @@
[pytest]
minversion = 6.0
markers =
Collaborator

This `markers` section is useless now, right?

Collaborator Author

These markers help pytest select which kinds of cases should run, e.g. `pytest -m multinpu`.
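For reference, registering a custom marker looks roughly like the sketch below; the marker name is taken from the comment above, and the exact entries in the real pytest.ini may differ.

```ini
; pytest.ini sketch: registering a custom marker so that
; `pytest -m multinpu` selects only the multi-NPU cases.
[pytest]
minversion = 6.0
markers =
    multinpu: tests that require multiple NPU devices
```

Tests tagged with `@pytest.mark.multinpu` are then selected by `pytest -m multinpu` and excluded by `pytest -m "not multinpu"`; registering the marker also silences pytest's unknown-marker warnings.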

@wangxiyuan
Collaborator

I'll fix the nit in a follow-up PR.

@wangxiyuan wangxiyuan merged commit afdbf77 into vllm-project:main Apr 8, 2025
14 checks passed
ttanzhiqiang pushed a commit to ttanzhiqiang/vllm-ascend that referenced this pull request Apr 27, 2025
Angazenn pushed a commit to Angazenn/vllm-ascend that referenced this pull request Oct 21, 2025