
Commit 865d254

Add unit test local cpu guide and enable base testcase
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
1 parent 59237ea commit 865d254

File tree

9 files changed: +236 -51 lines changed

docs/source/developer_guide/contribution/testing.md

Lines changed: 104 additions & 11 deletions
@@ -9,8 +9,48 @@ The fastest way to setup test environment is to use the main branch container im
 :::::{tab-set}
 :sync-group: e2e

-::::{tab-item} Single card
+::::{tab-item} Local (CPU)
 :selected:
+:sync: cpu
+
+You can run the unit tests on CPU with the following steps:
+
+```{code-block} bash
+   :substitutions:
+
+cd ~/vllm-project/
+# ls
+# vllm vllm-ascend
+
+# Use mirror to speedup download
+# docker pull quay.nju.edu.cn/ascend/cann:|cann_image_tag|
+export IMAGE=quay.io/ascend/cann:|cann_image_tag|
+docker run --rm --name vllm-ascend-ut \
+    -v $(pwd):/vllm-project \
+    -v ~/.cache:/root/.cache \
+    -ti $IMAGE bash
+
+# (Optional) Configure mirror to speedup download
+pip config set global.index-url https://mirrors.huaweicloud.com/repository/pypi/simple/
+
+apt-get update -y
+apt-get install -y python3-pip git vim wget net-tools gcc g++ cmake libnuma-dev curl gnupg2
+
+# Install vllm
+cd /vllm-project/vllm
+VLLM_TARGET_DEVICE=empty python3 -m pip -v install .
+
+# Install vllm-ascend
+cd /vllm-project/vllm-ascend
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/$(uname -m)-linux/devlib
+python3 -m pip install -r requirements-dev.txt
+export PIP_EXTRA_INDEX_URL=https://mirrors.huaweicloud.com/ascend/repos/pypi
+python3 -m pip install -v .
+```
+
+::::
+
+::::{tab-item} Single card
 :sync: single

 ```{code-block} bash
@@ -36,6 +76,16 @@ docker run --rm \
   -it $IMAGE bash
 ```

+After starting the container, you should install the required packages:
+
+```bash
+# Prepare
+pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
+
+# Install required packages
+pip install -r requirements-dev.txt
+```
+
 ::::

 ::::{tab-item} Multi cards
@@ -63,20 +113,23 @@ docker run --rm \
   -p 8000:8000 \
   -it $IMAGE bash
 ```
-::::
-
-:::::

 After starting the container, you should install the required packages:

 ```bash
+cd /vllm-workspace/vllm-ascend/
+
 # Prepare
 pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple

 # Install required packages
 pip install -r requirements-dev.txt
 ```

+::::
+
+:::::
+
 ## Running tests

 ### Unit test
@@ -89,14 +142,48 @@ There are several principles to follow when writing unit tests:
 - Example: [tests/ut/test_ascend_config.py](https://github.com/vllm-project/vllm-ascend/blob/main/tests/ut/test_ascend_config.py).
 - You can run the unit tests using `pytest`:

-```bash
-cd /vllm-workspace/vllm-ascend/
-# Run all single card the tests
-pytest -sv tests/ut
+:::::{tab-set}
+:sync-group: e2e

-# Run
-pytest -sv tests/ut/test_ascend_config.py
-```
+::::{tab-item} Local (CPU)
+:sync: cpu
+
+```bash
+# Run unit tests
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/$(uname -m)-linux/devlib
+VLLM_USE_V1=1 TORCH_DEVICE_BACKEND_AUTOLOAD=0 pytest -sv tests/ut
+```
+
+::::
+
+::::{tab-item} Single card
+:selected:
+:sync: single
+
+```bash
+cd /vllm-workspace/vllm-ascend/
+# Run all the single card tests
+pytest -sv tests/ut
+
+# Run a single test
+pytest -sv tests/ut/test_ascend_config.py
+```
+::::
+
+::::{tab-item} Multi cards test
+:sync: multi
+
+```bash
+cd /vllm-workspace/vllm-ascend/
+# Run all the single card tests
+pytest -sv tests/ut
+
+# Run a single test
+pytest -sv tests/ut/test_ascend_config.py
+```
+::::
+
+:::::

 ### E2E test

@@ -106,6 +193,12 @@ locally.
 :::::{tab-set}
 :sync-group: e2e

+::::{tab-item} Local (CPU)
+:sync: cpu
+
+You can't run e2e tests on CPU.
+::::
+
 ::::{tab-item} Single card
 :selected:
 :sync: single

tests/ut/base.py

Lines changed: 15 additions & 0 deletions
@@ -1,3 +1,18 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# This file is a part of the vllm-ascend project.
+#
+
 import unittest

 from vllm_ascend.utils import adapt_patch
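The `TestBase` class this commit standardizes on lives in `tests/ut/base.py`, which imports `adapt_patch` from `vllm_ascend.utils` (presumably so the vllm-ascend patches are applied before the CPU-only tests run). As a rough, hypothetical sketch of how a new unit test would opt in (the class name and assertion below are illustrative, not part of this commit):

```python
# Hypothetical example (not part of this commit): a minimal unit test that
# builds on the shared TestBase from tests/ut/base.py.
from tests.ut.base import TestBase


class TestExample(TestBase):

    def test_addition(self):
        # TestBase is used as a drop-in for unittest.TestCase throughout this
        # commit, so the familiar assert helpers are available.
        self.assertEqual(1 + 1, 2)
```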

tests/ut/distributed/test_parallel_state.py

Lines changed: 17 additions & 2 deletions
@@ -1,16 +1,31 @@
-import unittest
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# This file is a part of the vllm-ascend project.
+#
+
 from unittest.mock import MagicMock, patch

 import pytest
 from vllm.distributed.parallel_state import GroupCoordinator

 import vllm_ascend
+from tests.ut.base import TestBase
 from vllm_ascend.distributed.parallel_state import (
     destory_ascend_model_parallel, get_ep_group, get_etp_group,
     init_ascend_model_parallel, model_parallel_initialized)


-class TestParallelState(unittest.TestCase):
+class TestParallelState(TestBase):

     @patch('vllm_ascend.distributed.parallel_state._EP',
            new_callable=lambda: MagicMock(spec=GroupCoordinator))

tests/ut/ops/expert_map.json

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+{
+    "moe_layer_count":
+    1,
+    "layer_list": [{
+        "layer_id":
+        0,
+        "device_count":
+        2,
+        "device_list": [{
+            "device_id": 0,
+            "device_expert": [7, 2, 0, 3, 5]
+        }, {
+            "device_id": 1,
+            "device_expert": [6, 1, 4, 7, 2]
+        }]
+    }]
+}

tests/ut/ops/test_expert_load_balancer.py

Lines changed: 26 additions & 28 deletions
@@ -1,14 +1,30 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# This file is a part of the vllm-ascend project.
+#
+
 # fused moe ops test will hit the infer_schema error, we need add the patch
 # here to make the test pass.
 import vllm_ascend.patch.worker.patch_common.patch_utils  # type: ignore[import]  # isort: skip  # noqa

 import json
-import unittest
+import os
 from typing import List, TypedDict
 from unittest import mock

 import torch

+from tests.ut.base import TestBase
 from vllm_ascend.ops.expert_load_balancer import ExpertLoadBalancer

@@ -28,31 +44,13 @@ class MockData(TypedDict):
     layer_list: List[Layer]


-MOCK_DATA: MockData = {
-    "moe_layer_count":
-    1,
-    "layer_list": [{
-        "layer_id":
-        0,
-        "device_count":
-        2,
-        "device_list": [{
-            "device_id": 0,
-            "device_expert": [7, 2, 0, 3, 5]
-        }, {
-            "device_id": 1,
-            "device_expert": [6, 1, 4, 7, 2]
-        }]
-    }]
-}
-
-
-class TestExpertLoadBalancer(unittest.TestCase):
+class TestExpertLoadBalancer(TestBase):

     def setUp(self):
-        json_file = "expert_map.json"
-        with open(json_file, 'w') as f:
-            json.dump(MOCK_DATA, f)
+        _TEST_DIR = os.path.dirname(__file__)
+        json_file = _TEST_DIR + "/expert_map.json"
+        with open(json_file, 'r') as f:
+            self.expert_map: MockData = json.load(f)

         self.expert_load_balancer = ExpertLoadBalancer(json_file,
                                                        global_expert_num=8)
@@ -62,9 +60,9 @@ def test_init(self):
         self.assertIsInstance(self.expert_load_balancer.expert_map_tensor,
                               torch.Tensor)
         self.assertEqual(self.expert_load_balancer.layers_num,
-                         MOCK_DATA["moe_layer_count"])
+                         self.expert_map["moe_layer_count"])
         self.assertEqual(self.expert_load_balancer.ranks_num,
-                         MOCK_DATA["layer_list"][0]["device_count"])
+                         self.expert_map["layer_list"][0]["device_count"])

     def test_generate_index_dicts(self):
         tensor_2d = torch.tensor([[7, 2, 0, 3, 5], [6, 1, 4, 7, 2]])
@@ -142,6 +140,6 @@ def test_get_rank_log2phy_map(self):
     def test_get_global_redundant_expert_num(self):
         redundant_expert_num = self.expert_load_balancer.get_global_redundant_expert_num(
         )
-        expected_redundant_expert_num = len(MOCK_DATA["layer_list"][0]["device_list"][0]["device_expert"]) * \
-            MOCK_DATA["layer_list"][0]["device_count"] - 8
+        expected_redundant_expert_num = len(self.expert_map["layer_list"][0]["device_list"][0]["device_expert"]) * \
+            self.expert_map["layer_list"][0]["device_count"] - 8
         self.assertEqual(redundant_expert_num, expected_redundant_expert_num)
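With this change the mock expert map lives in `tests/ut/ops/expert_map.json` and is read back in `setUp`. A standalone sketch of the redundancy arithmetic the last test verifies, driven by the same fixture (the repo-relative path and variable names here are assumptions for illustration):

```python
import json
import os

# Illustrative sketch: load the expert_map.json fixture added by this commit
# and recompute the redundant expert count the way the test above expects it.
fixture = os.path.join("tests", "ut", "ops", "expert_map.json")  # assumed repo-relative path
with open(fixture) as f:
    expert_map = json.load(f)

layer = expert_map["layer_list"][0]
experts_per_device = len(layer["device_list"][0]["device_expert"])  # 5
physical_experts = experts_per_device * layer["device_count"]       # 5 * 2 = 10
redundant_expert_num = physical_experts - 8                         # 8 == global_expert_num
print(redundant_expert_num)  # 2
```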
tests/ut/patch/worker/patch_common/test_patch_distributed.py

Lines changed: 20 additions & 5 deletions
@@ -1,12 +1,27 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# This file is a part of the vllm-ascend project.
+#
+
 from tests.ut.base import TestBase
+from vllm_ascend.patch.worker.patch_common.patch_distributed import \
+    GroupCoordinatorPatch
+
+# import GroupCoordinator after GroupCoordinatorPatch to make base work
+from vllm.distributed.parallel_state import GroupCoordinator  # noqa isort:skip


 class TestPatchDistributed(TestBase):

     def test_GroupCoordinator_patched(self):
-        from vllm.distributed.parallel_state import GroupCoordinator
-
-        from vllm_ascend.patch.worker.patch_common.patch_distributed import \
-            GroupCoordinatorPatch
-
         self.assertIs(GroupCoordinator, GroupCoordinatorPatch)

tests/ut/patch/worker/patch_common/test_patch_sampler.py

Lines changed: 3 additions & 2 deletions
@@ -1,13 +1,14 @@
 import importlib
 import os
-import unittest
 from unittest import mock

 import torch
 from vllm.v1.sample.ops import topk_topp_sampler

+from tests.ut.base import TestBase

-class TestTopKTopPSamplerOptimize(unittest.TestCase):
+
+class TestTopKTopPSamplerOptimize(TestBase):

     @mock.patch.dict(os.environ, {"VLLM_ASCEND_ENABLE_TOPK_OPTIMIZE": "1"})
     @mock.patch("torch_npu.npu_top_k_top_p")
