[Doc] Support kimi-k2-w8a8 #2162
Merged
@@ -13,4 +13,5 @@ multi_npu_qwen3_moe
multi_npu_quantization
single_node_300i
multi_node
multi_node_kimi
:::
@@ -0,0 +1,153 @@
# Multi-Node-DP (Kimi-K2)

## Verify Multi-Node Communication Environment

Refer to the verification process in [multi_node.md](https://vllm-ascend.readthedocs.io/en/latest/tutorials/multi_node.html#verification-process).
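Those verification steps come down to checking the NPU NIC link state and cross-node IP reachability with `hccn_tool`. A minimal sketch of such checks, assuming 16 NPUs per node (adjust the range for A2 nodes with 8 cards) and that `hccn_tool` is available on the host:

```shell
# Check the link status of each NPU NIC (should report UP)
for i in {0..15}; do hccn_tool -i $i -link -g; done

# Show the IP configured on each NPU NIC
for i in {0..15}; do hccn_tool -i $i -ip -g; done

# Ping an NPU NIC IP on the peer node to confirm cross-node reachability
# (replace the address with a real NPU NIC IP of the other node)
hccn_tool -i 0 -ping -g address 10.0.0.1
```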
## Run with docker

Assume you have two Atlas 800 A3 (64G*16) nodes (or four A2 (64G*8) nodes) and want to deploy the `Kimi-K2-Instruct-W8A8` quantized model across multiple nodes.
```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:|vllm_ascend_version|
export NAME=vllm-ascend

# Run the container using the defined variables
# Note: if you are running Docker with a bridge network, please expose the ports needed for multi-node communication in advance
docker run --rm \
--name $NAME \
--net=host \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci8 \
--device /dev/davinci9 \
--device /dev/davinci10 \
--device /dev/davinci11 \
--device /dev/davinci12 \
--device /dev/davinci13 \
--device /dev/davinci14 \
--device /dev/davinci15 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /mnt/sfs_turbo/.cache:/home/cache \
-it $IMAGE bash
```
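Once inside the container, a quick sanity check that all NPUs are visible (the expected device count depends on your node type) can be done with the mounted `npu-smi` tool:

```shell
# List the NPUs visible inside the container
npu-smi info
```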
Run the following scripts on the two nodes respectively.

:::{note}
Before launching the inference server, ensure the following environment variables are set for multi-node communication.
:::
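To find the `nic_name` and `local_ip` values used in the scripts below, standard Linux tools are enough (a quick sketch; interface names differ per machine):

```shell
# Show IPv4 addresses of all interfaces; pick the interface on the inter-node network
ip -4 addr show
# or
ifconfig
```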
**node0**
```shell
#!/bin/sh

# nic_name is the network interface name corresponding to local_ip
# both can be obtained through ifconfig
nic_name="xxxx"
local_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024

# The w8a8 weights can be obtained from https://www.modelscope.cn/models/vllm-ascend/Kimi-K2-Instruct-W8A8
# If you want to do the quantization manually, please refer to https://vllm-ascend.readthedocs.io/en/latest/user_guide/feature_guide/quantization.html
vllm serve /home/cache/weights/Kimi-K2-Instruct-W8A8 \
--host 0.0.0.0 \
--port 8004 \
--data-parallel-size 4 \
--api-server-count 2 \
--data-parallel-size-local 2 \
--data-parallel-address $local_ip \
--data-parallel-rpc-port 13389 \
--seed 1024 \
--served-model-name kimi \
--quantization ascend \
--tensor-parallel-size 8 \
--enable-expert-parallel \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 4096 \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.9 \
--additional-config '{"ascend_scheduler_config":{"enabled":true},"torchair_graph_config":{"enabled":true}}'
```
**node1**
```shell
#!/bin/sh

nic_name="xxxx"
local_ip="xxxx"
# node0_ip is the IP address of node0 (the data-parallel master)
node0_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024

vllm serve /home/cache/weights/Kimi-K2-Instruct-W8A8 \
--host 0.0.0.0 \
--port 8004 \
--headless \
--data-parallel-size 4 \
--data-parallel-size-local 2 \
--data-parallel-start-rank 2 \
--data-parallel-address $node0_ip \
--data-parallel-rpc-port 13389 \
--seed 1024 \
--tensor-parallel-size 8 \
--served-model-name kimi \
--max-num-seqs 16 \
--max-model-len 32768 \
--quantization ascend \
--max-num-batched-tokens 4096 \
--enable-expert-parallel \
--trust-remote-code \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.92 \
--additional-config '{"ascend_scheduler_config":{"enabled":true},"torchair_graph_config":{"enabled":true}}'
```
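The API server only becomes usable once both nodes are up. A small readiness check against node0's OpenAI-compatible endpoint (a sketch; replace `{node0 ip}` with the actual address):

```shell
# Poll node0's model list endpoint until the server responds
until curl -sf http://{node0 ip}:8004/v1/models > /dev/null; do
    echo "waiting for the vllm server to come up..."
    sleep 10
done
echo "server is ready"
```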
The deployment view looks like:



Once your server is started, you can query the model with input prompts:
```shell
curl http://{node0 ip}:8004/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "kimi",
        "prompt": "The future of AI is",
        "max_tokens": 50,
        "temperature": 0
    }'
```
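Since Kimi-K2-Instruct is a chat model, the OpenAI-compatible chat endpoint can be queried as well; a minimal sketch (the message content is illustrative):

```shell
curl http://{node0 ip}:8004/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "kimi",
        "messages": [
            {"role": "user", "content": "Give a one-sentence summary of what vLLM does."}
        ],
        "max_tokens": 50,
        "temperature": 0
    }'
```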
how about tp16?
dp2 tp16?
Or do you mean the Ray backend?
You run the model on A3 with 16 cards per node, so why not set TP to 16? DP4+TP8 or DP2+TP16: which is better?