[Attention] Blackwell FP8 MLA support with CUTLASS_MLA backend #23289
Conversation
Is it faster than FA3?

@MatthewBonanni This test is failing on main as well; seeing this in at least two recent PRs.

@elvischenv Hmm, thanks for bringing this up. It looks like it's passing on the most recent nightly.
Purpose
Enable FP8 KV cache support on Blackwell in the CUTLASS_MLA backend.
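To illustrate what an FP8 KV cache involves, here is a simplified sketch of per-tensor FP8-style quantization (not vLLM's actual implementation): values are scaled into the representable range of the E4M3 format (max finite value 448) and clamped. Real FP8 quantization also rounds the mantissa to 3 bits, which this simplification omits.

```python
# Illustrative sketch only, not vLLM's kernel code.
E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def quantize_kv(values):
    """Scale a KV-cache block into the E4M3 range; return (quantized, scale)."""
    amax = max(abs(v) for v in values)
    scale = amax / E4M3_MAX if amax > 0 else 1.0
    q = [max(-E4M3_MAX, min(E4M3_MAX, v / scale)) for v in values]
    return q, scale

def dequantize_kv(q, scale):
    """Recover approximate original values from the quantized block."""
    return [v * scale for v in q]

kv = [0.03, -1.7, 2.5, 0.0]
q, scale = quantize_kv(kv)
restored = dequantize_kv(q, scale)
```

Because this sketch skips mantissa rounding, the round trip is exact up to float error; actual FP8 storage trades that precision for halved KV-cache memory and bandwidth.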
Test Plan
Correctness
VLLM_ATTENTION_BACKEND=CUTLASS_MLA lm_eval --model vllm --model_args '{"pretrained": "deepseek-ai/DeepSeek-V2-Lite-Chat", "trust_remote_code": true, "kv_cache_dtype": "fp8"}' --tasks gsm8k --batch_size auto

Performance
V2 Lite

VLLM_ATTENTION_BACKEND=CUTLASS_MLA vllm bench throughput --model=deepseek-ai/DeepSeek-V2-Lite-Chat --dataset-name=random --input-len=8192 --output-len=1024 --num-prompts=1000 --kv-cache-dtype=fp8

V2 (with EP4)

VLLM_ATTENTION_BACKEND=CUTLASS_MLA vllm bench throughput --model=deepseek-ai/DeepSeek-V2 --dataset-name=random --input-len=8192 --output-len=1024 --num-prompts=1000 --kv-cache-dtype=fp8 --tensor-parallel-size 4 --enable-expert-parallel

Test Result
Correctness

With kv_cache_dtype=auto:
With kv_cache_dtype=fp8:

Performance
V2 Lite:

With --kv-cache-dtype=auto: Throughput: 4.20 requests/s, 38668.98 total tokens/s, 4296.91 output tokens/s
With --kv-cache-dtype=fp8: Throughput: 4.74 requests/s, 43678.48 total tokens/s, 4853.57 output tokens/s

V2:

With --kv-cache-dtype=auto: Throughput: 0.81 requests/s, 7509.07 total tokens/s, 834.41 output tokens/s
With --kv-cache-dtype=fp8: Throughput: 1.08 requests/s, 9971.05 total tokens/s, 1107.99 output tokens/s

(Optional) Documentation Update
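The throughput figures reported above correspond to roughly a 13% gain on V2-Lite and a 33% gain on V2 from the FP8 KV cache. A quick check of that arithmetic (using the requests/s numbers from the results):

```python
# Speedup of the fp8 KV cache over the auto (unquantized) baseline,
# computed from the requests/s figures reported in the test results.
def speedup(fp8_rps, auto_rps):
    return fp8_rps / auto_rps

v2_lite = speedup(4.74, 4.20)  # DeepSeek-V2-Lite-Chat
v2 = speedup(1.08, 0.81)       # DeepSeek-V2 (TP4 + expert parallel)
```

The larger model sees the bigger relative gain, consistent with its decode being more KV-cache-bandwidth bound.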
Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model.