
Conversation

@cynthieye (Contributor) commented Apr 11, 2025

Optimize Rotary Positional Embeddings with Triton Kernel in vLLM

This PR enhances rotary positional embedding computation by leveraging Triton-optimized kernels from flash_attn, addressing a significant performance bottleneck observed in models like Qwen2-VL.

Background
The original PyTorch-native rotary embedding implementation (rotary_emb) consumed 40-60% of total inference latency for Qwen2-VL, particularly scaling with output token count. Profiling revealed inefficiencies in tensor reshaping and element-wise operations.
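For reference, the native path is essentially the rotation below (a sketch assumed to mirror vLLM's _apply_rotary_emb; the per-call chunking/slicing, elementwise multiplies, and concatenation are what dominate the profile):

import torch

def apply_rotary_emb_torch(x: torch.Tensor, cos: torch.Tensor,
                           sin: torch.Tensor,
                           is_neox_style: bool = True) -> torch.Tensor:
    # x: [num_tokens, num_heads, head_dim]; cos/sin: [num_tokens, head_dim // 2]
    cos = cos.unsqueeze(-2)  # broadcast over the head dimension
    sin = sin.unsqueeze(-2)
    if is_neox_style:
        # NeoX style: rotate the two halves of each head.
        x1, x2 = torch.chunk(x, 2, dim=-1)
    else:
        # GPT-J style: rotate interleaved even/odd channels.
        x1 = x[..., ::2]
        x2 = x[..., 1::2]
    o1 = x1 * cos - x2 * sin
    o2 = x2 * cos + x1 * sin
    if is_neox_style:
        return torch.cat((o1, o2), dim=-1)
    return torch.stack((o1, o2), dim=-1).flatten(-2)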

Changes
Triton Kernel Integration:

Conditionally uses flash_attn.ops.triton.rotary.apply_rotary when flash-attn>=2.0 is available.

Falls back to the native PyTorch implementation otherwise.

Dynamic Implementation Selection:

Added a _use_flash_attn flag to the RotaryEmbedding classes, which auto-detects flash_attn availability during initialization (sketched below).
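A minimal sketch of the selection logic (the _use_flash_attn flag is from this PR; the availability helper and class shape are assumptions, and apply_rotary_emb_torch is the native rotation sketched above):

import torch

def _flash_attn_rotary_available() -> bool:
    # Hypothetical probe; the real check may also verify flash-attn >= 2.0
    # and that the platform can run Triton kernels.
    try:
        from flash_attn.ops.triton.rotary import apply_rotary  # noqa: F401
        return True
    except ImportError:
        return False

class RotaryEmbedding(torch.nn.Module):
    def __init__(self, head_dim: int, is_neox_style: bool = True) -> None:
        super().__init__()
        self.head_dim = head_dim
        self.is_neox_style = is_neox_style
        # Decide once, at init time, which backend to dispatch to.
        self._use_flash_attn = _flash_attn_rotary_available()

    def _rotate(self, x: torch.Tensor, cos: torch.Tensor,
                sin: torch.Tensor) -> torch.Tensor:
        if self._use_flash_attn:
            # Triton-fused path; flash_attn expects a leading batch dim.
            from flash_attn.layers.rotary import apply_rotary_emb
            return apply_rotary_emb(x.unsqueeze(0), cos, sin,
                                    not self.is_neox_style).squeeze(0)
        # Fallback: the PyTorch-native rotation.
        return apply_rotary_emb_torch(x, cos, sin, self.is_neox_style)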

Performance Gains
Qwen2-VL: achieved a 17% end-to-end speedup when generating 150 output tokens.

Generalization: similar improvements are expected for other MRotaryEmbedding-based (mrotary) models, pending further benchmarks.

Code Compatibility
Maintains backward compatibility with existing model architectures.

Requires no user-side changes; the Triton kernel is automatically preferred when dependencies are met.

This optimization significantly reduces rotary embedding overhead while preserving numerical equivalence, making it particularly impactful for long-context and long-generation scenarios.
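One way to sanity-check the claimed numerical equivalence (an illustrative sketch assuming a CUDA device with flash-attn installed; apply_rotary_emb_torch is the native rotation sketched above):

import torch
from flash_attn.layers.rotary import apply_rotary_emb

num_tokens, num_heads, head_dim = 8, 4, 128
inv_freq = 1.0 / (10000.0 ** (torch.arange(0, head_dim, 2) / head_dim))
freqs = torch.outer(torch.arange(num_tokens), inv_freq)  # [tokens, dim/2]
cos, sin = freqs.cos().cuda(), freqs.sin().cuda()
x = torch.randn(num_tokens, num_heads, head_dim, device="cuda")

ref = apply_rotary_emb_torch(x, cos, sin, is_neox_style=True)
out = apply_rotary_emb(x.unsqueeze(0), cos, sin).squeeze(0)  # Triton path
assert torch.allclose(ref, out, atol=1e-5)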

@github-actions bot commented:

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, executing a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@cynthieye force-pushed the main branch 2 times, most recently from a115f1f to e630289 on April 11, 2025 06:59
@jeejeelee (Collaborator) commented:

Perhaps using the kernel from https://github.com/vllm-project/flash-attention would be more reasonable.

@MagnetoWang force-pushed the main branch 2 times, most recently from 39b1672 to b2379fd on April 16, 2025 07:34
@cynthieye force-pushed the main branch 6 times, most recently from c435afc to 8d6303f on April 18, 2025 06:34
@Isotr0py (Member) commented:

Shouldn't this triton kernel have been used for Qwen2-VL?

def apply_rotary_pos_emb_vision(t: torch.Tensor,
                                freqs: torch.Tensor,
                                use_flash_attn=False) -> torch.Tensor:
    t_ = t.float()
    cos = freqs.cos()
    sin = freqs.sin()
    apply_rotary_emb = apply_rotary_emb_torch
    if use_flash_attn:
        from flash_attn.layers.rotary import apply_rotary_emb
    output = apply_rotary_emb(t_, cos, sin).type_as(t)
    return output

use_flash_attn = self.attn_backend == _Backend.FLASH_ATTN
q = apply_rotary_pos_emb_vision(q,
                                rotary_pos_emb,
                                use_flash_attn=use_flash_attn)
k = apply_rotary_pos_emb_vision(k,
                                rotary_pos_emb,
                                use_flash_attn=use_flash_attn)

@cynthieye (Contributor, Author) commented Apr 18, 2025

apply_rotary_pos_emb_vision

qwen2_vl.py

class Qwen2VLForConditionalGeneration(nn.Module, SupportsMultiModal,
                                      SupportsLoRA, SupportsPP):
    def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
        super().__init__()
        ......

        self.language_model = init_vllm_registered_model(
            vllm_config=vllm_config,
            prefix=maybe_prefix(prefix, "language_model"),
            architectures=["Qwen2ForCausalLM"],
        )
        ......
       
            
    def forward(
        self,
        input_ids: torch.Tensor,
        positions: torch.Tensor,
        intermediate_tensors: Optional[IntermediateTensors] = None,
        inputs_embeds: Optional[torch.Tensor] = None,
        **kwargs: object,
    ) -> Union[torch.Tensor, IntermediateTensors]:
        ......

        hidden_states = self.language_model.model(
            input_ids=input_ids,
            positions=positions,
            intermediate_tensors=intermediate_tensors,
            inputs_embeds=inputs_embeds,
        )
        return hidden_states

qwen2.py

from vllm.model_executor.layers.rotary_embedding import get_rope

class Qwen2Attention(nn.Module):
    def __init__(self,
                 hidden_size: int,
                 num_heads: int,
                 num_kv_heads: int,
                 max_position: int = 4096 * 32,
                 rope_theta: float = 10000,
                 cache_config: Optional[CacheConfig] = None,
                 quant_config: Optional[QuantizationConfig] = None,
                 rope_scaling: Optional[Tuple] = None) -> None:
        super().__init__()
        ......

        self.rotary_emb = get_rope(
            self.head_dim,
            rotary_dim=self.head_dim,
            max_position=max_position,
            base=self.rope_theta,
            rope_scaling=rope_scaling,
        )
        ......
    def forward(
        self,
        positions: torch.Tensor,
        hidden_states: torch.Tensor,
        kv_cache: torch.Tensor,
        attn_metadata: AttentionMetadata,
    ) -> torch.Tensor:
        ......
        q, k = self.rotary_emb(positions, q, k)
        ......

@Isotr0py In Qwen2-VL, Qwen2Model from qwen2.py is invoked, and the rotary embedding in Qwen2Model does not use the Triton kernel. During the token-generation stage this becomes a performance bottleneck, accounting for 40-60% of overall inference time.
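For what it's worth, this kind of share can be checked with a profiler pass like the following (an illustrative sketch, not the exact measurement setup behind the numbers above):

import torch
from torch.profiler import ProfilerActivity, profile

def profile_decode_step(model, input_ids, positions):
    # Hypothetical helper: profile one forward pass and inspect how much
    # CUDA time the rotary-embedding ops (reshape/mul/cat) account for.
    with profile(activities=[ProfilerActivity.CPU,
                             ProfilerActivity.CUDA]) as prof:
        with torch.no_grad():
            model(input_ids=input_ids, positions=positions)
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))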

Comment on lines 88 to 85

@Isotr0py (Member) commented:

Suggested change

From:

        if use_flash_attn:
            return apply_rotary_emb(x.unsqueeze(0), cos, sin,
                                    not is_neox_style).squeeze(0)
        else:
            return _apply_rotary_emb_torch(x, cos, sin, is_neox_style)

To:

        if current_platform.is_cuda_alike():
            from vllm_flash_attn.layers.rotary import apply_rotary_emb
            return apply_rotary_emb(x.unsqueeze(0), cos, sin,
                                    not is_neox_style).squeeze(0)
        else:
            return _apply_rotary_emb_torch(x, cos, sin, is_neox_style)

Since Triton is only available on NVIDIA and ROCm GPUs, we can simplify the implementation here.

@cynthieye (Contributor, Author) commented Apr 19, 2025

Hi @Isotr0py, if I replace

        from flash_attn.layers.rotary import apply_rotary_emb
        return apply_rotary_emb(x.unsqueeze(0), cos, sin,
                                not is_neox_style).squeeze(0)

to

        from vllm_flash_attn.layers.rotary import apply_rotary_emb
        return apply_rotary_emb(x.unsqueeze(0), cos, sin,
                                not is_neox_style).squeeze(0)

Running the CI test will result in an error:
https://buildkite.com/vllm/ci/builds/18001#01964d1c-6acb-48b6-8b8e-6b296cdefd17

[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 243, in forward
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] hidden_states = self.self_attn(
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] ^^^^^^^^^^^^^^^
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] return self._call_impl(*args, **kwargs)
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] return forward_call(*args, **kwargs)
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 176, in forward
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] q, k = self.rotary_emb(positions, q, k)
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] return self._call_impl(*args, **kwargs)
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] return forward_call(*args, **kwargs)
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/rotary_embedding.py", line 992, in forward
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] query_rot = _apply_rotary_emb(query_rot, cos, sin, self.is_neox_style)
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/rotary_embedding.py", line 81, in _apply_rotary_emb
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] from vllm_flash_attn.layers.rotary import apply_rotary_emb
[2025-04-19T08:53:45Z] ERROR 04-19 01:53:45 [core.py:386] ModuleNotFoundError: No module named 'vllm_flash_attn'

How should I solve it?

@cynthieye (Contributor, Author) commented:

I used the flash_attn.layers.rotary import mainly because qwen2_vl.py already uses it this way, as follows:

def apply_rotary_pos_emb_vision(t: torch.Tensor,
                                freqs: torch.Tensor,
                                use_flash_attn=False) -> torch.Tensor:
    t_ = t.float()
    cos = freqs.cos()
    sin = freqs.sin()
    apply_rotary_emb = apply_rotary_emb_torch
    if use_flash_attn:
        from flash_attn.layers.rotary import apply_rotary_emb
    output = apply_rotary_emb(t_, cos, sin).type_as(t)
    return output

@Isotr0py (Member) commented Apr 19, 2025

Oh, the import should be from vllm.vllm_flash_attn.layers.rotary import apply_rotary_emb, otherwise it won't work outside the repo folder.

Qwen2-VL uses the original flash_attn due to a vllm_flash_attn compatibility issue in the ViT; however, we shouldn't use it here, since the original flash_attn is not a required dependency.
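Put together, the intended dispatch looks roughly like this (a sketch following the suggested change above; _apply_rotary_emb_torch is the native fallback from the diff, and this is not the exact merged code):

from vllm.platforms import current_platform

def _apply_rotary_emb(x, cos, sin, is_neox_style: bool):
    if current_platform.is_cuda_alike():
        # Shipped with vLLM's bundled flash-attention build, so the external
        # flash_attn package is not required at runtime.
        from vllm.vllm_flash_attn.layers.rotary import apply_rotary_emb
        return apply_rotary_emb(x.unsqueeze(0), cos, sin,
                                not is_neox_style).squeeze(0)
    return _apply_rotary_emb_torch(x, cos, sin, is_neox_style)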

@cynthieye (Contributor, Author) commented Apr 19, 2025

Using vllm.vllm_flash_attn.layers.rotary still results in errors:
https://buildkite.com/vllm/ci/builds/18013#01964e8f-a7a0-4638-baa0-a57dfa5ac8b4

[2025-04-19T15:42:15Z] ERROR 04-19 08:42:15 [core.py:386] return forward_call(*args, **kwargs)
[2025-04-19T15:42:15Z] ERROR 04-19 08:42:15 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-04-19T15:42:15Z] ERROR 04-19 08:42:15 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/rotary_embedding.py", line 992, in forward
[2025-04-19T15:42:15Z] ERROR 04-19 08:42:15 [core.py:386] query_rot = _apply_rotary_emb(query_rot, cos, sin, self.is_neox_style)
[2025-04-19T15:42:15Z] ERROR 04-19 08:42:15 [core.py:386] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-04-19T15:42:15Z] ERROR 04-19 08:42:15 [core.py:386] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/rotary_embedding.py", line 81, in _apply_rotary_emb
[2025-04-19T15:42:15Z] ERROR 04-19 08:42:15 [core.py:386] from vllm.vllm_flash_attn.layers.rotary import apply_rotary_emb
[2025-04-19T15:42:15Z] ERROR 04-19 08:42:15 [core.py:386] ModuleNotFoundError: No module named 'vllm.vllm_flash_attn.layers'

@cynthieye (Contributor, Author) commented:

Are you saying that I need to make modifications to the https://github.com/vllm-project/flash-attention project?
Can I modify it like this:

  1. Create a layers subdirectory in the vllm_flash_attn directory.
  2. Copy flash_attn/layers/rotary.py into the vllm_flash_attn/layers/ directory.
  3. Import/export apply_rotary_emb in vllm_flash_attn/layers/__init__.py.
  4. Ensure that the top-level __init__.py of vllm_flash_attn correctly exports the layers module.

@Isotr0py (Member) commented:

Are you saying that I need to make modifications to the https://github.com/vllm-project/flash-attention project?

Yes. We should not directly copy the source code from flash_attn into vllm_flash_attn, because the copying should happen at compile time during installation. So I recommend making the modifications in https://github.com/vllm-project/flash-attention so that the rotary code is copied during compilation.

@cynthieye (Contributor, Author) commented Apr 20, 2025

The file I want to copy, flash-attention/flash_attn/layers/rotary.py, contains the following import:
from flash_attn.ops.triton.rotary import apply_rotary
If I copy flash-attention/flash_attn/layers/rotary.py into the flash-attention/vllm_flash_attn/ directory during compilation, the error will still occur at runtime, so I need to change that import to:
from vllm.vllm_flash_attn.ops.triton.rotary import apply_rotary

@cynthieye (Contributor, Author) commented:

Are you saying that I need to make modifications to the https://github.com/vllm-project/flash-attention project?

Yes. We should not directly copy the source code from flash_attn into vllm_flash_attn, because the copying should happen at compile time during installation. So I recommend making the modifications in https://github.com/vllm-project/flash-attention so that the rotary code is copied during compilation.

Can you help me merge this flash-attention PR?
vllm-project/flash-attention#64

@cynthieye (Contributor, Author) commented:

@Isotr0py Hi, the changes to the flash-attention repository have been merged. Could you help review and merge this PR again?

@cynthieye force-pushed the main branch 4 times, most recently from 4dca833 to 679db29 on April 18, 2025 17:39
@cynthieye requested a review from Isotr0py on April 18, 2025 21:16
Isotr0py previously approved these changes Apr 19, 2025

@Isotr0py (Member) left a comment:

Overall LGTM now, just left some nits. PTAL!

@Isotr0py added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Apr 19, 2025
@cynthieye force-pushed the main branch 4 times, most recently from 73354b5 to bf7c6fc on April 19, 2025 18:07
@cynthieye force-pushed the main branch 2 times, most recently from 0069947 to c04e840 on April 25, 2025 04:05
@mergify bot removed the needs-rebase label on Apr 25, 2025
…mproved inference performance

Signed-off-by: cynthieye <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>
@mergify bot commented Apr 25, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @cynthieye.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

…mproved inference performance

Signed-off-by: cynthieye <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>
@Isotr0py (Member) left a comment:

LGTM now, thanks for your patience!

@Isotr0py merged commit b22980a into vllm-project:main on Apr 25, 2025
44 checks passed
gshtras added a commit to ROCm/vllm that referenced this pull request Apr 25, 2025
* [BugFix] Remove default multiproc executor `collective_rpc` timeout (vllm-project#17000)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Core][V1][TPU] Enable structured decoding on TPU V1 (vllm-project#16499)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Bugfix] validate urls object for multimodal content parts (vllm-project#16990)

Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>

* add Dockerfile build vllm against torch nightly (vllm-project#16936)

Signed-off-by: Yang Wang <elainewy@meta.com>

* [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 (vllm-project#13305)

Signed-off-by: Sage Moore <sage@neuralmagic.com>
Signed-off-by: root <root@banff-cyxtera-s73-5.ctr.dcgpu>
Signed-off-by: Aleksandr Malyshev <maleksan@amd.com>
Signed-off-by: root <root@banff-cyxtera-s65-4.amd.com>
Signed-off-by: maleksan85 <maleksan@amd.com>
Signed-off-by: <>
Co-authored-by: Sage Moore <sage@neuralmagic.com>
Co-authored-by: root <root@banff-cyxtera-s73-5.ctr.dcgpu>
Co-authored-by: Aleksandr Malyshev <maleksan@amd.com>
Co-authored-by: qli88 <qiang.li2@amd.com>
Co-authored-by: root <root@banff-cyxtera-s65-4.amd.com>

* [V1][DP] More robust DP/EP dummy request coordination (vllm-project#16277)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [BugFix] Revert ROCm Custom Paged Attention Env Flag Check (vllm-project#17022)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* Revert "[Misc] Add S3 environment variables for better support of MinIO." (vllm-project#17021)

* [misc] tune some env vars for GB200 (vllm-project#16992)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [INTEL-HPU][v0] Port delayed sampling to upstream (vllm-project#16949)

Signed-off-by: Michal Adamczyk <michal.adamczyk@intel.com>
Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Co-authored-by: Michal Adamczyk <madamczyk@habana.ai>

* [doc] add download path tips (vllm-project#17013)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Triton FA function takes no keyword arguments (vllm-project#16902)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [V1] Avoid socket errors during shutdown when requests are in in-flight (vllm-project#16807)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [BugFix] llama4 fa3 fix - RuntimeError: scheduler_metadata must have shape (metadata_size) (vllm-project#16998)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Misc] Improve readability of get_open_port function. (vllm-project#17024)

Signed-off-by: gitover22 <qidizou88@gmail.com>

* [Bugfix] Fix AssertionError: skip_special_tokens=False is not supported for Mistral tokenizers (vllm-project#16964)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [CI] Run v1/test_serial_utils.py in CI (vllm-project#16996)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Mistral-format support for compressed-tensors (vllm-project#16803)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Categorize `tests/kernels/` based on kernel type (vllm-project#16799)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Doc] Add top anchor and a note to quantization/bitblas.md (vllm-project#17042)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* Ensure that `pid` passed to `kill_process_tree` is `int` for `mypy` (vllm-project#17051)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [CI] Update structured-output label automation (vllm-project#17055)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Improve Transformers backend model loading QoL (vllm-project#17039)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* `CacheConfig.block_size` should always be `int` when used (vllm-project#17052)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Use `@property` and private field for `data_parallel_rank_local` (vllm-project#17053)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Frontend] Support guidance:no-additional-properties for compatibility with xgrammar (vllm-project#15949)

Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>

* [BugFix][V1] Fix int32 token index overflow when preparing input ids (vllm-project#16806)

* [V1][Spec Decode] Always use argmax for sampling draft tokens  (vllm-project#16899)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [CI/Build] workaround for CI build failure (vllm-project#17070)

Signed-off-by: csy1204 <josang1204@gmail.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>

* [Quantization]add prefix for commandA quantized model (vllm-project#17017)

* [Minor] Use larger batch sizes for A100/B100/B200/MI300x (vllm-project#17073)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix] Enable V1 usage stats (vllm-project#16986)

Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* More informative error when using Transformers backend (vllm-project#16988)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Addendum Fix to support FIPS enabled machines with MD5 hashing (vllm-project#17043)

Signed-off-by: sydarb <areebsyed237@gmail.com>

* [Bugfix][Core] add seq_id_to_seq_group clearing to avoid memory leak when s… (vllm-project#16472)

Signed-off-by: 开哲 <kaizhe.zy@alibaba-inc.com>
Co-authored-by: 开哲 <kaizhe.zy@alibaba-inc.com>

* [V1] Update structured output (vllm-project#16812)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [doc] update to hyperlink (vllm-project#17096)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Add docs for runai_streamer_sharded (vllm-project#17093)

Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Chore] Remove Sampler from Model Code (vllm-project#17084)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Disable enforce_eager for V1 TPU sampler and structured output tests (vllm-project#17016)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Simplify `TokenizerGroup` (vllm-project#16790)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Fix OOT registration test (vllm-project#17099)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [V1][PP] Optimization: continue scheduling prefill chunks (vllm-project#17080)

Signed-off-by: Rui Qiao <ruisearch42@gmail.com>

* [Misc] Remove OLMo2 config copy (vllm-project#17066)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Improve static type checking in `LoRAModelRunnerMixin` (vllm-project#17104)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [V1][Structured Output] Clear xgrammar compiler object when engine core shut down to avoid nanobind leaked warning (vllm-project#16954)

Signed-off-by: shen-shanshan <467638484@qq.com>

* [Frontend] Using matryoshka_dimensions control the allowed output dimensions. (vllm-project#16970)

* Add missing rocm_skinny_gemms kernel test to CI (vllm-project#17060)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Misc] refactor example series - structured outputs (vllm-project#17040)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [V1][Spec Decoding] Add num_drafts and num_accepted_tokens_per_position metrics (vllm-project#16665)

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* [CI] Add automation for the `tool-calling` github label (vllm-project#17118)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Updating builkite job for IBM Power  (vllm-project#17111)

Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>

* existing torch installation pip command fix for docs (vllm-project#17059)

* Molmo Requirements (vllm-project#17026)

Signed-off-by: Eyshika Agarwal <eyshikaengineer@gmail.com>
Signed-off-by: eyshika <eyshikaengineer@gmail.com>

* Add `:markdownhelp:` to `EngineArgs` docs so markdown docstrings render properly (vllm-project#17124)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Improve configs - `LoRAConfig` + `PromptAdapterConfig` (vllm-project#16980)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Docs] Generate correct github links for decorated functions (vllm-project#17125)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Add collective_rpc to llm engine (vllm-project#16999)

Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>

* Add chat template for Llama 4 models (vllm-project#16428)

Signed-off-by: Max de Bayser <mbayser@br.ibm.com>

* [Misc] Add example to run DeepSeek with Ray Serve LLM (vllm-project#17134)

Signed-off-by: Rui Qiao <ruisearch42@gmail.com>

* Better error message for missing mistral params.json (vllm-project#17132)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Use custom address for listening socket (vllm-project#15988)

Signed-off-by: Jens Glaser <glaserj@ornl.gov>

* [FEAT] [ROCm]: AITER Fused MOE V1 Support (vllm-project#16752)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Attention] FA3 decode perf improvement - single mma warp group support for head dim 128 (vllm-project#16864)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* fix float16 support for kimi-vl (vllm-project#17156)

Co-authored-by: zhouzaida <zhouzaida@msh.team>

* [Doc] V1 : Update LoRA status (vllm-project#17133)

Signed-off-by: varun sundar rabindranath <vsundarr@redhat.com>
Co-authored-by: varun sundar rabindranath <vsundarr@redhat.com>

* [Docs] Fix True->true in supported_models.md (vllm-project#17141)

* Move missed `SchedulerConfig` args into scheduler config group in `EngineArgs` (vllm-project#17131)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Misc] Clean up redundant code in uniproc_executor.py (vllm-project#16762)

Signed-off-by: Lifu Huang <lifu.hlf@gmail.com>

* [Bugfix][Misc] Use TritonPlaceholderModule to defensively import triton (vllm-project#15099)

Signed-off-by: Mengqing Cao <cmq0113@163.com>

* [Misc] Benchmark Serving Script Support Appending Results (vllm-project#17028)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Perf]Optimize rotary_emb implementation to use Triton operator for improved inference performance (vllm-project#16457)

Signed-off-by: cynthieye <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>

* [Bugfix] remove fallback in guided_json (int range, patterns) (vllm-project#16725)

Signed-off-by: csy1204 <josang1204@gmail.com>
Co-authored-by: 조상연[플레이스 AI] <sang-yeon.cho@navercorp.com>

* [Quantization][FP8] Add support for FP8 models with input_scale for output projection and QK quantization (vllm-project#15734)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Co-authored-by: Luka Govedič <lgovedic@redhat.com>

* [Doc] Add headings to improve gptqmodel.md (vllm-project#17164)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* Only turn on FastIncrementalDetokenizer when tokenizers >= 0.21.1 (vllm-project#17158)

* [Doc] Add two links to disagg_prefill.md (vllm-project#17168)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [Doc] Move todo out of beam search docstring (vllm-project#17183)

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

* [Bugfix] Fix mistral model tests (vllm-project#17181)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix Mistral ChatCompletionRequest Body Exception (vllm-project#16769)

Signed-off-by: Jasmond Loh <Jasmond.Loh@hotmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* Fix API typo and remove FP8 on V1 restriction

---------

Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Chenyaaang <chenyangli@google.com>
Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
Signed-off-by: Sage Moore <sage@neuralmagic.com>
Signed-off-by: root <root@banff-cyxtera-s73-5.ctr.dcgpu>
Signed-off-by: Aleksandr Malyshev <maleksan@amd.com>
Signed-off-by: root <root@banff-cyxtera-s65-4.amd.com>
Signed-off-by: maleksan85 <maleksan@amd.com>
Signed-off-by: <>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Michal Adamczyk <michal.adamczyk@intel.com>
Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Signed-off-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Signed-off-by: gitover22 <qidizou88@gmail.com>
Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: csy1204 <josang1204@gmail.com>
Signed-off-by: sydarb <areebsyed237@gmail.com>
Signed-off-by: 开哲 <kaizhe.zy@alibaba-inc.com>
Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: shen-shanshan <467638484@qq.com>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>
Signed-off-by: Eyshika Agarwal <eyshikaengineer@gmail.com>
Signed-off-by: eyshika <eyshikaengineer@gmail.com>
Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: Jens Glaser <glaserj@ornl.gov>
Signed-off-by: varun sundar rabindranath <vsundarr@redhat.com>
Signed-off-by: Lifu Huang <lifu.hlf@gmail.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Signed-off-by: cynthieye <yexin93@qq.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Jasmond Loh <Jasmond.Loh@hotmail.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com>
Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Co-authored-by: Yang Wang <elainewy@meta.com>
Co-authored-by: Aleksandr Malyshev <164964928+maleksan85@users.noreply.github.com>
Co-authored-by: Sage Moore <sage@neuralmagic.com>
Co-authored-by: root <root@banff-cyxtera-s73-5.ctr.dcgpu>
Co-authored-by: Aleksandr Malyshev <maleksan@amd.com>
Co-authored-by: qli88 <qiang.li2@amd.com>
Co-authored-by: root <root@banff-cyxtera-s65-4.amd.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Chauncey <chaunceyjiang@gmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Chendi.Xue <chendi.xue@intel.com>
Co-authored-by: Michal Adamczyk <madamczyk@habana.ai>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Co-authored-by: huafeng <qidizou88@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Michael Yao <haifeng.yao@daocloud.io>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Travis Johnson <tsjohnso@us.ibm.com>
Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: Sangyeon Cho <josang1204@gmail.com>
Co-authored-by: Chen Xia <cxia0209@gmail.com>
Co-authored-by: Areeb Syed <areebsyed237@gmail.com>
Co-authored-by: 张宇 <zhangyuygss@outlook.com>
Co-authored-by: 开哲 <kaizhe.zy@alibaba-inc.com>
Co-authored-by: omer-dayan <omdayan@nvidia.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Shanshan Shen <467638484@qq.com>
Co-authored-by: wang.yuqi <noooop@126.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com>
Co-authored-by: Atilla <48064466+atilla00@users.noreply.github.com>
Co-authored-by: Eyshika Agarwal <eyshikaengineer@gmail.com>
Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: jglaser <glaserj@ornl.gov>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: zhouzaida <zhouzaida@msh.team>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: varun sundar rabindranath <vsundarr@redhat.com>
Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: yexin(叶鑫) <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>
Co-authored-by: 조상연[플레이스 AI] <sang-yeon.cho@navercorp.com>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Luka Govedič <lgovedic@redhat.com>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>
Co-authored-by: Alex Brooks <alex.brooks@ibm.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Jasmond L <120363110+JasmondL@users.noreply.github.com>
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
…mproved inference performance (vllm-project#16457)

Signed-off-by: cynthieye <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
…mproved inference performance (vllm-project#16457)

Signed-off-by: cynthieye <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>
adobrzyn pushed a commit to HabanaAI/vllm-fork that referenced this pull request Apr 30, 2025
…mproved inference performance (vllm-project#16457)

Signed-off-by: cynthieye <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>
Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
…mproved inference performance (vllm-project#16457)

Signed-off-by: cynthieye <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>
Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>
@vadiklyutiy (Collaborator) commented:

Performance Gains
Qwen2-VL: achieved a 17% end-to-end speedup when generating 150 output tokens.

@cynthieye Could you clarify which command-line arguments you used for the measurement?

@vadiklyutiy (Collaborator) commented:

@cynthieye @Isotr0py
It seems to me that this PR introduced a discrepancy with the design of the torch.compile integration.

V1 with torch.compile explicitly avoids using any custom implementation. Which custom-op implementation is enabled is controlled by CompilationConfig.custom_ops, and by default the native torch version is used (see CustomOp.dispatch_forward).

With this change, the MRotaryEmbedding class uses the custom CUDA implementation even when torch.compile is enabled.

@youkaichao Please correct me if I'm wrong.

@Isotr0py (Member) commented:

V1 with torch.compile explicitly avoids using any custom implementation. Which custom-op implementation is enabled is controlled by CompilationConfig.custom_ops, and by default the native torch version is used (see CustomOp.dispatch_forward).

But the MRoPE implementation itself doesn't really follow the custom-op design, because it overrides the forward method directly, so dispatch_forward never takes effect.
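Schematically, the mismatch looks like this (a simplified sketch of the CustomOp pattern, not vLLM's actual classes):

import torch

class CustomOp(torch.nn.Module):
    # Simplified stand-in: the real class consults CompilationConfig.custom_ops
    # to pick an implementation; a plain flag is used here instead.
    use_native: bool = True  # torch.compile-friendly default

    def forward(self, *args, **kwargs):
        impl = self.forward_native if self.use_native else self.forward_cuda
        return impl(*args, **kwargs)

    def forward_native(self, *args, **kwargs):
        raise NotImplementedError

    def forward_cuda(self, *args, **kwargs):
        raise NotImplementedError

class MRotaryEmbedding(CustomOp):
    # By overriding forward() directly, this class bypasses the dispatch in
    # CustomOp.forward, so the custom path always runs regardless of the
    # torch.compile configuration; that is the discrepancy described above.
    def forward(self, positions, query, key):
        # ... custom (e.g. Triton-backed) rotary application would go here ...
        return query, key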

@vadiklyutiy (Collaborator) commented:

Yes, correct. And I propose to fix it.
