
[V1] Add V1 support of Qwen2-VL #12128

Merged
merged 25 commits from qwen2-vl-v1 into main on Jan 19, 2025
Conversation

ywang96 (Member) commented Jan 16, 2025

Continued from #11668 to support Qwen2-VL on V1.

Co-authored-by: @imkero, who did the great work on the V1 MRoPE implementation.


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, a small and essential subset of CI tests that quickly catches errors. You can run other CI tests on top of it by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge

🚀

ywang96 and others added 8 commits January 16, 2025 18:19
mergify bot added the documentation label ("Improvements or additions to documentation") on Jan 17, 2025
ywang96 (Member, Author) commented Jan 17, 2025

@baifanxxx @Zhiy-Zhang I have verified that this PR works end-to-end on V1 with all of the following commands, so please try it out and let me know if there are any issues:

VLLM_USE_V1=1 python3 examples/offline_inference/vision_language.py --model-type qwen2_vl
VLLM_USE_V1=1 python3 examples/offline_inference/vision_language.py --model-type qwen2_vl --modality video
VLLM_USE_V1=1 python3 examples/offline_inference/vision_language_multi_image.py --model-type qwen2_vl 

ywang96 changed the title from "[WIP] V1 Qwen2-VL" to "[V1] Add V1 support of Qwen2-VL" on Jan 17, 2025
ywang96 marked this pull request as ready for review on January 17, 2025 at 09:42
WoosukKwon (Collaborator) left a comment

LGTM wrt the V1 model runner.

Comment on lines +150 to +159
self.mrope_positions = torch.zeros((self.max_num_tokens, 3),
                                   dtype=torch.int64,
                                   device=self.device)
self.mrope_positions_cpu = torch.zeros((self.max_num_tokens, 3),
                                       dtype=torch.int64,
                                       device="cpu",
                                       pin_memory=self.pin_memory)

self.mrope_positions = self.mrope_positions.permute((1, 0))
self.mrope_positions_cpu = self.mrope_positions_cpu.permute((1, 0))
Collaborator

A dumb question: why not create the tensors with the permuted shape in the first place?

Member Author

The comment above explains it:

# a permuted mrope_positions tensor satisfying the following
#   properties to allow `torch.compile` work properly:
# - shape: (3, <variable>)
# - stride: (1, <variable>)

but creating the tensors with (3, self.max_num_tokens) directly will result in a stride of (self.max_num_tokens, 1). Alternatively, we could use torch.as_strided, but I think using permute is cleaner.
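
For reference, here is a minimal sketch (illustrative only, not the PR's code) of what the torch.as_strided alternative would look like: allocate a flat buffer and view it column-major, which yields the same constant (1, 3) stride as the permute approach.

import torch

max_num_tokens = 4
buf = torch.zeros(max_num_tokens * 3, dtype=torch.int64)
# view the flat buffer as (3, max_num_tokens) with column-major strides
mrope_positions = torch.as_strided(buf, size=(3, max_num_tokens), stride=(1, 3))
assert mrope_positions.stride() == (1, 3)
# slicing along dim 1 preserves the constant stride, just like with permute
assert mrope_positions[:, :2].stride() == (1, 3)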

@imkero @youkaichao feel free to comment since I'm not a torch.compile expert at all 😄.

Member

Why do we need stride (1, <variable>)? I think stride (<variable>, 1) should work, with a normal (self.max_num_tokens, 3)-shaped tensor.

imkero (Contributor) commented Jan 18, 2025

Here is the reason why I implemented MRoPE this way. @youkaichao @ywang96 @WoosukKwon

I'll try to give a clear explanation below; feel free to ask me if there's still any confusion.

TL;DR

  1. We should pass positions=self.mrope_positions[:, :total_num_scheduled_tokens] in Qwen2-VL's forward pass (just like the common RoPE's positions=self.positions[:total_num_scheduled_tokens]).
  2. However, self.mrope_positions[:, :total_num_scheduled_tokens] is not contiguous, and the torch.compiled code complains about this (causing an assertion error).
  3. I discovered that if we exchange dim 0 and dim 1, we can do self.mrope_positions[:total_num_scheduled_tokens, :] and get a contiguous tensor view, which makes it work.
  4. It's possible to make a tensor that is logically shape (3, dynamic) but physically shape (dynamic, 3) via:

self.mrope_positions = torch.zeros((self.max_num_tokens, 3),
                                   dtype=torch.int64,
                                   device=self.device)
self.mrope_positions = self.mrope_positions.permute((1, 0))

  5. This approach is compatible with Qwen2-VL's impl, and it works with torch.compile.

Reason in detail

These lines of code organize the memory layout of mrope_positions to avoid the following conflict:

  1. Qwen2-VL's model code expects positions in shape (3, seq_len) (dynamic at dim 1)

  2. the torch.compiled model.forward func expects the positions arg to have the following attributes:

    • size: (3, shape[1]) = (3, total_num_scheduled_tokens)
    • stride: (shape[1], 1) = (total_num_scheduled_tokens, 1)

    a.k.a. contiguous

  3. CUDA graphs use persistent buffers (including self.mrope_positions), which means:

    • self.mrope_positions is initialized with shape (3, max_num_tokens)
    • model.forward func's positions arg is given by self.mrope_positions[:, :total_num_scheduled_tokens]
  4. And the positions arg actually has the following attributes:

    • size: (3, total_num_scheduled_tokens)
    • stride: (max_num_tokens, 1)
    • ↑ notice the difference here: the view created by self.mrope_positions[:, :total_num_scheduled_tokens] is no longer contiguous (if total_num_scheduled_tokens < max_num_tokens)

    This conflicts with 2.

Approach in detail

We can avoid this conflict by organizing mrope_positions's memory layout to be F-contiguous (column-major) instead of C-contiguous (row-major) via:

self.mrope_positions = torch.zeros((self.max_num_tokens, 3),
                                    dtype=torch.int64,
                                    device=self.device)
self.mrope_positions = self.mrope_positions.permute((1, 0))

With this approach, torch.compile's assertion on positions becomes:

  • size: (3, shape[1]) = (3, total_num_scheduled_tokens)
  • stride: (1, shape[0]) = (1, 3)

This can always be satisfied because shape[0] of positions is a model-level constant (= 3); it will never change across different seq lens.

Illustrate C-Contiguous

mrope_dim_num = 3 # a constant across all seq

def run(total_num_scheduled_tokens, max_num_tokens):
    positions_c = torch.Tensor([
        list(range(max_num_tokens)),
    ] * mrope_dim_num)

    positions_c = positions_c[:, :total_num_scheduled_tokens]

    print("total_num_scheduled_tokens", total_num_scheduled_tokens)
    print("max_num_tokens", max_num_tokens)
    print("tensor", positions_c)
    print("shape:", positions_c.shape) # (3, total_num_scheduled_tokens)
    print("stride:", positions_c.stride()) # (max_num_tokens, 1)
    print()

run(total_num_scheduled_tokens=2, max_num_tokens=4)
run(total_num_scheduled_tokens=3, max_num_tokens=4)

it prints

total_num_scheduled_tokens 2
max_num_tokens 4
tensor tensor([[0., 1.],
        [0., 1.],
        [0., 1.]])
shape: torch.Size([3, 2])
stride: (4, 1)

total_num_scheduled_tokens 3
max_num_tokens 4
tensor tensor([[0., 1., 2.],
        [0., 1., 2.],
        [0., 1., 2.]])
shape: torch.Size([3, 3])
stride: (4, 1)

physically these two tensors would be (* means unused)

0   1   *   *   0   1   *   *   0   1   *   *
0   1   2   *   0   1   2   *   0   1   2   *

Illustrate F-Contiguous (this PR's approach)

mrope_dim_num = 3 # a constant across all seq

def run(total_num_scheduled_tokens, max_num_tokens):
    positions_f = torch.Tensor([
        [i] * mrope_dim_num for i in range(max_num_tokens)
    ])
    positions_f = positions_f.permute((1, 0))

    positions_f = positions_f[:, :total_num_scheduled_tokens]

    print("total_num_scheduled_tokens", total_num_scheduled_tokens)
    print("max_num_tokens", max_num_tokens)
    print("tensor", positions_f)
    print("shape:", positions_f.shape) # (3, total_num_scheduled_tokens)
    print("stride:", positions_f.stride()) # (max_num_tokens, 1)
    print()

run(total_num_scheduled_tokens=2, max_num_tokens=4)
run(total_num_scheduled_tokens=3, max_num_tokens=4)

it prints

total_num_scheduled_tokens 2
max_num_tokens 4
tensor tensor([[0., 1.],
        [0., 1.],
        [0., 1.]])
shape: torch.Size([3, 2])
stride: (1, 3)

total_num_scheduled_tokens 3
max_num_tokens 4
tensor tensor([[0., 1., 2.],
        [0., 1., 2.],
        [0., 1., 2.]])
shape: torch.Size([3, 3])
stride: (1, 3)

physically these two tensors would be (* means unused)

0   0   0   1   1   1   *   *   *   *   *   *
0   0   0   1   1   1   2   2   2   *   *   *

↑ As total_num_scheduled_tokens grows, positions_f is still contiguous, and torch.compile is happy to work with this contiguous tensor!

imkero (Contributor) commented Jan 18, 2025

Maybe we should point out in that comment that the stride of mrope_positions is a constant, and add a link to the discussion here:

# a permuted (column-major) mrope_positions tensor satisfying the following
#   properties to allow `torch.compile` work properly:
# - shape: (3, <variable>)
# - stride: (1, 3)
# see https://github.com/vllm-project/vllm/pull/12128#discussion_r1920908088 for detail.

imkero (Contributor) commented Jan 18, 2025

The torch.compiled code is attached here for reference.

An approach that won't work

source code:

self.mrope_positions = torch.zeros((3, self.max_num_tokens),
                                    dtype=torch.int64,
                                    device=self.device)

compiled code:

def call(args):
    arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1 = args
    args.clear()
    s0 = arg1_1
    assert_size_stride(arg0_1, (s0, 3584), (3584, 1))
    assert_size_stride(arg2_1, (3584, ), (1, ))
    assert_size_stride(arg3_1, (4608, 3584), (3584, 1))
    assert_size_stride(arg4_1, (4608, ), (1, ))
    assert_size_stride(arg5_1, (32768, 128), (128, 1))
    assert_size_stride(arg6_1, (3, s0), (s0, 1)) # exception thrown here

exception thrown here:

[rank0]: Traceback (most recent call last):
...
[rank0]:   File "/data/home/imkero/workspace/git/vllm-project/vllm/vllm/v1/worker/gpu_worker.py", line 134, in determine_num_available_blocks
[rank0]:     self.model_runner.profile_run()
[rank0]:   File "/data/home/imkero/workspace/git/vllm-project/vllm/vllm/v1/worker/gpu_model_runner.py", line 836, in profile_run
[rank0]:     hidden_states = self._dummy_run(self.model, self.max_num_tokens,
[rank0]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
[rank0]:   File "/data/home/imkero/workspace/git/vllm-project/vllm/vllm/model_executor/models/qwen2_vl.py", line 1205, in forward
[rank0]:     hidden_states = self.language_model.model(
[rank0]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^
...
[rank0]:   File "/data/home/imkero/.cache/vllm/torch_compile_cache/b06b53d306/rank_0/inductor_cache/il/ciloq6dlbeaiigv2l2eqyzyv7qihhat45i5xy4fjrhxyvuj3s6sb.py", line 319, in call
[rank0]:     assert_size_stride(arg6_1, (3, s0), (s0, 1))
[rank0]: AssertionError: expected size 3==3, stride 512==504 at dim=0

(i.e., the compiled code expects the dim-0 stride to equal s0 = 504, the sliced token count, but the persistent buffer's actual dim-0 stride is 512, its allocated max_num_tokens)

This PR's approach

source code:

self.mrope_positions = torch.zeros((self.max_num_tokens, 3),
                                    dtype=torch.int64,
                                    device=self.device)
self.mrope_positions = self.mrope_positions.permute((1, 0))

compiled code:

def call(args):
    arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1 = args
    args.clear()
    s0 = arg1_1
    assert_size_stride(arg0_1, (s0, 3584), (3584, 1))
    assert_size_stride(arg2_1, (3584, ), (1, ))
    assert_size_stride(arg3_1, (4608, 3584), (3584, 1))
    assert_size_stride(arg4_1, (4608, ), (1, ))
    assert_size_stride(arg5_1, (32768, 128), (128, 1))
    assert_size_stride(arg6_1, (3, s0), (1, 3))

Member

Thanks for the explanation! I think it might be a torch.compile bug; it should handle non-contiguous tensors as well. Let's take this workaround first and investigate later.

imkero (Contributor) commented Jan 23, 2025

Thanks for the explanation! I think it might be a torch.compile bug; it should handle non-contiguous tensors as well. Let's take this workaround first and investigate later.

@youkaichao Here I found another workaround (maybe a more direct approach?) that makes torch.compile "realize" that the mrope_positions tensor is not always contiguous, so it works with mrope_positions' original shape (3, seq_len).

Approach

self.mrope_positions = torch.zeros((3, self.max_num_tokens + 1),
                                   dtype=torch.int64,
                                   device=self.device)

How it works:

  • extend the size of mrope_positions in dim 1 (plus 1 on it)
  • mrope_positions will then always be non-contiguous in the upcoming usage (self.mrope_positions[:, :total_num_scheduled_tokens]), since total_num_scheduled_tokens is always less than max_num_tokens + 1
    • and torch.compile will notice this while compiling
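
A quick standalone check of this property (a minimal sketch, not the PR's code): because dim 1 is over-allocated by one, every slice [:, :n] with n <= max_num_tokens keeps the constant stride (max_num_tokens + 1, 1) and is therefore never contiguous.

import torch

max_num_tokens = 8
mrope_positions = torch.zeros((3, max_num_tokens + 1), dtype=torch.int64)

for n in (2, max_num_tokens):  # n stands in for total_num_scheduled_tokens
    view = mrope_positions[:, :n]
    print(view.shape, view.stride(), view.is_contiguous())
# torch.Size([3, 2]) (9, 1) False
# torch.Size([3, 8]) (9, 1) False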

Compiled code

def call(args):
    arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1 = args
    args.clear()
    s0 = arg1_1
    s2 = arg7_1
    assert_size_stride(arg0_1, (s0, 3584), (3584, 1))
    assert_size_stride(arg2_1, (3584, ), (1, ))
    assert_size_stride(arg3_1, (4608, 3584), (3584, 1))
    assert_size_stride(arg4_1, (4608, ), (1, ))
    assert_size_stride(arg5_1, (32768, 128), (128, 1))
    assert_size_stride(arg6_1, (3, s0), (s2, 1))

We can see that:

  • in the compiled code, the stride of dim 0 (s2) becomes independent of the size of dim 1 (s0), which means non-contiguous tensors are supported.

Member

this is amazing! cc @ywang96 if we can make the change, it will make the code look simpler.

Member Author

this is amazing! cc @ywang96 if we can make the change, it will make the code look simpler.

Yep - changed in #12352

vllm/v1/worker/gpu_model_runner.py (outdated review thread, resolved)
ywang96 added the ready label ("ONLY add when PR is ready to merge/full CI is needed") on Jan 19, 2025
Isotr0py (Collaborator) left a comment

LGTM now!

ywang96 (Member, Author) commented Jan 19, 2025

Looks like the tensor splitting breaks the embeddings-as-input interface, so I think we will need to move the split out of the encoder forward pass.
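
To illustrate the distinction (a hypothetical sketch with made-up names, not the actual vLLM code): if the encoder splits its concatenated output per image inside its forward pass, callers that supply precomputed embeddings bypass that split; doing the split in the caller keeps both paths consistent.

import torch

def encoder_forward(pixel_values: torch.Tensor) -> torch.Tensor:
    # stand-in for a vision encoder: returns one concatenated
    # (total_patches, hidden_size) tensor for all images
    return torch.randn(pixel_values.shape[0] * 4, 8)

def split_image_embeddings(embeddings: torch.Tensor,
                           patches_per_image: list[int]) -> list[torch.Tensor]:
    # splitting outside the encoder works both for freshly encoded
    # outputs and for embeddings passed in directly
    return list(embeddings.split(patches_per_image, dim=0))

# encoder path
per_image = split_image_embeddings(encoder_forward(torch.randn(2, 3)), [4, 4])
# embeddings-as-input path reuses the same split
per_image_precomputed = split_image_embeddings(torch.randn(8, 8), [4, 4])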

DarkLight1337 (Member)

@imkero it would be great if you have time to help look into the failing embedding tests (since you originally created them)

DarkLight1337 (Member)

Nvm, I think I found the issue; solving it now.

youkaichao merged commit 81763c5 into main on Jan 19, 2025
56 of 58 checks passed
youkaichao deleted the qwen2-vl-v1 branch on January 19, 2025 at 11:52
joennlae pushed a commit to 44ai-labs/vllm that referenced this pull request Jan 19, 2025
joennlae pushed a commit to 44ai-labs/vllm that referenced this pull request Jan 19, 2025
lckr pushed a commit to lckr/vllm that referenced this pull request Jan 19, 2025
abmfy pushed a commit to abmfy/vllm-flashinfer that referenced this pull request Jan 24, 2025
abmfy pushed a commit to abmfy/vllm-flashinfer that referenced this pull request Jan 24, 2025
kzawora-intel added a commit to HabanaAI/vllm-fork that referenced this pull request Jan 28, 2025
rasmith pushed a commit to rasmith/vllm that referenced this pull request Jan 30, 2025
Isotr0py pushed a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 3, 2025
…ntion (#399)

* [V1] Avoid sending text prompt to core engine (vllm-project#11963)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [CI/Build] Add markdown linter (vllm-project#11857)

Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>

* [Model] Initialize support for Deepseek-VL2 models (vllm-project#11578)

Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Hardware][CPU] Multi-LoRA implementation for the CPU backend (vllm-project#11100)

Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Oleg Mosalov <oleg@krai.ai>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>

* [Hardware][TPU] workaround fix for MoE on TPU (vllm-project#11764)

* [V1][Core][1/n] Logging and Metrics (vllm-project#11962)

Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>

* [Model] Support GGUF models newly added in `transformers` 4.46.0 (vllm-project#9685)

Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction (vllm-project#11973)

Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>

* [MISC] fix typo in kv transfer send recv test (vllm-project#11983)

* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. (vllm-project#11979)

* [CI][Spec Decode] fix: broken test for EAGLE model (vllm-project#11972)

Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>

* [Misc] Fix Deepseek V2 fp8 kv-scale remapping (vllm-project#11947)

Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>

* [Misc]Minor Changes about Worker (vllm-project#11555)

Signed-off-by: Chenguang Li <757486878@qq.com>

* [platform] add ray_device_key (vllm-project#11948)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* Fix Max Token ID for Qwen-VL-Chat (vllm-project#11980)

Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>

* [Kernel] unified_attention for Attention.forward (vllm-project#11967)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Doc][V1] Update model implementation guide for V1 support (vllm-project#11998)

Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>

* [Doc] Organise installation documentation into categories and tabs (vllm-project#11935)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [platform] add device_control env var (vllm-project#12009)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Platform] Move get_punica_wrapper() function to Platform (vllm-project#11516)

Signed-off-by: Shanshan Shen <467638484@qq.com>

* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function (vllm-project#11982)

Signed-off-by: elijah <f1renze.142857@gmail.com>

* [Doc] Fix build from source and installation link in README.md (vllm-project#12013)

Signed-off-by: Yikun <yikunkero@gmail.com>

* Using list

* [Bugfix] Fix deepseekv3 gate bias error (vllm-project#12002)

Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>

* Revert "[misc] improve memory profiling (vllm-project#11809)"

This reverts commit 889e662.

* Multi-lingual P3L (#356)

* Commiting the *multilingual* P3L test.

* Created a *multi-lingual* P3L test.

* Making ruff happy.

* .

* Added a reference to the language-scripture Confluence table.

* Typo fixing.

* Harmonizing naming.

* Fixing comments in the header.

---------

Co-authored-by: Alexei V. Ivanov <alivanov@banff-cyxtera-s65-4.amd.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>

* Trying to make scales work with compileable attention

* [Docs] Add Sky Computing Lab to project intro (vllm-project#12019)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [HPU][Bugfix] set_forward_context and CI test execution (vllm-project#12014)

Signed-off-by: Konrad Zawora <kzawora@habana.ai>

* [Doc] Update Quantization Hardware Support Documentation (vllm-project#12025)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [HPU][misc] add comments for explanation (vllm-project#12034)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Bugfix] Fix various bugs in multi-modal processor (vllm-project#12031)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Kernel] Revert the API change of Attention.forward (vllm-project#12038)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Platform] Add output for Attention Backend (vllm-project#11981)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention (vllm-project#12040)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* Explain where the engine args go when using Docker (vllm-project#12041)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Docs lint

* [Doc]: Update the Json Example of the `Engine Arguments` document (vllm-project#12045)

* [Misc]  Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping (vllm-project#11924)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Kernel] Support MulAndSilu (vllm-project#11624)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py (vllm-project#12046)

Signed-off-by: Konrad Zawora <kzawora@habana.ai>

* [Platform] move current_memory_usage() into platform (vllm-project#11369)

Signed-off-by: Shanshan Shen <467638484@qq.com>

* [V1][BugFix] Fix edge case in VLM scheduling (vllm-project#12065)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Misc] Add multipstep chunked-prefill support for FlashInfer (vllm-project#10467)

* [core] Turn off GPU communication overlap for Ray executor (vllm-project#12051)

Signed-off-by: Rui Qiao <ruisearch42@gmail.com>

* [core] platform agnostic executor via collective_rpc (vllm-project#11256)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Doc] Update examples to remove SparseAutoModelForCausalLM (vllm-project#12062)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager (vllm-project#12003)

* Fix: cases with empty sparsity config (vllm-project#12057)

Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>

* Type-fix: make execute_model output type optional (vllm-project#12020)

* [Platform] Do not raise error if _Backend is not found (vllm-project#12023)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>

* [Model]: Support internlm3 (vllm-project#12037)

* Misc: allow to use proxy in `HTTPConnection` (vllm-project#12042)

Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>

* [Misc][Quark] Upstream Quark format to VLLM (vllm-project#10765)

Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>

* [Doc]: Update `OpenAI-Compatible Server` documents (vllm-project#12082)

* [Bugfix] use right truncation for non-generative tasks (vllm-project#12050)

Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>

* [V1][Core] Autotune encoder cache budget (vllm-project#11895)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [Bugfix] Fix _get_lora_device for HQQ marlin (vllm-project#12090)

Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>

* Allow hip sources to be directly included when compiling for rocm. (vllm-project#12087)

* [Core] Default to using per_token quantization for fp8 when cutlass is supported. (vllm-project#8651)

Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: mgoin <michael@neuralmagic.com>

* [Doc] Add documentation for specifying model architecture (vllm-project#12105)

* Various cosmetic/comment fixes (vllm-project#12089)

Signed-off-by: mgoin <michael@neuralmagic.com>

* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 (vllm-project#12067)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Support torchrun and SPMD-style offline inference (vllm-project#12071)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [core] LLM.collective_rpc interface and RLHF example (vllm-project#12084)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Bugfix] Fix max image feature size for Llava-one-vision (vllm-project#12104)

Signed-off-by: Roger Wang <ywang@roblox.com>

* Enable user marker for vllm profiling (#357)

* Enable user marker for vllm profiling

---------

Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>

* [misc] Add LoRA kernel micro benchmarks (vllm-project#11579)

* [Model] Add support for deepseek-vl2-tiny model (vllm-project#12068)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Deepseek V3 support (#364)

* Changing the hard coded datatype to see if it's enough for the model to work

* Picking the upstrteam moe kernel version

* make upstream fix for v3 also works for rocm v2

* Conditional fnuz dtype

* Requantizing from fn to fnuz

* Requantizing moe as well

* Actually requantizing moe weights

* Conditional requantization and assert on padding in block quant

* Format

---------

Co-authored-by: charlifu <charlifu@amd.com>

* [Bugfix] Set enforce_eager automatically for mllama (vllm-project#12127)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix] Fix a path bug in disaggregated prefill example script. (vllm-project#12121)

Signed-off-by: Kuntai Du <kuntai@uchicago.edu>

* [CI]add genai-perf benchmark in nightly benchmark (vllm-project#10704)

Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>

* [Doc] Add instructions on using Podman when SELinux is active (vllm-project#12136)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [Bugfix] Fix issues in CPU build Dockerfile (vllm-project#12135)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [BugFix] add more `is not None` check in VllmConfig.__post_init__ (vllm-project#12138)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Misc] Add deepseek_vl2 chat template (vllm-project#12143)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [ROCm][MoE] moe tuning support for rocm (vllm-project#12049)

Signed-off-by: Divakar Verma <divakar.verma@amd.com>

* [V1] Move more control of kv cache initialization from model_executor to EngineCore (vllm-project#11960)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>

* [Misc][LoRA] Improve the readability of LoRA error messages (vllm-project#12102)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [CI/Build][CPU][Bugfix] Fix CPU CI (vllm-project#12150)

Signed-off-by: jiang1.li <jiang1.li@intel.com>

* [core] allow callable in collective_rpc (vllm-project#12151)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Bugfix] Fix score api for missing max_model_len validation (vllm-project#12119)

Signed-off-by: Wallas Santos <wallashss@ibm.com>

* [Bugfix] Mistral tokenizer encode accept list of str (vllm-project#12149)

Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>

* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant (vllm-project#12134)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [torch.compile] disable logging when cache is disabled (vllm-project#12043)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [misc] fix cross-node TP (vllm-project#12166)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [AMD][CI/Build][Bugfix] use pytorch stale wheel (vllm-project#12172)

Signed-off-by: hongxyan <hongxyan@amd.com>

* [core] further polish memory profiling (vllm-project#12126)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Docs] Fix broken link in SECURITY.md (vllm-project#12175)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Model] Port deepseek-vl2 processor, remove dependency (vllm-project#12169)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [core] clean up executor class hierarchy between v1 and v0 (vllm-project#12171)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Support register quantization method out-of-tree (vllm-project#11969)

* [V1] Collect env var for usage stats (vllm-project#12115)

* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu (vllm-project#12152)

Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>

* [Bugfix] Fix multi-modal processors for transformers 4.48 (vllm-project#12187)

* [torch.compile] store inductor compiled Python file (vllm-project#12182)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* benchmark_serving support --served-model-name param (vllm-project#12109)

Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>

* [Misc] Add BNB support to GLM4-V model (vllm-project#12184)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [V1] Add V1 support of Qwen2-VL (vllm-project#12128)

Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model] Support for fairseq2 Llama (vllm-project#11442)

Signed-off-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>

* [Bugfix] Fix num_heads value for simple connector when tp enabled (vllm-project#12074)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>

* [torch.compile] fix sym_tensor_indices (vllm-project#12191)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* Move linting to `pre-commit` (vllm-project#11975)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [DOC] Fix typo in docstring and assert message (vllm-project#12194)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [DOC] Add missing docstring in LLMEngine.add_request() (vllm-project#12195)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [Bugfix] Fix incorrect types in LayerwiseProfileResults (vllm-project#12196)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [Model] Add Qwen2 PRM model support (vllm-project#12202)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Core] Interface for accessing model from `VllmRunner` (vllm-project#10353)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [misc] add placeholder format.sh (vllm-project#12206)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [CI/Build] Remove dummy CI steps (vllm-project#12208)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [CI/Build] Make pre-commit faster (vllm-project#12212)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model] Upgrade Aria to transformers 4.48 (vllm-project#12203)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [misc] print a message to suggest how to bypass commit hooks (vllm-project#12217)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [core][bugfix] configure env var during import vllm (vllm-project#12209)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [V1] Remove `_get_cache_block_size` (vllm-project#12214)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Misc] Pass `attention` to impl backend (vllm-project#12218)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Bugfix] Fix `HfExampleModels.find_hf_info` (vllm-project#12223)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [CI] Pass local python version explicitly to pre-commit mypy.sh (vllm-project#12224)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* Using ROCm6.3.1 base docker and building hipblas-common (#366)

* [Misc] Update CODEOWNERS (vllm-project#12229)

* fix: update platform detection for M-series arm based MacBook processors (vllm-project#12227)

Signed-off-by: isikhi <huseyin.isik000@gmail.com>

* [misc] add cuda runtime version to usage data (vllm-project#12190)

Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Roger Wang <ywang@roblox.com>

* [bugfix] catch xgrammar unsupported array constraints (vllm-project#12210)

Signed-off-by: Jason Cheng <jasoncky96@gmail.com>

* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) (vllm-project#12222)

Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* Add quantization and guided decoding CODEOWNERS (vllm-project#12228)

Signed-off-by: mgoin <michael@neuralmagic.com>

* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork (vllm-project#11777)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 (vllm-project#12230)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [ci/build] disable failed and flaky tests (vllm-project#12240)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` (vllm-project#12244)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration  (vllm-project#12237)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Misc] Remove redundant TypeVar from base model (vllm-project#12248)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix mm_limits access for merged multi-modal processor (vllm-project#12252)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [torch.compile] transparent compilation with more logging (vllm-project#12246)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [V1][Bugfix] Fix data item ordering in mixed-modality inference (vllm-project#12259)

Signed-off-by: Roger Wang <ywang@roblox.com>

* Remove pytorch comments for outlines + compressed-tensors (vllm-project#12260)

Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>

* [Platform] improve platforms getattr (vllm-project#12264)

Signed-off-by: Mengqing Cao <cmq0113@163.com>

* [ci/build] update nightly torch for gh200 test (vllm-project#12270)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Bugfix] fix race condition that leads to wrong order of token returned (vllm-project#10802)

Signed-off-by: Jannis Schönleber <joennlae@gmail.com>

* [Kernel] fix moe_align_block_size error condition (vllm-project#12239)

Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>

* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types  (vllm-project#10907)

Signed-off-by: rickyx <rickyx@anyscale.com>

* [Bugfix] Multi-sequence broken (vllm-project#11898)

Signed-off-by: Andy Lo <andy@mistral.ai>

* [Misc] Remove experimental dep from tracing.py (vllm-project#12007)

Signed-off-by: Adrian Cole <adrian.cole@elastic.co>

* [Misc] Set default backend to SDPA for get_vit_attn_backend (vllm-project#12235)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Core] Free CPU pinned memory on environment cleanup (vllm-project#10477)

* Update pre-commit.yml (#374)

* Update pre-commit.yml

* Reapplying missing format

* New codespell exclude location

---------

Co-authored-by: Kevin H. Luu <kevin@anyscale.com>

* [bugfix] moe tuning. rm is_navi() (vllm-project#12273)

Signed-off-by: Divakar Verma <divakar.verma@amd.com>

* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes (vllm-project#12277)

Signed-off-by: maleksan85 <maleksan@amd.com>
Co-authored-by: maleksan85 <maleksan@amd.com>

* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose (vllm-project#12281)

Signed-off-by: Hongxia Yang <hongxyan@amd.com>

* [VLM] Simplify post-processing of replacement info (vllm-project#12269)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [ci/lint] Add back default arg for pre-commit (vllm-project#12279)

Signed-off-by: kevin <kevin@anyscale.com>

* [CI] add docker volume prune to neuron CI (vllm-project#12291)

Signed-off-by: Liangfu Chen <liangfc@amazon.com>

* [Ci/Build] Fix mypy errors on main (vllm-project#12296)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` (vllm-project#12288)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [core] separate builder init and builder prepare for each batch (vllm-project#12253)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Build] update requirements of no-device (vllm-project#12299)

Signed-off-by: Mengqing Cao <cmq0113@163.com>

* [Core] Support fully transparent sleep mode (vllm-project#11743)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [VLM] Avoid unnecessary tokenization (vllm-project#12310)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model][Bugfix]: correct Aria model output (vllm-project#12309)

Signed-off-by: xffxff <1247714429@qq.com>

* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 (vllm-project#12313)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [Doc] Add docs for prompt replacement (vllm-project#12318)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Fix the error in the tip for the --lora-modules parameter (vllm-project#12319)

Signed-off-by: wangerxiao <863579016@qq.com>

* [Misc]  Improve the readability of BNB error messages  (vllm-project#12320)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init (#367)

* switching detokenize flag to be False

* detokenize = False for benchmarks

* restoring default in main vllm code for detokenize

* removing extra spaces

* moving detokenize to flag

* adding support for token ids

---------

Co-authored-by: maleksan85 <maleksan@amd.com>

* [Bugfix] Fix HPU multiprocessing executor (vllm-project#12167)

Signed-off-by: Konrad Zawora <kzawora@habana.ai>

* [Core] Support `reset_prefix_cache` (vllm-project#12284)

* [Frontend][V1] Online serving performance improvements (vllm-project#12287)

* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD (vllm-project#12282)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* FP8 FA fixes (#381)

* FP8 FA fixes

Summary:
Add missing clamp and fix reciprocal scale computation.

* linter

* Returning the use of the proper stream in allreduce (#382)

* [Bugfix] Fixing  AMD LoRA CI test. (vllm-project#12329)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Docs] Update FP8 KV Cache documentation (vllm-project#12238)

Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Docs] Document vulnerability disclosure process (vllm-project#12326)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [V1] Add `uncache_blocks` (vllm-project#12333)

* [doc] explain common errors around torch.compile (vllm-project#12340)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update (vllm-project#12338)

Signed-off-by: zhenwei <zhenweiliu@habana.ai>

* [Bugfix] Fix k_proj's bias for whisper self attention (vllm-project#12342)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Kernel] Flash Attention 3 Support (vllm-project#12093)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Doc] Troubleshooting errors during model inspection (vllm-project#12351)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1] Simplify M-RoPE (vllm-project#12352)

Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: imkero <kerorek@outlook.com>

* [Bugfix] Fix broken internvl2 inference with v1 (vllm-project#12360)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [core] add wake_up doc and some sanity check (vllm-project#12361)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [torch.compile] decouple compile sizes and cudagraph sizes (vllm-project#12243)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [FP8][Kernel] Dynamic kv cache scaling factors computation (vllm-project#11906)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>

* [TPU] Update TPU CI to use torchxla nightly on 20250122 (vllm-project#12334)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Docs] Document Phi-4 support (vllm-project#12362)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order  (vllm-project#11528)

Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>

* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script (vllm-project#12357)

Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>

* [Docs] Add meetup slides (vllm-project#12345)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Using pytorch commit past the point when rowwise PR (pytorch/pytorch#144432) was merged (#384)

* [Docs] Update spec decode + structured output in compat matrix (vllm-project#12373)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [V1][Frontend] Coalesce bunched `RequestOutput`s (vllm-project#12298)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>

* Set weights_only=True when using torch.load() (vllm-project#12366)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Bugfix] Path join when building local path for S3 clone (vllm-project#12353)

Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>

* Update compressed-tensors version (vllm-project#12367)

* [V1] Increase default batch size for H100/H200 (vllm-project#12369)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [perf] fix perf regression from vllm-project#12253 (vllm-project#12380)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Use VisionArena Dataset for VLM Benchmarking (vllm-project#12389)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [ci/build] fix wheel size check (vllm-project#12396)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Hardware][Gaudi][Doc] Add missing step in setup instructions (vllm-project#12382)

* [ci/build] sync default value for wheel size (vllm-project#12398)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Enable proxy support in benchmark script (vllm-project#12356)

Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>

* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build (vllm-project#12375)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* Applying scales rename to fp8 config (#387)

* [Misc] Remove deprecated code (vllm-project#12383)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). (vllm-project#12405)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* Dev-docker Documentation Updates (#378)

* Dev-docker Documentation Updates

Minor updates to several sections, with links to other documents where appropriate.

* Fix formatting of GEMM filename

* README cleanup

- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording

* Expanded sample commands for Latency and Throughput

* Fix markdown links

* Fix pre-commit errors

* Updates from review

Initial updates to incorporate feedback from a review session held with @t-parry

* Update script args to match current recommendations

* Remove recommended max-num-seqs values for now

---------

Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>

* [Bugfix][Kernel] Fix moe align block issue for mixtral (vllm-project#12413)

* [Bugfix] Fix BLIP-2 processing (vllm-project#12412)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 (vllm-project#12408)

Signed-off-by: Divakar Verma <divakar.verma@amd.com>

* [Misc] Add FA2 support to ViT MHA layer (vllm-project#12355)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [TPU][CI] Update torchxla version in requirement-tpu.txt (vllm-project#12422)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Misc][Bugfix] FA3 support to ViT MHA layer (vllm-project#12435)

Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>

* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync (vllm-project#12094)

Signed-off-by: Keyun Tong <tongkeyun@gmail.com>

* [V1][Bugfix] Fix assertion when mm hashing is turned off (vllm-project#12439)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 (vllm-project#12445)

* [Frontend] generation_config.json for  maximum tokens(vllm-project#12242)

Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 (vllm-project#12417)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>

* [Bugfix/CI] Fix broken kernels/test_mha.py (vllm-project#12450)

* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 (vllm-project#12434)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Build/CI] Fix libcuda.so linkage (vllm-project#12424)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [Frontend] Rerank API (Jina- and Cohere-compatible API)  (vllm-project#12376)

Signed-off-by: Kyle Mistele <kyle@mistele.com>
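
Since the commit subject alone doesn't show the shape of the new API, here is a minimal client sketch; the route and field names are assumptions based on the Jina-compatible convention the title mentions, not verified against the PR:

```python
# Hypothetical rerank request; the endpoint path and payload keys are assumed
# from the Jina-style convention named in the commit subject.
import requests

resp = requests.post(
    "http://localhost:8000/rerank",
    json={
        "model": "BAAI/bge-reranker-base",  # any reranker model served by vLLM
        "query": "What is vLLM?",
        "documents": ["vLLM is an LLM serving engine.", "Bananas are yellow."],
    },
)
print(resp.json())  # results carry each document's index and relevance score
```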

* [DOC] Add link to vLLM blog (vllm-project#12460)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [V1] Avoid list creation in input preparation (vllm-project#12457)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Frontend] Support scores endpoint in run_batch (vllm-project#12430)

Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>

* [Bugfix] Fix Granite 3.0 MoE model loading (vllm-project#12446)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata (vllm-project#12464)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [V1][Minor] Minor optimizations for update_from_output (vllm-project#12454)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix] Fix gpt2 GGUF inference (vllm-project#12467)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Build] Only build 9.0a for scaled_mm and sparse kernels (vllm-project#12339)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [V1][Metrics] Add initial Prometheus logger (vllm-project#12416)

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* [V1][CI/Test] Do basic test for top-p & top-k sampling (vllm-project#12469)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [FlashInfer] Upgrade to 0.2.0 (vllm-project#11194)

Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>

* Support FP8 FA from Quark format (#388)

* Support FP8 FA from Quark format

* Support FP8 FA from Quark format

* nit: update comment

* Direct call on ROCm

* 20250127 docs update (#392)

* updating code blocks

* typo

* updated manifest

* Including feedback

* whitespace

* Deepseek instructions

* hyperlink fix

* hyperlink fix

* updating what is new

* cpx update

* typo

* whitespace

* whitespace

* Faster Custom Paged Attention kernels (#372)

* integrate new cpa kernel, update tests and benchmark

* added comments to mfma4 kernel

* further comments for mfma16 kernel

* clang-format

* Lint

* add flag for logits rtz conversion and disable by default

* lint

* [Bugfix]: Fix paged attention unit tests of #372 (#389)

* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and  `csrc/rocm/attention.cu`.

* improve code documentation.

* lint

---------

Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>

---------

Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: TJian <tunjian1996@gmail.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>

* Using a more precise profiling on ROCm to properly account for weights padding (#394)

* Update Dockerfile.rocm

* [Bugfix]: include the env variables required for running FastSyncLLM

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* fix pre-commit lint

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

---------

Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: Chenguang Li <757486878@qq.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Shanshan Shen <467638484@qq.com>
Signed-off-by: elijah <f1renze.142857@gmail.com>
Signed-off-by: Yikun <yikunkero@gmail.com>
Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: yisheng <yi.sheng@intel.com>
Signed-off-by: Abatom <abzhonghua@gmail.com>
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
Signed-off-by: Sourashis Roy <sroy@roblox.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Wallas Santos <wallashss@ibm.com>
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: yan ma <yan.ma@intel.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Signed-off-by: Ye Qi <yeq@meta.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
Signed-off-by: Kuntai Du <kuntai@uchicago.edu>
Signed-off-by: Ren MinMin <renmm6@chinaunicom.cn>
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
Signed-off-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: hongxyan <hongxyan@amd.com>
Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>
Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Signed-off-by: Martin Gleize <mgleize@meta.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: isikhi <huseyin.isik000@gmail.com>
Signed-off-by: Jason Cheng <jasoncky96@gmail.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
Signed-off-by: Jannis Schönleber <joennlae@gmail.com>
Signed-off-by: rickyx <rickyx@anyscale.com>
Signed-off-by: Andy Lo <andy@mistral.ai>
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
Signed-off-by: maleksan85 <maleksan@amd.com>
Signed-off-by: Hongxia Yang <hongxyan@amd.com>
Signed-off-by: kevin <kevin@anyscale.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: xffxff <1247714429@qq.com>
Signed-off-by: wangerxiao <863579016@qq.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: zhenwei <zhenweiliu@habana.ai>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
Signed-off-by: Keyun Tong <tongkeyun@gmail.com>
Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Kyle Mistele <kyle@mistele.com>
Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>
Co-authored-by: Rafael Vasquez <rafvasq21@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Akshat Tripathi <Akshat.tripathi6568@gmail.com>
Co-authored-by: Oleg Mosalov <oleg@krai.ai>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Avshalom Manevich <12231371+avshalomman@users.noreply.github.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-neuralmagic@users.noreply.github.com>
Co-authored-by: Yangcheng Li <liyangcheng.lyc@alibaba-inc.com>
Co-authored-by: Siyuan Li <94890248+liaoyanqing666@users.noreply.github.com>
Co-authored-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Co-authored-by: Concurrensee <yida.wu@amd.com>
Co-authored-by: Chenguang Li <757486878@qq.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Alex Brooks <alex.brooks@ibm.com>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Shanshan Shen <467638484@qq.com>
Co-authored-by: elijah <30852919+e1ijah1@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Steve Luo <36296769+SunflowerAries@users.noreply.github.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: Alexei V. Ivanov <alivanov@banff-cyxtera-s65-4.amd.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: maang-h <55082429+maang-h@users.noreply.github.com>
Co-authored-by: YiSheng5 <yi.sheng@intel.com>
Co-authored-by: Zhonghua Deng <abatom@163.com>
Co-authored-by: Liangfu Chen <liangfc@amazon.com>
Co-authored-by: XiaobingZhang <xiaobingzhangupc@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Yuan <yuan.zhou@intel.com>
Co-authored-by: jiangjiadi <34134495+jiangjiadi@users.noreply.github.com>
Co-authored-by: jiadi.jjd <jiadi.jjd@antgroup.com>
Co-authored-by: sroy745 <142070531+sroy745@users.noreply.github.com>
Co-authored-by: Jie Fu (傅杰) <jiefu@tencent.com>
Co-authored-by: Divakar Verma <137818590+divakar-amd@users.noreply.github.com>
Co-authored-by: WangErXiao <863579016@qq.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Wallas Henrique <wallashss@users.noreply.github.com>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Yan Ma <yan.ma@intel.com>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Co-authored-by: Guspan Tanadi <36249910+guspan-tanadi@users.noreply.github.com>
Co-authored-by: Ye (Charlotte) Qi <ye.charlotte.qi@gmail.com>
Co-authored-by: yeq <yeq@devgpu004.lla3.facebook.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Charles Frye <cfrye59@gmail.com>
Co-authored-by: Joe Runde <Joseph.Runde@ibm.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
Co-authored-by: cennn <61925104+cennn@users.noreply.github.com>
Co-authored-by: Kuntai Du <kuntai@uchicago.edu>
Co-authored-by: minmin <rmm0811@gmail.com>
Co-authored-by: Ren MinMin <renmm6@chinaunicom.cn>
Co-authored-by: Travis Johnson <tsjohnso@us.ibm.com>
Co-authored-by: Fred Reiss <frreiss@us.ibm.com>
Co-authored-by: shaochangxu <85155497+shaochangxu@users.noreply.github.com>
Co-authored-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: sixgod <evethwillbeok@outlook.com>
Co-authored-by: Elfie Guo <164945471+elfiegg@users.noreply.github.com>
Co-authored-by: Rui Qiao <161574667+ruisearch42@users.noreply.github.com>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>
Co-authored-by: Keyun Tong <tongkeyun@gmail.com>
Co-authored-by: RunningLeon <maningsheng@sensetime.com>
Co-authored-by: kewang-xlnx <73578509+kewang-xlnx@users.noreply.github.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: tvirolai-amd <teemu.virolainen@amd.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Zhaoyi Li <36555117+Lzy17@users.noreply.github.com>
Co-authored-by: charlifu <charlifu@amd.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>
Co-authored-by: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Co-authored-by: yancong <32220263+ice-tong@users.noreply.github.com>
Co-authored-by: Michal Adamczyk <madamczyk@habana.ai>
Co-authored-by: gujing <925973396@qq.com>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: Işık <41375111+isikhi@users.noreply.github.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cheng Kuan Yong Jason <jasoncky96@gmail.com>
Co-authored-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Thomas Parnell <tpa@zurich.ibm.com>
Co-authored-by: Jannis Schönleber <joennlae@gmail.com>
Co-authored-by: Ricky Xu <xuchen727@hotmail.com>
Co-authored-by: Andy Lo <andylolu24@gmail.com>
Co-authored-by: Adrian Cole <64215+codefromthecrypt@users.noreply.github.com>
Co-authored-by: Jani Monoses <jani.monoses@gmail.com>
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
Co-authored-by: Aleksandr Malyshev <164964928+maleksan85@users.noreply.github.com>
Co-authored-by: maleksan85 <maleksan@amd.com>
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
Co-authored-by: zhou fan <1247714429@qq.com>
Co-authored-by: ilia-cher <30845429+ilia-cher@users.noreply.github.com>
Co-authored-by: liuzhenwei <zhenweiliu@habana.ai>
Co-authored-by: Lucas Wilkinson <LucasWilkinson@users.noreply.github.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
Co-authored-by: omer-dayan <omer@run.ai>
Co-authored-by: Mohit Deopujari <mdeopujari@habana.ai>
Co-authored-by: Jeremy Arnold <103538711+JArnoldAMD@users.noreply.github.com>
Co-authored-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Co-authored-by: Kyle Mistele <kyle@mistele.com>
Co-authored-by: Pooya Davoodi <pooya.davoodi@parasail.io>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Bowen Wang <abmfy@icloud.com>
Co-authored-by: Bowen Bao <bowenbao@amd.com>
Co-authored-by: arakowsk-amd <182798202+arakowsk-amd@users.noreply.github.com>
Co-authored-by: sanyalington <shomy.sanyal@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 5, 2025
* [Model] Initialize support for Deepseek-VL2 models (vllm-project#11578)

Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Hardware][CPU] Multi-LoRA implementation for the CPU backend (vllm-project#11100)

Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Oleg Mosalov <oleg@krai.ai>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>

* [Hardware][TPU] workaround fix for MoE on TPU (vllm-project#11764)

* [V1][Core][1/n] Logging and Metrics (vllm-project#11962)

Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>

* [Model] Support GGUF models newly added in `transformers` 4.46.0 (vllm-project#9685)

Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction (vllm-project#11973)

Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>

* [MISC] fix typo in kv transfer send recv test (vllm-project#11983)

* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. (vllm-project#11979)

* [CI][Spec Decode] fix: broken test for EAGLE model (vllm-project#11972)

Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>

* [Misc] Fix Deepseek V2 fp8 kv-scale remapping (vllm-project#11947)

Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>

* [Misc] Minor Changes about Worker (vllm-project#11555)

Signed-off-by: Chenguang Li <757486878@qq.com>

* [platform] add ray_device_key (vllm-project#11948)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* Fix Max Token ID for Qwen-VL-Chat (vllm-project#11980)

Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>

* [Kernel] unified_attention for Attention.forward (vllm-project#11967)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Doc][V1] Update model implementation guide for V1 support (vllm-project#11998)

Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>

* [Doc] Organise installation documentation into categories and tabs (vllm-project#11935)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [platform] add device_control env var (vllm-project#12009)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Platform] Move get_punica_wrapper() function to Platform (vllm-project#11516)

Signed-off-by: Shanshan Shen <467638484@qq.com>

* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function (vllm-project#11982)

Signed-off-by: elijah <f1renze.142857@gmail.com>

* [Doc] Fix build from source and installation link in README.md (vllm-project#12013)

Signed-off-by: Yikun <yikunkero@gmail.com>

* Using list

* [Bugfix] Fix deepseekv3 gate bias error (vllm-project#12002)

Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>

* Revert "[misc] improve memory profiling (vllm-project#11809)"

This reverts commit 889e662.

* Multi-lingual P3L (#356)

* Committing the *multilingual* P3L test.

* Created a *multi-lingual* P3L test.

* Making ruff happy.

* .

* Added a reference to the language-scripture Confluence table.

* Typo fixing.

* Harmonizing naming.

* Fixing comments in the header.

---------

Co-authored-by: Alexei V. Ivanov <alivanov@banff-cyxtera-s65-4.amd.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>

* Trying to make scales work with compilable attention

* [Docs] Add Sky Computing Lab to project intro (vllm-project#12019)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [HPU][Bugfix] set_forward_context and CI test execution (vllm-project#12014)

Signed-off-by: Konrad Zawora <kzawora@habana.ai>

* [Doc] Update Quantization Hardware Support Documentation (vllm-project#12025)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [HPU][misc] add comments for explanation (vllm-project#12034)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Bugfix] Fix various bugs in multi-modal processor (vllm-project#12031)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Kernel] Revert the API change of Attention.forward (vllm-project#12038)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Platform] Add output for Attention Backend (vllm-project#11981)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention (vllm-project#12040)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* Explain where the engine args go when using Docker (vllm-project#12041)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Docs lint

* [Doc]: Update the JSON Example of the `Engine Arguments` document (vllm-project#12045)

* [Misc] Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping (vllm-project#11924)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Kernel] Support MulAndSilu (vllm-project#11624)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
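
A sketch of the semantics this kernel likely fuses, assuming it mirrors the existing SiluAndMul with the operand halves swapped (an assumption from the name, not taken from the kernel source):

```python
# Reference (unfused) MulAndSilu, assuming out = x[..., :d] * silu(x[..., d:]).
import torch
import torch.nn.functional as F

def mul_and_silu(x: torch.Tensor) -> torch.Tensor:
    d = x.shape[-1] // 2  # input packs the two projection halves side by side
    return x[..., :d] * F.silu(x[..., d:])
```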

* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py (vllm-project#12046)

Signed-off-by: Konrad Zawora <kzawora@habana.ai>

* [Platform] move current_memory_usage() into platform (vllm-project#11369)

Signed-off-by: Shanshan Shen <467638484@qq.com>

* [V1][BugFix] Fix edge case in VLM scheduling (vllm-project#12065)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Misc] Add multistep chunked-prefill support for FlashInfer (vllm-project#10467)

* [core] Turn off GPU communication overlap for Ray executor (vllm-project#12051)

Signed-off-by: Rui Qiao <ruisearch42@gmail.com>

* [core] platform agnostic executor via collective_rpc (vllm-project#11256)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Doc] Update examples to remove SparseAutoModelForCausalLM (vllm-project#12062)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager (vllm-project#12003)

* Fix: cases with empty sparsity config (vllm-project#12057)

Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>

* Type-fix: make execute_model output type optional (vllm-project#12020)

* [Platform] Do not raise error if _Backend is not found (vllm-project#12023)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>

* [Model]: Support internlm3 (vllm-project#12037)

* Misc: allow using a proxy in `HTTPConnection` (vllm-project#12042)

Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>

* [Misc][Quark] Upstream Quark format to VLLM (vllm-project#10765)

Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Co-authored-by: kewang2 <kewang2@amd.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>

* [Doc]: Update `OpenAI-Compatible Server` documents (vllm-project#12082)

* [Bugfix] use right truncation for non-generative tasks (vllm-project#12050)

Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>

* [V1][Core] Autotune encoder cache budget (vllm-project#11895)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [Bugfix] Fix _get_lora_device for HQQ marlin (vllm-project#12090)

Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Co-authored-by: Varun Sundar Rabindranath <varun@neuralmagic.com>

* Allow hip sources to be directly included when compiling for rocm. (vllm-project#12087)

* [Core] Default to using per_token quantization for fp8 when cutlass is supported. (vllm-project#8651)

Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: mgoin <michael@neuralmagic.com>
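
For readers unfamiliar with the term: per-token quantization assigns each token row its own scale instead of one scale per tensor, which tracks varying activation magnitudes more closely. A minimal sketch of the scale computation, assuming the e4m3 range; the cutlass path itself is fused:

```python
# Per-token (per-row) dynamic FP8 scale sketch; FP8_MAX assumes float8_e4m3fn.
import torch

FP8_MAX = 448.0

def per_token_scales(x: torch.Tensor) -> torch.Tensor:
    return x.abs().amax(dim=-1, keepdim=True) / FP8_MAX  # one scale per token row
```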

* [Doc] Add documentation for specifying model architecture (vllm-project#12105)

* Various cosmetic/comment fixes (vllm-project#12089)

Signed-off-by: mgoin <michael@neuralmagic.com>

* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 (vllm-project#12067)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Support torchrun and SPMD-style offline inference (vllm-project#12071)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [core] LLM.collective_rpc interface and RLHF example (vllm-project#12084)

Signed-off-by: youkaichao <youkaichao@gmail.com>
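
Roughly how the new interface reads in user code; the worker method name here is hypothetical, and only the call shape is implied by the commit subject:

```python
# Hypothetical use of LLM.collective_rpc: run one method on every worker and
# gather the per-worker results, e.g. to push updated weights during RLHF.
from vllm import LLM

llm = LLM(model="facebook/opt-125m")
results = llm.collective_rpc("report_device_id")  # hypothetical worker method
print(results)  # one entry per worker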

* [Bugfix] Fix max image feature size for Llava-one-vision (vllm-project#12104)

Signed-off-by: Roger Wang <ywang@roblox.com>

* Enable user marker for vllm profiling (#357)

* Enable user marker for vllm profiling

---------

Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>

* [misc] Add LoRA kernel micro benchmarks (vllm-project#11579)

* [Model] Add support for deepseek-vl2-tiny model (vllm-project#12068)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Deepseek V3 support (#364)

* Changing the hard-coded datatype to see if it's enough for the model to work

* Picking the upstream moe kernel version

* Make the upstream fix for v3 also work for rocm v2

* Conditional fnuz dtype

* Requantizing from fn to fnuz

* Requantizing moe as well

* Actually requantizing moe weights

* Conditional requantization and assert on padding in block quant

* Format

---------

Co-authored-by: charlifu <charlifu@amd.com>
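
The fn-to-fnuz requantization above is worth a sketch. e4m3fnuz uses exponent bias 8 where e4m3fn uses 7, so the same bit pattern decodes to half the value; doubling the scale therefore preserves the dequantized weights. Illustrative only, and the real code must also handle the differing NaN encodings:

```python
import torch

def fn_to_fnuz(weight: torch.Tensor, weight_scale: torch.Tensor):
    assert weight.dtype == torch.float8_e4m3fn
    # Reinterpret the raw bytes as fnuz (values halve), then double the scale.
    return weight.view(torch.float8_e4m3fnuz), weight_scale * 2.0
```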

* [Bugfix] Set enforce_eager automatically for mllama (vllm-project#12127)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix] Fix a path bug in disaggregated prefill example script. (vllm-project#12121)

Signed-off-by: Kuntai Du <kuntai@uchicago.edu>

* [CI] add genai-perf benchmark in nightly benchmark (vllm-project#10704)

Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>

* [Doc] Add instructions on using Podman when SELinux is active (vllm-project#12136)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [Bugfix] Fix issues in CPU build Dockerfile (vllm-project#12135)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [BugFix] add more `is not None` check in VllmConfig.__post_init__ (vllm-project#12138)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Misc] Add deepseek_vl2 chat template (vllm-project#12143)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [ROCm][MoE] moe tuning support for rocm (vllm-project#12049)

Signed-off-by: Divakar Verma <divakar.verma@amd.com>

* [V1] Move more control of kv cache initialization from model_executor to EngineCore (vllm-project#11960)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Cody Yu <hao.yu.cody@gmail.com>

* [Misc][LoRA] Improve the readability of LoRA error messages (vllm-project#12102)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [CI/Build][CPU][Bugfix] Fix CPU CI (vllm-project#12150)

Signed-off-by: jiang1.li <jiang1.li@intel.com>

* [core] allow callable in collective_rpc (vllm-project#12151)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Bugfix] Fix score api for missing max_model_len validation (vllm-project#12119)

Signed-off-by: Wallas Santos <wallashss@ibm.com>

* [Bugfix] Mistral tokenizer encode accept list of str (vllm-project#12149)

Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>

* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant (vllm-project#12134)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [torch.compile] disable logging when cache is disabled (vllm-project#12043)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [misc] fix cross-node TP (vllm-project#12166)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [AMD][CI/Build][Bugfix] use pytorch stale wheel (vllm-project#12172)

Signed-off-by: hongxyan <hongxyan@amd.com>

* [core] further polish memory profiling (vllm-project#12126)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Docs] Fix broken link in SECURITY.md (vllm-project#12175)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Model] Port deepseek-vl2 processor, remove dependency (vllm-project#12169)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [core] clean up executor class hierarchy between v1 and v0 (vllm-project#12171)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Support register quantization method out-of-tree (vllm-project#11969)

* [V1] Collect env var for usage stats (vllm-project#12115)

* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu (vllm-project#12152)

Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>

* [Bugfix] Fix multi-modal processors for transformers 4.48 (vllm-project#12187)

* [torch.compile] store inductor compiled Python file (vllm-project#12182)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* benchmark_serving: support --served-model-name param (vllm-project#12109)

Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>

* [Misc] Add BNB support to GLM4-V model (vllm-project#12184)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [V1] Add V1 support of Qwen2-VL (vllm-project#12128)

Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model] Support for fairseq2 Llama (vllm-project#11442)

Signed-off-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>

* [Bugfix] Fix num_heads value for simple connector when tp enabled (vllm-project#12074)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>

* [torch.compile] fix sym_tensor_indices (vllm-project#12191)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* Move linting to `pre-commit` (vllm-project#11975)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [DOC] Fix typo in docstring and assert message (vllm-project#12194)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [DOC] Add missing docstring in LLMEngine.add_request() (vllm-project#12195)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [Bugfix] Fix incorrect types in LayerwiseProfileResults (vllm-project#12196)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [Model] Add Qwen2 PRM model support (vllm-project#12202)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Core] Interface for accessing model from `VllmRunner` (vllm-project#10353)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [misc] add placeholder format.sh (vllm-project#12206)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [CI/Build] Remove dummy CI steps (vllm-project#12208)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [CI/Build] Make pre-commit faster (vllm-project#12212)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model] Upgrade Aria to transformers 4.48 (vllm-project#12203)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [misc] print a message to suggest how to bypass commit hooks (vllm-project#12217)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [core][bugfix] configure env var during import vllm (vllm-project#12209)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [V1] Remove `_get_cache_block_size` (vllm-project#12214)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Misc] Pass `attention` to impl backend (vllm-project#12218)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Bugfix] Fix `HfExampleModels.find_hf_info` (vllm-project#12223)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [CI] Pass local python version explicitly to pre-commit mypy.sh (vllm-project#12224)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* Using ROCm6.3.1 base docker and building hipblas-common (#366)

* [Misc] Update CODEOWNERS (vllm-project#12229)

* fix: update platform detection for M-series ARM-based MacBook processors (vllm-project#12227)

Signed-off-by: isikhi <huseyin.isik000@gmail.com>

* [misc] add cuda runtime version to usage data (vllm-project#12190)

Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Roger Wang <ywang@roblox.com>

* [bugfix] catch xgrammar unsupported array constraints (vllm-project#12210)

Signed-off-by: Jason Cheng <jasoncky96@gmail.com>

* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) (vllm-project#12222)

Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Co-authored-by: Michael Goin <mgoin@redhat.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* Add quantization and guided decoding CODEOWNERS (vllm-project#12228)

Signed-off-by: mgoin <michael@neuralmagic.com>

* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork (vllm-project#11777)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 (vllm-project#12230)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [ci/build] disable failed and flaky tests (vllm-project#12240)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` (vllm-project#12244)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration  (vllm-project#12237)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Misc] Remove redundant TypeVar from base model (vllm-project#12248)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix mm_limits access for merged multi-modal processor (vllm-project#12252)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [torch.compile] transparent compilation with more logging (vllm-project#12246)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [V1][Bugfix] Fix data item ordering in mixed-modality inference (vllm-project#12259)

Signed-off-by: Roger Wang <ywang@roblox.com>

* Remove pytorch comments for outlines + compressed-tensors (vllm-project#12260)

Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>

* [Platform] improve platforms getattr (vllm-project#12264)

Signed-off-by: Mengqing Cao <cmq0113@163.com>

* [ci/build] update nightly torch for gh200 test (vllm-project#12270)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Bugfix] fix race condition that leads to wrong order of tokens returned (vllm-project#10802)

Signed-off-by: Jannis Schönleber <joennlae@gmail.com>

* [Kernel] fix moe_align_block_size error condition (vllm-project#12239)

Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>

* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types  (vllm-project#10907)

Signed-off-by: rickyx <rickyx@anyscale.com>

* [Bugfix] Multi-sequence broken (vllm-project#11898)

Signed-off-by: Andy Lo <andy@mistral.ai>

* [Misc] Remove experimental dep from tracing.py (vllm-project#12007)

Signed-off-by: Adrian Cole <adrian.cole@elastic.co>

* [Misc] Set default backend to SDPA for get_vit_attn_backend (vllm-project#12235)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Core] Free CPU pinned memory on environment cleanup (vllm-project#10477)

* Update pre-commit.yml (#374)

* Update pre-commit.yml

* Reapplying missing format

* New codespell exclude location

---------

Co-authored-by: Kevin H. Luu <kevin@anyscale.com>

* [bugfix] moe tuning. rm is_navi() (vllm-project#12273)

Signed-off-by: Divakar Verma <divakar.verma@amd.com>

* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes (vllm-project#12277)

Signed-off-by: maleksan85 <maleksan@amd.com>
Co-authored-by: maleksan85 <maleksan@amd.com>

* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose (vllm-project#12281)

Signed-off-by: Hongxia Yang <hongxyan@amd.com>

* [VLM] Simplify post-processing of replacement info (vllm-project#12269)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [ci/lint] Add back default arg for pre-commit (vllm-project#12279)

Signed-off-by: kevin <kevin@anyscale.com>

* [CI] add docker volume prune to neuron CI (vllm-project#12291)

Signed-off-by: Liangfu Chen <liangfc@amazon.com>

* [Ci/Build] Fix mypy errors on main (vllm-project#12296)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` (vllm-project#12288)

Signed-off-by: Nick Hill <nhill@redhat.com>
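
For context, the quantity being refined, in its usual definition (the first token is accounted for by TTFT, so only the remaining decode tokens share the residual latency):

```python
# Standard TPOT (time per output token) definition, as a reference point.
def tpot(latency_s: float, ttft_s: float, num_output_tokens: int) -> float:
    assert num_output_tokens > 1
    return (latency_s - ttft_s) / (num_output_tokens - 1)
```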

* [core] separate builder init and builder prepare for each batch (vllm-project#12253)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Build] update requirements of no-device (vllm-project#12299)

Signed-off-by: Mengqing Cao <cmq0113@163.com>

* [Core] Support fully transparent sleep mode (vllm-project#11743)

Signed-off-by: youkaichao <youkaichao@gmail.com>
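
A sketch of the intended usage, with names taken from the PR title and hedged accordingly:

```python
# Assumed sleep-mode API: free GPU memory between serving phases, then restore.
from vllm import LLM

llm = LLM(model="facebook/opt-125m", enable_sleep_mode=True)
llm.sleep(level=1)  # release KV cache and offload weights to free GPU memory
llm.wake_up()       # restore state before generating again
```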

* [VLM] Avoid unnecessary tokenization (vllm-project#12310)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model][Bugfix]: correct Aria model output (vllm-project#12309)

Signed-off-by: xffxff <1247714429@qq.com>

* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 (vllm-project#12313)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [Doc] Add docs for prompt replacement (vllm-project#12318)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Fix the error in the tip for the --lora-modules parameter (vllm-project#12319)

Signed-off-by: wangerxiao <863579016@qq.com>

* [Misc] Improve the readability of BNB error messages (vllm-project#12320)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init (#367)

* switching detokenize flag to be False

* detokenize = False for benchmarks

* restoring default in main vllm code for detokenize

* removing extra spaces

* moving detokenize to flag

* adding support for token ids

---------

Co-authored-by: maleksan85 <maleksan@amd.com>
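
The per-request switch these commits build on already exists in SamplingParams; a minimal sketch of the opt-out:

```python
# Skip detokenization for a request: outputs carry token IDs, text stays empty.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(max_tokens=16, detokenize=False)
outputs = llm.generate(["Hello"], params)
print(outputs[0].outputs[0].token_ids)
```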

* [Bugfix] Fix HPU multiprocessing executor (vllm-project#12167)

Signed-off-by: Konrad Zawora <kzawora@habana.ai>

* [Core] Support `reset_prefix_cache` (vllm-project#12284)

* [Frontend][V1] Online serving performance improvements (vllm-project#12287)

* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD (vllm-project#12282)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* FP8 FA fixes (#381)

* FP8 FA fixes

Summary:
Add missing clamp and fix reciprocal scale computation.
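
A minimal sketch of the pattern this summary describes, not the kernel code itself; the e4m3 range is an assumption:

```python
# Clamp to the FP8-representable range before casting, and derive the
# reciprocal scale from the true scale so dequantization can multiply.
import torch

FP8_MAX = 448.0  # float8_e4m3fn max, assumed for illustration

def quantize_fp8(x: torch.Tensor):
    scale = x.abs().amax() / FP8_MAX
    inv_scale = 1.0 / scale
    x_q = (x * inv_scale).clamp(-FP8_MAX, FP8_MAX)  # the missing clamp
    return x_q.to(torch.float8_e4m3fn), scale
```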

* linter

* Returning the use of the proper stream in allreduce (#382)

* [Bugfix] Fixing AMD LoRA CI test. (vllm-project#12329)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Docs] Update FP8 KV Cache documentation (vllm-project#12238)

Signed-off-by: mgoin <michael@neuralmagic.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
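
For reference, enabling the feature the docs describe takes one engine argument; "fp8" selects the default e4m3 variant:

```python
# Minimal FP8 KV cache setup, matching the documented engine argument.
from vllm import LLM

llm = LLM(model="facebook/opt-125m", kv_cache_dtype="fp8")
```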

* [Docs] Document vulnerability disclosure process (vllm-project#12326)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [V1] Add `uncache_blocks` (vllm-project#12333)

* [doc] explain common errors around torch.compile (vllm-project#12340)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update (vllm-project#12338)

Signed-off-by: zhenwei <zhenweiliu@habana.ai>

* [Bugfix] Fix k_proj's bias for whisper self attention (vllm-project#12342)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Kernel] Flash Attention 3 Support (vllm-project#12093)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Doc] Troubleshooting errors during model inspection (vllm-project#12351)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1] Simplify M-RoPE (vllm-project#12352)

Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: imkero <kerorek@outlook.com>

* [Bugfix] Fix broken internvl2 inference with v1 (vllm-project#12360)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [core] add wake_up doc and some sanity check (vllm-project#12361)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [torch.compile] decouple compile sizes and cudagraph sizes (vllm-project#12243)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [FP8][Kernel] Dynamic kv cache scaling factors computation (vllm-project#11906)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Micah Williamson <micah.williamson@amd.com>

* [TPU] Update TPU CI to use torchxla nightly on 20250122 (vllm-project#12334)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Docs] Document Phi-4 support (vllm-project#12362)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order  (vllm-project#11528)

Signed-off-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>

* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script (vllm-project#12357)

Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>

* [Docs] Add meetup slides (vllm-project#12345)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Using pytorch commit past the point when rowwise PR (pytorch/pytorch#144432) was merged (#384)

* [Docs] Update spec decode + structured output in compat matrix (vllm-project#12373)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [V1][Frontend] Coalesce bunched `RequestOutput`s (vllm-project#12298)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>

* Set weights_only=True when using torch.load() (vllm-project#12366)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
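
What the change amounts to at each call site; `weights_only=True` restricts unpickling to tensors and primitive containers, so a malicious checkpoint cannot run arbitrary code:

```python
import torch

# Safe-by-default checkpoint loading ("checkpoint.pt" is a placeholder path).
state_dict = torch.load("checkpoint.pt", weights_only=True)
```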

* [Bugfix] Path join when building local path for S3 clone (vllm-project#12353)

Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>

* Update compressed-tensors version (vllm-project#12367)

* [V1] Increase default batch size for H100/H200 (vllm-project#12369)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [perf] fix perf regression from vllm-project#12253 (vllm-project#12380)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Use VisionArena Dataset for VLM Benchmarking (vllm-project#12389)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [ci/build] fix wheel size check (vllm-project#12396)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Hardware][Gaudi][Doc] Add missing step in setup instructions (vllm-project#12382)

* [ci/build] sync default value for wheel size (vllm-project#12398)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Enable proxy support in benchmark script (vllm-project#12356)

Signed-off-by: Junichi Sato <junichi.sato@sbintuitions.co.jp>

* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build (vllm-project#12375)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* Applying scales rename to fp8 config (#387)

* [Misc] Remove deprecated code (vllm-project#12383)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). (vllm-project#12405)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* Dev-docker Documentation Updates (#378)

* Dev-docker Documentation Updates

Minor updates to several sections, with links to other documents where appropriate.

* Fix formatting of GEMM filename

* README cleanup

- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording

* Expanded sample commands for Latency and Throughput

* Fix markdown links

* Fix pre-commit errors

* Updates from review

Initial updates to incorporate feedback from a review session held with @t-parry

* Update script args to match current recommendations

* Remove recommended max-num-seqs values for now

---------

Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>

* [Bugfix][Kernel] Fix moe align block issue for mixtral (vllm-project#12413)

* [Bugfix] Fix BLIP-2 processing (vllm-project#12412)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 (vllm-project#12408)

Signed-off-by: Divakar Verma <divakar.verma@amd.com>

* [Misc] Add FA2 support to ViT MHA layer (vllm-project#12355)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [TPU][CI] Update torchxla version in requirement-tpu.txt (vllm-project#12422)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Misc][Bugfix] FA3 support to ViT MHA layer (vllm-project#12435)

Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>

* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync (vllm-project#12094)

Signed-off-by: Keyun Tong <tongkeyun@gmail.com>

* [V1][Bugfix] Fix assertion when mm hashing is turned off (vllm-project#12439)

Signed-off-by: Roger Wang <ywang@roblox.com>

* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 (vllm-project#12445)

* [Frontend] generation_config.json for maximum tokens (vllm-project#12242)

Signed-off-by: Matthew Hendrey <matthew.hendrey@gmail.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: shangmingc <caishangming@linux.alibaba.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 (vllm-project#12417)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: mgoin <michael@neuralmagic.com>

* [Bugfix/CI] Fix broken kernels/test_mha.py (vllm-project#12450)

* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 (vllm-project#12434)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Build/CI] Fix libcuda.so linkage (vllm-project#12424)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [Frontend] Rerank API (Jina- and Cohere-compatible API)  (vllm-project#12376)

Signed-off-by: Kyle Mistele <kyle@mistele.com>

* [DOC] Add link to vLLM blog (vllm-project#12460)

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>

* [V1] Avoid list creation in input preparation (vllm-project#12457)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Frontend] Support scores endpoint in run_batch (vllm-project#12430)

Signed-off-by: Pooya Davoodi <pooya.davoodi@parasail.io>

* [Bugfix] Fix Granite 3.0 MoE model loading (vllm-project#12446)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata (vllm-project#12464)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [V1][Minor] Minor optimizations for update_from_output (vllm-project#12454)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix] Fix gpt2 GGUF inference (vllm-project#12467)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Build] Only build 9.0a for scaled_mm and sparse kernels (vllm-project#12339)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [V1][Metrics] Add initial Prometheus logger (vllm-project#12416)

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* [V1][CI/Test] Do basic test for top-p & top-k sampling (vllm-project#12469)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [FlashInfer] Upgrade to 0.2.0 (vllm-project#11194)

Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>

* Support FP8 FA from Quark format (#388)

* Support FP8 FA from Quark format

* Support FP8 FA from Quark format

* nit: update comment

* Direct call on ROCm

* 20250127 docs update (#392)

* updating code blocks

* typo

* updated manifest

* Including feedback

* whitespace

* Deepseek instructions

* hyperlink fix

* hyperlink fix

* updating what is new

* cpx update

* typo

* whitespace

* whitespace

* Faster Custom Paged Attention kernels (#372)

* integrate new cpa kernel, update tests and benchmark

* added comments to mfma4 kernel

* further comments for mfma16 kernel

* clang-format

* Lint

* add flag for logits rtz conversion and disable by default

* lint

* [Bugfix]: Fix paged attention unit tests of #372 (#389)

* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and  `csrc/rocm/attention.cu`.

* improve code documentation.

* lint

---------

Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>

---------

Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Co-authored-by: Joe Shajrawi <17753158+shajrawi@users.noreply.github.com>
Co-authored-by: TJian <tunjian1996@gmail.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>

* Using a more precise profiling on ROCm to properly account for weights padding (#394)

* Update Dockerfile.rocm

* [Bugfix]: include the env variables required for running FastSyncLLM

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* fix pre-commit lint

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Bugfix] included missing environment variable

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

---------

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Akshat Tripathi <akshat@krai.ai>
Signed-off-by: Oleg Mosalov <oleg@krai.ai>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Signed-off-by: Sungjae Lee <33976427+llsj14@users.noreply.github.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: Chenguang Li <757486878@qq.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Shanshan Shen <467638484@qq.com>
Signed-off-by: elijah <f1renze.142857@gmail.com>
Signed-off-by: Yikun <yikunkero@gmail.com>
Signed-off-by: mgoin <michael@neuralmagic.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Konrad Zawora <kzawora@habana.ai>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: yisheng <yi.sheng@intel.com>
Signed-off-by: Abatom <abzhonghua@gmail.com>
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
Signed-off-by: Sourashis Roy <sroy@roblox.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Wallas Santos <wallashss@ibm.com>
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: yan ma <yan.ma@intel.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: Maxime Fournioux <55544262+mfournioux@users.noreply.github.com>
Signed-off-by: Ye Qi <yeq@meta.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
Signed-off-by: Kuntai Du <kuntai@uchicago.edu>
Signed-off-by: Ren MinMin <renmm6@chinaunicom.cn>
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Signed-off-by: Fred Reiss <frreiss@us.ibm.com>
Signed-off-by: shaochangxu.scx <shaochangxu.scx@antgroup.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: Rahul Tuli <rahul@neuralmagic.com>
Signed-off-by: kewang-xlnx <kewang@xilinx.com>
Signed-off-by: kewang2 <kewang2@amd.com>
Signed-off-by: Varun Sundar Rabindranath <varun@neuralmagic.com>
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
Signed-off-by: Divakar Verma <divakar.verma@amd.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: hongxyan <hongxyan@amd.com>
Signed-off-by: Michal Adamczyk <madamczyk@habana.ai>
Signed-off-by: zibai <zibai.gj@alibaba-inc.com>
Signed-off-by: Martin Gleize <mgleize@meta.com>
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: isikhi <huseyin.isik000@gmail.com>
Signed-off-by: Jason Cheng <jasoncky96@gmail.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
GWS0428 pushed a commit to GWS0428/VARserve that referenced this pull request Feb 12, 2025
Signed-off-by: Roger Wang <ywang@roblox.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Co-authored-by: imkero <kerorek@outlook.com>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>