
Conversation

@hl475 (Contributor) commented Oct 14, 2025

Purpose

On ROCm/HIP, the base RoPE path expects (positions, query, key) and, in its fallback branch, calls forward_cuda(positions, query, key). The Llama4-Vision RoPE implements forward_cuda(query, key) with only two arguments, so the HIP path can invoke it with the wrong argument count or order.

This PR overrides forward_hip(query, key) in Llama4VisionRotaryEmbedding and delegates to the existing native implementation. This keeps the current call site self.rotary_emb(q, k) working, avoids having to materialize positions, and prevents the base class from falling back to the 3-arg CUDA path.
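
A minimal sketch of the shape of the fix, based only on the description above (not the exact diff). `RotaryEmbeddingBase` is a stand-in name for vLLM's actual base class, and the rotary math is elided; only the dispatch behavior is shown:

```python
import torch

# Stand-in for vLLM's base RoPE class, whose HIP path falls back to the
# 3-arg CUDA implementation as described above.
class RotaryEmbeddingBase(torch.nn.Module):
    def forward_hip(self, positions, query, key):
        # Base-class fallback: 3-arg (positions, query, key) CUDA path.
        return self.forward_cuda(positions, query, key)


class Llama4VisionRotaryEmbedding(RotaryEmbeddingBase):
    def forward_native(self, query, key):
        # ...existing 2-arg native rotary application (elided)...
        return query, key

    def forward_hip(self, query, key):
        # Override: route straight to the 2-arg native path so the base
        # class's forward_cuda(positions, query, key) fallback is never
        # reached on ROCm.
        return self.forward_native(query, key)
```

Overriding the HIP hook at the subclass level keeps the 2-arg call site untouched, rather than threading a dummy positions tensor through the vision path.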

Test Plan

Run on ROCm and exercise the Llama4-Vision RoPE path; an illustrative smoke test is sketched below.
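
A hypothetical smoke test on a ROCm machine; the model id and prompt are assumptions for illustration, not taken from the PR:

```python
# Before this fix, loading a Llama-4 vision model on ROCm could hit the
# RoPE argument mismatch described above.
from vllm import LLM

llm = LLM(model="meta-llama/Llama-4-Scout-17B-16E-Instruct")
out = llm.generate(["Hello"])
print(out[0].outputs[0].text)
```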

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Huamin Li <3ericli@gmail.com>
@mergify mergify bot added the llama Related to Llama models label Oct 14, 2025
@hl475 hl475 marked this pull request as ready for review October 14, 2025 09:39
@mgoin (Member) left a comment
Looks reasonable to me, thanks

@mgoin mgoin added bug Something isn't working ready ONLY add when PR is ready to merge/full CI is needed amd labels Oct 14, 2025
@mgoin mgoin enabled auto-merge (squash) October 14, 2025 16:25
@vllm-bot vllm-bot merged commit 87efc68 into vllm-project:main Oct 14, 2025
50 of 53 checks passed
Jonahcb pushed a commit to Jonahcb/vllm that referenced this pull request Oct 15, 2025
…tions, q, k) mismatch (vllm-project#26790)

Signed-off-by: Huamin Li <3ericli@gmail.com>
Signed-off-by: Jonah Bernard <jb2528@cornell.edu>
bbartels pushed a commit to bbartels/vllm that referenced this pull request Oct 16, 2025
…tions, q, k) mismatch (vllm-project#26790)

Signed-off-by: Huamin Li <3ericli@gmail.com>
Signed-off-by: bbartels <benjamin@bartels.dev>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
…tions, q, k) mismatch (vllm-project#26790)

Signed-off-by: Huamin Li <3ericli@gmail.com>
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request Oct 24, 2025
…tions, q, k) mismatch (vllm-project#26790)

Signed-off-by: Huamin Li <3ericli@gmail.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025
…tions, q, k) mismatch (vllm-project#26790)

Signed-off-by: Huamin Li <3ericli@gmail.com>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
0xrushi pushed a commit to 0xrushi/vllm that referenced this pull request Oct 26, 2025
…tions, q, k) mismatch (vllm-project#26790)

Signed-off-by: Huamin Li <3ericli@gmail.com>
Signed-off-by: 0xrushi <6279035+0xrushi@users.noreply.github.com>