
Conversation

@Isotr0py (Member) commented Nov 4, 2025

Purpose

Fix encoder-only models (e.g. papluca/xlm-roberta-base-language-detection) failing to start with the Transformers backend. Engine initialization crashed while collecting the KV cache specs, because encoder-only attention layers were instantiated as plain decoder Attention and tripped the assertion shown in the traceback below:

(EngineCore_DP0 pid=5260) INFO 11-04 03:34:52 [base.py:120] Using Transformers backend.
(EngineCore_DP0 pid=5260) INFO 11-04 03:34:52 [cuda.py:414] Using FlexAttention backend.
(EngineCore_DP0 pid=5260) [2025-11-04 03:34:54] INFO _client.py:1025: HTTP Request: GET https://huggingface.co/api/models/papluca/xlm-roberta-base-language-detection "HTTP/1.1 200 OK"
(EngineCore_DP0 pid=5260) [2025-11-04 03:34:54] INFO _client.py:1025: HTTP Request: GET https://huggingface.co/api/models/papluca/xlm-roberta-base-language-detection/tree/main?recursive=false&expand=false "HTTP/1.1 200 OK"
(EngineCore_DP0 pid=5260) [2025-11-04 03:34:54] INFO _client.py:1025: HTTP Request: GET https://huggingface.co/api/models/papluca/xlm-roberta-base-language-detection/revision/main "HTTP/1.1 200 OK"
(EngineCore_DP0 pid=5260) [2025-11-04 03:34:55] INFO _client.py:1025: HTTP Request: HEAD https://huggingface.co/papluca/xlm-roberta-base-language-detection/resolve/main/model.safetensors.index.json "HTTP/1.1 404 Not Found"
(EngineCore_DP0 pid=5260) INFO 11-04 03:34:55 [weight_utils.py:480] No model.safetensors.index.json found in remote.
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  1.94it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  1.93it/s]
(EngineCore_DP0 pid=5260) 
(EngineCore_DP0 pid=5260) INFO 11-04 03:34:55 [default_loader.py:314] Loading weights took 0.65 seconds
(EngineCore_DP0 pid=5260) INFO 11-04 03:34:56 [gpu_model_runner.py:2997] Model loading took 0.5192 GiB and 3.491527 seconds
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843] EngineCore failed to start.
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843] Traceback (most recent call last):
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/engine/core.py", line 834, in run_engine_core
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/engine/core.py", line 602, in __init__
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     super().__init__(
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/engine/core.py", line 109, in __init__
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/engine/core.py", line 223, in _initialize_kv_caches
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     kv_cache_specs = self.model_executor.get_kv_cache_specs()
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/executor/abstract.py", line 129, in get_kv_cache_specs
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     return self.collective_rpc("get_kv_cache_spec")
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/executor/uniproc_executor.py", line 73, in collective_rpc
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     return [run_method(self.driver_worker, method, args, kwargs)]
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/serial_utils.py", line 459, in run_method
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     return func(*args, **kwargs)
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/worker/gpu_worker.py", line 373, in get_kv_cache_spec
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     return self.model_runner.get_kv_cache_spec()
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/v1/worker/gpu_model_runner.py", line 4746, in get_kv_cache_spec
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     if spec := attn_module.get_kv_cache_spec(self.vllm_config):
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]   File "/kaggle/working/vllm/vllm/attention/layer.py", line 462, in get_kv_cache_spec
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]     assert self.attn_type == AttentionType.DECODER
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=5260) ERROR 11-04 03:34:56 [core.py:843] AssertionError
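
The assertion comes from Attention.get_kv_cache_spec (vllm/attention/layer.py:462 in the traceback), which assumes every layer reaching it is a decoder layer. Below is a minimal sketch of the failing path, with vLLM's classes reduced to the essentials; the returned dict is a stand-in, not vLLM's real KV cache spec type:

from enum import Enum

class AttentionType(str, Enum):
    DECODER = "decoder"
    ENCODER_ONLY = "encoder_only"

class Attention:
    def __init__(self, attn_type: AttentionType = AttentionType.DECODER):
        self.attn_type = attn_type

    def get_kv_cache_spec(self, vllm_config):
        # Encoder layers built as plain Attention carry
        # attn_type == ENCODER_ONLY, so this assert fires during
        # _initialize_kv_caches, exactly as in the traceback above.
        assert self.attn_type == AttentionType.DECODER
        return {"num_kv_heads": 1}  # stand-in for the real spec

encoder_layer = Attention(attn_type=AttentionType.ENCODER_ONLY)
encoder_layer.get_kv_cache_spec(vllm_config=None)  # raises AssertionError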

Test Plan

pytest -s -v tests/models/test_initialization.py -k TransformersForSequenceClassification

Test Result

Test should pass with nightly Transformers now
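
As an end-to-end check, here is a minimal offline reproduction sketch (assuming vLLM's LLM classification API; the input string is illustrative):

from vllm import LLM

# Before this fix, loading the encoder-only classifier from the log above
# via the Transformers backend crashed during engine startup with the
# AssertionError shown in the traceback.
llm = LLM(
    model="papluca/xlm-roberta-base-language-detection",
    model_impl="transformers",
)
outputs = llm.classify(["vLLM is a fast inference engine."])
print(outputs[0].outputs.probs)  # per-language class probabilities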



Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
@gemini-code-assist bot left a comment:
Code Review

This pull request addresses a bug that prevented encoder-only models from working with the transformers backend. The issue, as indicated by the traceback, was an assertion error during the KV cache initialization, where the attention type was expected to be DECODER. The fix involves conditionally selecting the EncoderOnlyAttention class for encoder layers, which correctly signals that no KV cache is needed. The change is precise, correct, and effectively resolves the reported bug.
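
For readers unfamiliar with the mechanism: EncoderOnlyAttention subclasses Attention and opts its layers out of KV cache allocation. Continuing the simplified sketch from the traceback section above (the override body here is an assumption; only the "no spec needed" behavior is taken from this thread):

class EncoderOnlyAttention(Attention):
    def __init__(self):
        super().__init__(attn_type=AttentionType.ENCODER_ONLY)

    def get_kv_cache_spec(self, vllm_config):
        # Encoder-only layers attend within a single forward pass and keep
        # no KV history, so no cache spec is needed. Returning a falsy
        # value makes the walrus check in gpu_model_runner.py:4746
        # ("if spec := attn_module.get_kv_cache_spec(...)") skip the layer.
        return None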

@heheda12345 heheda12345 enabled auto-merge (squash) November 4, 2025 05:39
@github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Nov 4, 2025
Review thread on the new hunk in the Transformers backend model code (diff context; the first two added lines are reconstructed here from the review summary above, which names EncoderOnlyAttention):

attn_cls = (
    EncoderOnlyAttention
    if attn_type == AttentionType.ENCODER_ONLY
    else Attention
)
attention_instances[i] = attn_cls(

A Member commented:

Why does passing attn_type not work? Are the two not equivalent?

A Member replied with the surrounding diff context, where the direct instantiation is replaced by the conditional:

 per_layer_sliding_window = self.config.sliding_window

-attention_instances[i] = Attention(
+attn_cls = (
@NickLucche (Collaborator) commented Nov 4, 2025:
@heheda12345 do you want to handle it inside the Attention class init to signal deprecation with a warning?

@heheda12345 (Collaborator) commented Nov 4, 2025:
We need to handle it like this for now. For the deprecation warning, just add one line of warning in the Attention class? (Not necessary in this PR.)
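
For reference, the suggested one-line deprecation warning could look like this (a sketch, not part of this PR; the wording is assumed, and AttentionType is reused from the traceback sketch above):

import warnings

class Attention:
    def __init__(self, attn_type: AttentionType = AttentionType.DECODER):
        if attn_type != AttentionType.DECODER:
            # Suggested follow-up: steer callers toward the dedicated
            # subclasses (e.g. EncoderOnlyAttention) instead of attn_type.
            warnings.warn(
                "Passing attn_type to Attention is deprecated; use a "
                "dedicated subclass such as EncoderOnlyAttention.",
                DeprecationWarning,
                stacklevel=2,
            )
        self.attn_type = attn_type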

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
@Isotr0py Isotr0py requested a review from ywang96 as a code owner November 4, 2025 16:49
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
@vllm-bot vllm-bot merged commit 0ff05e3 into vllm-project:main Nov 5, 2025
51 of 53 checks passed
@github-project-automation github-project-automation bot moved this from Todo to Done in Transformers backend Nov 5, 2025
@Isotr0py Isotr0py deleted the fix-transformers-encoder branch November 5, 2025 06:27