@fake0fan (Contributor) commented on Sep 19, 2025

Disaggregated Encoder

A disaggregated encoder runs the vision-encoder stage of a multimodal LLM in a process separate from the prefill/decode stage. Deploying the two stages as independent vLLM instances brings three practical benefits:

  1. Independent, fine-grained scaling
  2. Lower time-to-first-token (TTFT)
  3. Cross-process reuse and caching of encoder outputs

We received a round of feedback on #21740 (many thanks to @LastZhabka for the excellent work on the initial version), and based on this feedback we have updated the overall design.

Design doc: https://docs.google.com/document/d/1aed8KtC6XkXtdoV87pWT0a8OJlZ-CpnuLLzmR8l9BAE

@ywang96 @NickLucche @DarkLight1337 @WoosukKwon


1 New Encoder Cache Connector

Disaggregated encoding in v1 is enabled by an explicit EC connector abstraction. The ECConnectorBase defines a clear split of responsibilities between the scheduler-side and worker-side, and provides the lifecycle hooks required to check, load, save, and track encoder cache (EC) across processes.

ECConnector – interface for retrieving encoder caches (EC) produced by the encoder. A minimal sketch of this interface is shown after the list below.

  • Scheduler role (lives in the scheduler process) – checks cache existence and schedules loads:

    • check_caches_exist(request) -> list[bool]: probe whether EC exists for each multimodal datum in the request.
    • update_state_after_alloc(request, index): record EC loading intent once encoder cache space is allocated for a given mm datum.
    • build_connector_meta(scheduler_output) -> ECConnectorMetadata: build per-step metadata consumed by workers; also resets connector internal state for the next step.
    • request_finished(request) -> (bool, Optional[dict]): optionally indicate async save/send in progress and attach metadata.
    • update_connector_output(connector_output): ingest worker-side completion info.
  • Worker role (lives in each worker process) – loads the embeddings into memory:

    • bind_connector_metadata(meta) / clear_connector_metadata(): attach per-step metadata from the scheduler before/after forward.
    • start_load_caches(**kwargs): load required EC into the local encoder cache before embeddings are gathered.
    • save_caches(**kwargs): persist/transfer EC produced on the encoder.
    • wait_for_save(): block until outstanding saves complete (if any).
    • get_finished(finished_req_ids) -> (set[str]|None, set[str]|None): report completion of async send/recv to help scheduler bookkeeping.
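
To make the scheduler/worker split concrete, here is a minimal sketch of the interface described above. Method names follow the list; the signatures and the ECConnectorRole enum shown here are illustrative rather than the exact definitions under vllm/distributed/ec_transfer.

```python
# Illustrative sketch of the EC connector interface described above.
# Signatures are approximate; see ECConnectorBase under vllm/distributed/ec_transfer
# for the authoritative definitions.
from abc import ABC, abstractmethod
from enum import Enum
from typing import Optional


class ECConnectorRole(Enum):  # assumed role enum for illustration
    SCHEDULER = "scheduler"
    WORKER = "worker"


class ECConnectorBase(ABC):
    def __init__(self, role: ECConnectorRole):
        self.role = role

    # ---- Scheduler-side hooks (run in the scheduler process) ----
    @abstractmethod
    def check_caches_exist(self, request) -> list[bool]:
        """Probe whether an EC entry exists for each mm datum in the request."""

    @abstractmethod
    def update_state_after_alloc(self, request, index: int) -> None:
        """Record EC loading intent once cache space is allocated for datum `index`."""

    @abstractmethod
    def build_connector_meta(self, scheduler_output) -> "ECConnectorMetadata":
        """Build per-step metadata for workers and reset internal state."""

    def request_finished(self, request) -> tuple[bool, Optional[dict]]:
        """Optionally report an async save/send still in flight, plus metadata."""
        return False, None

    def update_connector_output(self, connector_output) -> None:
        """Ingest worker-side completion info."""

    # ---- Worker-side hooks (run in each worker process) ----
    def bind_connector_metadata(self, metadata) -> None:
        """Attach the per-step metadata emitted by the scheduler."""

    def clear_connector_metadata(self) -> None:
        """Drop the per-step metadata after the forward pass."""

    @abstractmethod
    def start_load_caches(self, **kwargs) -> None:
        """Load required EC entries into the local encoder cache before gathering."""

    @abstractmethod
    def save_caches(self, **kwargs) -> None:
        """Persist/transfer EC entries produced on the encoder instance."""

    def wait_for_save(self) -> None:
        """Block until outstanding saves complete (no-op if saving is synchronous)."""

    def get_finished(self, finished_req_ids) -> tuple[Optional[set], Optional[set]]:
        """Report finished async sends/recvs for scheduler bookkeeping."""
        return None, None
```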

2 Change highlights (Scheduler and GPUModelRunner)

  • Scheduler (vllm/v1/core/sched/scheduler.py):

    • Initializes an EC connector when ec_transfer_config is present via ECConnectorFactory.create_connector(..., role=SCHEDULER).
    • Tracks an encoder-token budget and skips EC work when cache already exists (local or remote as reported by the connector).
    • After allocation, calls ec_connector.update_state_after_alloc(...) so the next build_connector_meta(...) includes the mm hashes to load.
    • Emits ec_connector_metadata in SchedulerOutput for the workers, and ingests worker completion via ec_connector.update_connector_output(...).
  • Worker (vllm/v1/worker/gpu_model_runner.py):

    • Encoder-side (producer):
      • Within execute_model, when get_ec_transfer().is_producer is True, the runner enters the maybe_get_ec_connector_output(..., encoder_cache=self.encoder_cache) context before running the multimodal encoder.
      • The encode pass computes embeddings and writes them into encoder_cache[mm_hash].
      • Immediately after finishing the encode for a given mm_hash, the runner calls maybe_save_ec_to_connector(self.encoder_cache, mm_hash) which invokes ECConnectorBase.save_caches(encoder_cache=..., mm_hash=...).
      • On context exit, wait_for_save() is invoked (if enabled) to ensure the persisted EC is durable/visible to consumers; get_finished(...) is queried to surface completion status back to the scheduler.
    • PD-side (consumer):
      • For requests scheduled on PD, the scheduler supplies ec_connector_metadata that lists the mm_hash items needing loads.
      • The runner binds this metadata and calls start_load_caches(encoder_cache=self.encoder_cache) prior to _gather_mm_embeddings, allowing the connector to populate encoder_cache[mm_hash] from the external store.
      • _gather_mm_embeddings then reads the loaded tensors from encoder_cache and returns them as multimodal embeddings for the subsequent decoder input embedding construction.
      • After the forward step, the runner clears metadata; any connector-furnished completion info is recorded into ECConnectorOutput for the scheduler to free resources when safe.
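
Put together, the producer and consumer paths above roughly follow the pseudo-flow below. This is a simplified sketch: helper names marked "placeholder" (scheduled_mm_inputs, run_mm_encoder, run_language_model, get_ec_connector) do not exist under these names, and the real execute_model also handles batching, the encoder-token budget, and error paths.

```python
# Simplified sketch of the producer/consumer paths in the model runner.
# Helpers marked "placeholder" are illustrative only.

def encoder_step(runner, scheduler_output):
    """Encoder (producer) instance: compute embeddings and push them out."""
    with runner.maybe_get_ec_connector_output(
            scheduler_output, encoder_cache=runner.encoder_cache):
        for mm_hash, mm_input in scheduled_mm_inputs(scheduler_output):  # placeholder
            runner.encoder_cache[mm_hash] = run_mm_encoder(runner, mm_input)  # placeholder
            # Hand the freshly computed embeddings to the connector right away.
            runner.maybe_save_ec_to_connector(runner.encoder_cache, mm_hash)
    # On context exit: wait_for_save() runs (if enabled) and get_finished(...)
    # feeds completion status back to the scheduler via ECConnectorOutput.


def pd_step(runner, scheduler_output):
    """PD (consumer) instance: pull embeddings before gathering mm inputs."""
    connector = get_ec_connector()  # placeholder accessor
    connector.bind_connector_metadata(scheduler_output.ec_connector_metadata)
    # Populate encoder_cache[mm_hash] from the external store for scheduled loads.
    connector.start_load_caches(encoder_cache=runner.encoder_cache)
    mm_embeds = runner._gather_mm_embeddings(scheduler_output)
    hidden_states = run_language_model(runner, scheduler_output, mm_embeds)  # placeholder
    connector.clear_connector_metadata()
    return hidden_states
```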

3 Usage Example

The current reference implementation is ECSharedStorageConnector.
The ready-to-run scripts below show the workflows:

  • 1 Encoder + 1 PD:

    • examples/online_serving/disaggregated_encoder/shared_storage_connector/disagg_1e1pd_example.sh
    • (legacy/simple demo) examples/online_serving/disaggregated_encoder/shared_storage_connector/disagg_encoder_example.sh
  • 1 Encoder + 1 Prefill + 1 Decode:

    • examples/online_serving/disaggregated_encoder/shared_storage_connector/disagg_1e1p1d_example.sh

3.1 Minimal ECTransfer CLI config

The Encoder and PD share/transfer encoder cache (EC) via EC transfer. The current reference implementation is ECSharedStorageConnector (dump/load via a shared directory).

  • Encoder (producer) example:

    ```bash
    vllm serve <model> \
      --ec-transfer-config '{
        "ec_connector": "ECSharedStorageConnector",
        "ec_role": "ec_producer",
        "ec_connector_extra_config": {"shared_storage_path": "/tmp"}
      }'
    ```

  • PD (consumer) example:

    ```bash
    vllm serve <model> \
      --ec-transfer-config '{
        "ec_connector": "ECSharedStorageConnector",
        "ec_role": "ec_consumer",
        "ec_connector_extra_config": {"shared_storage_path": "/tmp"}
      }'
    ```

Notes:

  • If shared_storage_path is not explicitly set, /tmp is used by default.
  • This connector persists the EC for each mm_hash to <shared_storage_path>/<mm_hash>/encoder_cache.safetensors, and the PD side loads it to GPU on demand.
  • Connector selection and loading are handled by ECConnectorFactory (with ECSharedStorageConnector registered by default).
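
As an illustration of what this layout implies, here is a hedged sketch of the dump/load pattern. The tensor key name and device handling are assumptions for the sketch; the actual ECSharedStorageConnector may differ.

```python
# Hedged sketch of the shared-storage dump/load pattern described above.
# The tensor key "encoder_cache" is an assumption for this illustration.
import os
from safetensors.torch import load_file, save_file

SHARED_STORAGE_PATH = "/tmp"  # matches shared_storage_path in the config


def save_ec(mm_hash: str, embedding) -> None:
    """Persist one EC entry to <shared_storage_path>/<mm_hash>/encoder_cache.safetensors."""
    out_dir = os.path.join(SHARED_STORAGE_PATH, mm_hash)
    os.makedirs(out_dir, exist_ok=True)
    save_file({"encoder_cache": embedding.detach().cpu()},
              os.path.join(out_dir, "encoder_cache.safetensors"))


def load_ec(mm_hash: str, device: str = "cuda"):
    """Load one EC entry onto the target device, or return None on a cache miss."""
    path = os.path.join(SHARED_STORAGE_PATH, mm_hash, "encoder_cache.safetensors")
    if not os.path.exists(path):
        return None  # cache miss: the consumer falls back to local encoding
    return load_file(path)["encoder_cache"].to(device)
```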

4 Development

Here is a figure illustrating the disaggregated encoder flow:

[Figure: Disaggregated Encoder Flow]

Disaggregated encoding is implemented by running two parts:

  • Encoder instance – a vLLM instance that performs vision encoding.
  • Prefill/Decode (PD) instance(s) – runs the language-model prefill and decode stages.
    • PD can be a single instance (E->PD, see disagg_1e1pd_example.sh / disagg_encoder_example.sh), or further disaggregated into Prefill and Decode instances (E->P->D, see disagg_1e1p1d_example.sh).

A connector transfers encoder-cache (EC) embeddings from the encoder instance to the PD instance.
All related code is under vllm/distributed/ec_transfer.

For the PD disaggregation part, the Prefill instance receives the encoder cache exactly as in the disaggregated encoder flow above. The Prefill instance executes one step (prefill -> first token) and then transfers the KV cache to the Decode instance for the remaining execution; the KV transfer happens entirely after the Prefill instance's execution. docs/features/disagg_prefill.md gives a brief overview of disaggregated prefill (v0). The example setup uses the NixlConnector from vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py in vLLM V1 and follows tests/v1/kv_connector/nixl_integration/toy_proxy_server.py to facilitate the KV transfer between P and D.
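
For illustration only, the P->D handoff pattern described above can be sketched as a minimal asynchronous proxy. This is a hypothetical sketch, not the actual toy_proxy_server; the URLs are placeholders and the kv_transfer_params field is connector-specific.

```python
# Hypothetical illustration of the P->D handoff pattern; see
# tests/v1/kv_connector/nixl_integration/toy_proxy_server.py for the actual
# proxy used in the example setup. URLs and fields here are placeholders.
import httpx

PREFILL_URL = "http://localhost:8100/v1/completions"  # placeholder
DECODE_URL = "http://localhost:8200/v1/completions"   # placeholder


async def disagg_generate(payload: dict) -> dict:
    async with httpx.AsyncClient(timeout=None) as client:
        # Step 1: the prefill instance produces the first token (and the KV cache).
        prefill_payload = dict(payload, max_tokens=1)
        prefill_resp = (await client.post(PREFILL_URL, json=prefill_payload)).json()

        # Step 2: the decode instance continues generation; connector-specific
        # kv_transfer_params from the prefill response are forwarded so the
        # decode side can pull the KV cache.
        decode_payload = dict(payload)
        if "kv_transfer_params" in prefill_resp:
            decode_payload["kv_transfer_params"] = prefill_resp["kv_transfer_params"]
        return (await client.post(DECODE_URL, json=decode_payload)).json()
```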

5 Next-Step Plan

5.1 Broaden EC Connector Types

5.2 Multi-Hardware Platform Support

5.3 Comprehensive Performance Evaluation

@NickLucche (Collaborator) left a comment:

Looking good, thanks for the mega job!

@DarkLight1337 added the ready label on Oct 23, 2025
@DarkLight1337 (Member) commented:

Seems that this change breaks the existing code for Whisper, which is an encoder-decoder model, even when E/P/D is not used

@fake0fan (Contributor, Author) commented:

> Seems that this change breaks the existing code for Whisper, which is an encoder-decoder model, even when E/P/D is not used

Working on it now.

mergify bot commented Oct 24, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @fake0fan.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label on Oct 24, 2025
mergify bot removed the needs-rebase label on Oct 24, 2025
@knlnguyen1802 (Contributor) commented:

Failing tests are unrelated.

@DarkLight1337 enabled auto-merge (squash) on October 27, 2025

mergify bot commented Nov 5, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @fake0fan.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label on Nov 5, 2025