[Core] Factor out input preprocessing to a separate class (vllm-project#7329)

Signed-off-by: Amit Garg <mitgarg17495@gmail.com>
DarkLight1337 authored and garg-amit committed Oct 28, 2024
1 parent 7c90a06 commit dd97620
Showing 1 changed file with 1 addition and 3 deletions.
4 changes: 1 addition & 3 deletions vllm/engine/async_llm_engine.py

@@ -14,9 +14,7 @@
 from vllm.engine.metrics_types import StatLoggerBase
 from vllm.executor.executor_base import ExecutorAsyncBase
 from vllm.executor.ray_utils import initialize_ray_cluster
-from vllm.inputs import (EncoderDecoderLLMInputs, LLMInputs, PromptInputs,
-                         SingletonPromptInputs)
-from vllm.inputs.parse import is_explicit_encoder_decoder_prompt
+from vllm.inputs import PromptInputs
 from vllm.logger import init_logger
 from vllm.lora.request import LoRARequest
 from vllm.model_executor.layers.sampler import SamplerOutput
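The hunk narrows the engine module's import surface to the single `PromptInputs` name; the parsing helpers it previously pulled in (such as `is_explicit_encoder_decoder_prompt`) now live with the factored-out preprocessing class. A minimal sketch of why one union type suffices at the engine boundary, assuming `PromptInputs` is, as in vLLM of this era, roughly a union of a plain prompt string and a dict-based prompt form (the `normalize_prompt` helper below is hypothetical, standing in for logic that moved into the new class):

```python
from typing import Union

# Stand-in for vllm.inputs.PromptInputs; the real type is a union of
# string and dict prompt forms. This alias is an assumption made for
# illustration only.
PromptInputs = Union[str, dict]


def normalize_prompt(prompt: PromptInputs) -> dict:
    """Hypothetical helper: coerce either accepted prompt form to a
    dict payload, so downstream code handles a single shape."""
    if isinstance(prompt, str):
        return {"prompt": prompt}
    # Already a dict-based prompt; copy defensively.
    return dict(prompt)
```

With a boundary like this, the engine only needs the `PromptInputs` type for annotations, which is what the reduced import reflects.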