
ActorDiedError Caused by ValueError in VLLM Initialization #3

@curryqka

Description:
I encountered an ActorDiedError when running my Ray Data-based application. The actor dies while initializing the LLM class from the vllm library, specifically while the model configuration is being created. The root cause appears to be a ValueError raised for the limit_mm_per_prompt parameter, which vLLM only accepts for multimodal models.
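
For context, here is a minimal sketch of the kind of setup that triggers this error. The model path, prompts, and engine arguments are assumptions, since the original llava1.5.py is not included in this report:

# Hypothetical reproduction sketch; model path and arguments are assumptions.
import ray
from vllm import LLM, SamplingParams


class LLMPredictor:
    def __init__(self):
        # If vLLM does not recognize the checkpoint as multimodal,
        # passing limit_mm_per_prompt raises the ValueError shown below.
        self.llm = LLM(
            model="llava-hf/llava-1.5-7b-hf",      # assumed model path
            limit_mm_per_prompt={"image": 1},
        )

    def __call__(self, batch):
        params = SamplingParams(temperature=0.0, max_tokens=128)
        outputs = self.llm.generate(batch["prompt"].tolist(), params)
        batch["response"] = [o.outputs[0].text for o in outputs]
        return batch


ds = ray.data.from_items([{"prompt": "Describe the scene."}])
ds = ds.map_batches(LLMPredictor, concurrency=1, num_gpus=1, batch_size=4)
ds.show()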

Error Stack Trace:

Exception has occurred: ActorDiedError
The actor died because of an error raised in its creation task, ray::_MapWorker.__init__() (pid=2454810, ip=10.96.192.35, actor_id=792fbc7178acf9ef3f06667101000000, repr=MapWorker(MapBatches(LLMPredictor)))
  File "/root/miniconda3/lib/python3.10/site-packages/ray/data/_internal/execution/operators/actor_pool_map_operator.py", line 403, in __init__
    self._map_transformer.init()
  File "/root/miniconda3/lib/python3.10/site-packages/ray/data/_internal/execution/operators/map_transformer.py", line 208, in init
    self._init_fn()
  File "/root/miniconda3/lib/python3.10/site-packages/ray/data/_internal/planner/plan_udf_map_op.py", line 268, in init_fn
    udf_map_fn=op_fn(
  File "/root/miniconda3/lib/python3.10/site-packages/ray/data/_internal/execution/util.py", line 70, in __init__
    super().__init__(*args, **kwargs)
  File "/high_perf_store/mlinfra-vepfs/wangjinghui/drive-bench/inference/llava1.5.py", line 48, in __init__
    self.llm = LLM(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 178, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 547, in from_engine_args
    engine_config = engine_args.create_engine_config()
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 844, in create_engine_config
    model_config = self.create_model_config()
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 782, in create_model_config
    return ModelConfig(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/config.py", line 235, in __init__
    self.multimodal_config = self._init_multimodal_config(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/config.py", line 256, in _init_multimodal_config
    raise ValueError(
ValueError: limit_mm_per_prompt is only supported for multimodal models.
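
A possible workaround sketch (not a confirmed fix): only pass limit_mm_per_prompt when the target checkpoint is actually multimodal, and drop the argument for text-only models. The vision_config check below is a heuristic and MODEL_PATH is a placeholder:

# Workaround sketch; MODEL_PATH and the multimodal check are assumptions.
from transformers import AutoConfig
from vllm import LLM

MODEL_PATH = "llava-hf/llava-1.5-7b-hf"  # placeholder model path

# Heuristic: multimodal checkpoints such as LLaVA expose a vision_config.
hf_config = AutoConfig.from_pretrained(MODEL_PATH, trust_remote_code=True)
is_multimodal = hasattr(hf_config, "vision_config")

engine_kwargs = {"model": MODEL_PATH}
if is_multimodal:
    # Only multimodal models accept per-prompt multimodal limits.
    engine_kwargs["limit_mm_per_prompt"] = {"image": 1}

llm = LLM(**engine_kwargs)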
