[Bug]: AttributeError: module 'cv2.dnn' has no attribute 'DictValue' #8650
I also ran into this issue. I fixed it by …
@patrickvonplaten Sorry for pinging you directly again, but please take a look at this dependency issue. Thanks!
Uff, yeah, this is a long-standing OpenCV issue; see opencv/opencv-python#884. Solutions:
pip uninstall opencv-python
pip uninstall opencv
pip install --upgrade mistral_common
pip uninstall opencv-python-headless
It might also be good to leave a comment on or bump opencv/opencv-python#884, so that the OpenCV maintainers see that more people are struggling with these package inconsistencies and the problem might get fixed globally faster ;-)
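One way to check whether an environment has ended up with several conflicting OpenCV wheels (this is a sketch, not from the thread; it uses only the standard-library `importlib.metadata`):

```python
# Sketch: list installed OpenCV distributions. Having more than one cv2
# provider (e.g. opencv-python plus opencv-python-headless) is a common
# cause of "module 'cv2.dnn' has no attribute 'DictValue'".
from importlib.metadata import distributions

opencv_dists = sorted(
    d.metadata["Name"]
    for d in distributions()
    if d.metadata["Name"] and d.metadata["Name"].lower().startswith("opencv")
)
print(opencv_dists)
if len(opencv_dists) > 1:
    print("Multiple OpenCV packages installed; uninstall all but one.")
```

If the list has more than one entry, uninstall every variant and reinstall a single one, as in the commands above.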
@ywang96 I'm not sure we can do much here, sadly, as this can happen whenever mistral_common is installed into an environment that already has a misconfigured opencv package. If you install vLLM into a clean env, this can't happen. What do you think? To me, advertising the solutions as written in #8650 (comment) is the best thing we can do.
@patrickvonplaten Is it possible to import opencv lazily? Many users don't use the vision part; they may just run a text LLM, and it doesn't make sense to burden them with opencv.
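The lazy-import pattern being suggested could look roughly like this (a sketch only; `transform_image_lazy` and its signature are invented for illustration and are not mistral_common's actual API):

```python
# Sketch of a lazy import: cv2 is only imported when the vision path is
# actually exercised, so text-only users never need OpenCV installed.
def transform_image_lazy(image, new_size):
    try:
        import cv2  # deferred: only runs when an image is processed
    except ImportError as exc:
        raise ImportError(
            "OpenCV is required for image processing. "
            "Install it with 'pip install mistral_common[opencv]'."
        ) from exc
    return cv2.resize(image, new_size)
```

Importing the module that defines such a function succeeds without OpenCV; the error only surfaces (with a helpful message) when an image is actually passed in.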
That makes sense. We'll do a patch release tomorrow with this PR: mistralai/mistral-common#56, so that cv2 is no longer installed automatically.
Patch release 1.4.3, which makes the cv2 install optional, is out: https://github.com/mistralai/mistral-common/releases/tag/v1.4.3
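After upgrading, a quick way to check whether the now-optional vision dependency is satisfied in a given environment (a sketch; the helper name `vision_ready` is invented):

```python
import importlib.util

def vision_ready() -> bool:
    # True if some cv2 provider (opencv-python, opencv-python-headless, ...)
    # is importable, i.e. the optional vision extra is satisfied.
    return importlib.util.find_spec("cv2") is not None

print(vision_ready())
```

If this prints False and you need image inputs, install the extra with `pip install 'mistral_common[opencv]'`.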
Hi all. Unfortunately, I think not installing OpenCV by default now breaks Pixtral image inference in vLLM. Here's an excerpt of the error log:
vllm-1 | INFO 09-30 05:18:03 model_runner.py:1025] Loading model weights took 23.6552 GB
vllm-1 | WARNING 09-30 05:18:04 model_runner.py:1196] Computed max_num_seqs (min(256, 32768 // 40960)) to be less than 1. Setting it to the minimum value of 1.
vllm-1 | Process SpawnProcess-1:
vllm-1 | Traceback (most recent call last):
vllm-1 | File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
vllm-1 | self.run()
vllm-1 | File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
vllm-1 | self._target(*self._args, **self._kwargs)
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 388, in run_mp_engine
vllm-1 | engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 138, in from_engine_args
vllm-1 | return cls(
vllm-1 | ^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 78, in __init__
vllm-1 | self.engine = LLMEngine(*args,
vllm-1 | ^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 339, in __init__
vllm-1 | self._initialize_kv_caches()
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 474, in _initialize_kv_caches
vllm-1 | self.model_executor.determine_num_available_blocks())
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/executor/gpu_executor.py", line 114, in determine_num_available_blocks
vllm-1 | return self.driver_worker.determine_num_available_blocks()
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
vllm-1 | return func(*args, **kwargs)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 223, in determine_num_available_blocks
vllm-1 | self.model_runner.profile_run()
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
vllm-1 | return func(*args, **kwargs)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1228, in profile_run
vllm-1 | model_input = self.prepare_model_input(
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1519, in prepare_model_input
vllm-1 | model_input = self._prepare_model_input_tensors(
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1141, in _prepare_model_input_tensors
vllm-1 | builder.add_seq_group(seq_group_metadata)
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 728, in add_seq_group
vllm-1 | per_seq_group_fn(inter_data, seq_group_metadata)
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 660, in _compute_multi_modal_input
vllm-1 | mm_kwargs = self.multi_modal_input_mapper(mm_data)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/registry.py", line 126, in map_input
vllm-1 | input_dict = plugin.map_input(model_config, data_value)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/base.py", line 279, in map_input
vllm-1 | return mapper(InputContext(model_config), data, **mm_processor_kwargs)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/pixtral.py", line 96, in input_mapper_for_pixtral
vllm-1 | encoding = tokenizer.instruct.mm_encoder(image)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/mistral_common/tokens/tokenizers/multimodal.py", line 142, in __call__
vllm-1 | processed_image = transform_image(image, new_image_size)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/mistral_common/tokens/tokenizers/multimodal.py", line 96, in transform_image
vllm-1 | raise ImportError("OpenCV is required for this function. Install it with 'pip install mistral_common[opencv]'")
vllm-1 | ImportError: OpenCV is required for this function. Install it with 'pip install mistral_common[opencv]'
vllm-1 | Traceback (most recent call last):
vllm-1 | File "<frozen runpy>", line 198, in _run_module_as_main
vllm-1 | File "<frozen runpy>", line 88, in _run_code
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 571, in <module>
vllm-1 | uvloop.run(run_server(args))
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
vllm-1 | return __asyncio.run(
vllm-1 | ^^^^^^^^^^^^^^
vllm-1 | File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
vllm-1 | return runner.run(main)
vllm-1 | ^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
vllm-1 | return self._loop.run_until_complete(task)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
vllm-1 | return await main
vllm-1 | ^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 538, in run_server
vllm-1 | async with build_async_engine_client(args) as engine_client:
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
vllm-1 | return await anext(self.gen)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 105, in build_async_engine_client
vllm-1 | async with build_async_engine_client_from_engine_args(
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
vllm-1 | return await anext(self.gen)
vllm-1 | ^^^^^^^^^^^^^^^^^^^^^
vllm-1 | File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 192, in build_async_engine_client_from_engine_args
vllm-1 | raise RuntimeError(
vllm-1 | RuntimeError: Engine process failed to start
vllm-1 exited with code 0

And here is the respective service definition:
vllm:
  image: vllm/vllm-openai:latest
  entrypoint: python3
  command: "-m vllm.entrypoints.openai.api_server --port=8000 --host=0.0.0.0 --model mistralai/Pixtral-12B-2409 --limit-mm-per-prompt 'image=10' --max-model-len 32768 --tokenizer-mode mistral --load-format mistral --config-format mistral"
  env_file:
    - .env
  ports:
    - "${VLLM_EXPOSED}:8000"
  environment:
    - HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}
    - LOG_LEVEL=DEBUG
  volumes:
    - ./cache:/workspace/.cache
    - ./templates:/workspace/templates
  restart: always
  shm_size: "64gb"
  healthcheck:
    test: ["CMD", "curl", "-f", "http://0.0.0.0:8000/v1/models"]
    interval: 30s
    timeout: 5s
    retries: 20
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            device_ids: ["0"]
            capabilities: [gpu]
This is now fixed by #8951, but you'll need to wait for our next release.