
demo doesn't work : Error while finding module specification for 'llava.serve.cli' (ModuleNotFoundError: No module named 'llava') #9

Open
chuangzhidan opened this issue Mar 27, 2024 · 2 comments


chuangzhidan commented Mar 27, 2024

python -m llava.serve.cli \
    --model-path ./checkpoints/llava-hr-7b-sft-1024 \
    --image-file "./assets/example.jpg"

I downloaded the model from Hugging Face but couldn't find any model card info, sadly. So I ran the command above with the paths changed to point to my own model files on the server, and it doesn't work. How do I fix this?

bug:
def convnextv2_small(pretrained=False, **kwargs) -> ConvNeXt:
/root/autodl-tmp/hongan/NewFolder/llava-hr/LLaVA-HR-main/llava_hr/model/multimodal_encoder/convnext.py:1072: UserWarning: Overwriting convnextv2_base in registry with llava_hr.model.multimodal_encoder.convnext.convnextv2_base. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
def convnextv2_base(pretrained=False, **kwargs) -> ConvNeXt:
/root/autodl-tmp/hongan/NewFolder/llava-hr/LLaVA-HR-main/llava_hr/model/multimodal_encoder/convnext.py:1079: UserWarning: Overwriting convnextv2_large in registry with llava_hr.model.multimodal_encoder.convnext.convnextv2_large. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
def convnextv2_large(pretrained=False, **kwargs) -> ConvNeXt:
/root/autodl-tmp/hongan/NewFolder/llava-hr/LLaVA-HR-main/llava_hr/model/multimodal_encoder/convnext.py:1086: UserWarning: Overwriting convnextv2_huge in registry with llava_hr.model.multimodal_encoder.convnext.convnextv2_huge. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
def convnextv2_huge(pretrained=False, **kwargs) -> ConvNeXt:
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.8/runpy.py", line 185, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/root/miniconda3/lib/python3.8/runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "/root/autodl-tmp/hongan/NewFolder/llava-hr/LLaVA-HR-main/llava_hr/__init__.py", line 1, in <module>
from .model import LlavaLlamaForCausalLM
File "/root/autodl-tmp/hongan/NewFolder/llava-hr/LLaVA-HR-main/llava_hr/model/__init__.py", line 1, in <module>
from .language_model.llava_llama import LlavaLlamaForCausalLM, LlavaConfig
File "/root/autodl-tmp/hongan/NewFolder/llava-hr/LLaVA-HR-main/llava_hr/model/language_model/llava_llama.py", line 139, in <module>
AutoConfig.register("llava_hr", LlavaConfig)
File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 1123, in register
raise ValueError(
ValueError: The config you are passing has a model_type attribute that is not consistent with the model type you passed (config has llava and you passed llava_hr. Fix one of those so they match!
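The mismatch above says the checkpoint's config.json declares `model_type: "llava"`, while the loading code registers the config class under `"llava_hr"`. A minimal sketch of one possible workaround, assuming the repo really does expect `model_type == "llava_hr"` (the checkpoint directory here is simulated with a temp dir; in practice point `config_path` at the downloaded checkpoint folder):

```python
import json
import tempfile
from pathlib import Path

# Simulated checkpoint directory -- in practice this would be the
# downloaded folder, e.g. ./checkpoints/llava-hr-7b-sft-1024
ckpt_dir = Path(tempfile.mkdtemp())
config_path = ckpt_dir / "config.json"
config_path.write_text(json.dumps({"model_type": "llava"}))

config = json.loads(config_path.read_text())
# AutoConfig.register("llava_hr", LlavaConfig) requires the config's
# model_type to equal the registered name, so make the two match.
if config.get("model_type") != "llava_hr":
    config["model_type"] = "llava_hr"
    config_path.write_text(json.dumps(config, indent=2))

print(json.loads(config_path.read_text())["model_type"])
```

Editing a checkpoint's config.json by hand is a workaround, not an official fix; whether the maintainers intend `llava` or `llava_hr` as the canonical name is exactly what a model card would clarify.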

Could you add model card info to your Hugging Face repo?

@chuangzhidan (Author)

(llava-hr) root@auto:~/autodl-tmp/hongan/NewFolder/LLaVA-HR# python -m llava.serve.cli --model-path /root/autodl-tmp/hongan/NewFolder/llava-hr --image-file "./assets/example.jpg"
/root/miniconda3/envs/llava-hr/bin/python: Error while finding module specification for 'llava.serve.cli' (ModuleNotFoundError: No module named 'llava')
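For what it's worth, `ModuleNotFoundError: No module named 'llava'` usually means the package simply isn't on `sys.path`, e.g. the command was run from outside the repo checkout and the package was never installed with `pip install -e .`. A hedged diagnostic sketch; the checkout path below is the one from the log and is an assumption:

```python
import importlib.util
import sys

# Check whether Python can locate the 'llava' package at all.
spec = importlib.util.find_spec("llava")
if spec is None:
    # Assumed checkout location (taken from the log); adjust to where
    # the LLaVA-HR repo actually lives, or run `pip install -e .` there.
    sys.path.insert(0, "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR")
    spec = importlib.util.find_spec("llava")

print("llava importable:", spec is not None)
```

Running `python -m llava.serve.cli` from inside the repo root has the same effect, since the current directory is then on the import path.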

chuangzhidan changed the title to demo doesn't work : Error while finding module specification for 'llava.serve.cli' (ModuleNotFoundError: No module named 'llava') Apr 8, 2024

chuangzhidan commented Apr 9, 2024

def convnextv2_large(pretrained=False, **kwargs) -> ConvNeXt:
/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/multimodal_encoder/convnext.py:1091: UserWarning: Overwriting convnextv2_huge in registry with llava.model.multimodal_encoder.convnext.convnextv2_huge. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
def convnextv2_huge(pretrained=False, **kwargs) -> ConvNeXt:
Traceback (most recent call last):
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
sock = connection.create_connection(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
TimeoutError: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/connection.py", line 616, in connect
self.sock = sock = self._new_conn()
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/connection.py", line 207, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7fe1a3f32e90>, 'Connection to huggingface.co timed out. (connect timeout=10)')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k/resolve/main/pytorch_model.bin (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fe1a3f32e90>, 'Connection to huggingface.co timed out. (connect timeout=10)'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1261, in hf_hub_download
metadata = get_hf_file_metadata(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1674, in get_hf_file_metadata
r = _request_wrapper(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 369, in _request_wrapper
response = _request_wrapper(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 392, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 68, in send
return super().send(request, *args, **kwargs)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/requests/adapters.py", line 507, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k/resolve/main/pytorch_model.bin (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fe1a3f32e90>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: e15c3a0d-8dd0-4c03-b966-171d3164a628)')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/root/miniconda3/envs/llava-hr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/root/miniconda3/envs/llava-hr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/serve/cli.py", line 125, in <module>
main(args)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/serve/cli.py", line 32, in main
tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, args.load_8bit, args.load_4bit, device=args.device)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/builder.py", line 122, in load_pretrained_model
model = LlavaLlamaForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2700, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/language_model/llava_llama.py", line 46, in __init__
self.model = LlavaLlamaModel(config)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/language_model/llava_llama.py", line 38, in __init__
super(LlavaLlamaModel, self).__init__(config)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/llava_arch.py", line 39, in __init__
self.vision_tower = build_vision_tower(config, delay_load=delay_load)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/multimodal_encoder/builder.py", line 13, in build_vision_tower
return MultiPathCLIPVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/multimodal_encoder/multipath_encoder_wapper.py", line 116, in __init__
self.slow_vision_tower = ConvNextVisionTower(args.vision_tower_slow, args,
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/multimodal_encoder/convnext_encoder.py", line 41, in __init__
self.load_model()
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/multimodal_encoder/convnext_encoder.py", line 46, in load_model
self.vision_tower = convnext_xxlarge(self.vision_tower_name)
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/multimodal_encoder/convnext.py", line 1017, in convnext_xxlarge
model = _create_convnext('convnext_xxlarge', pretrained=pretrained, **dict(model_args, **kwargs))
File "/root/autodl-tmp/hongan/NewFolder/LLaVA-HR/llava/model/multimodal_encoder/convnext.py", line 491, in _create_convnext
model = build_model_with_cfg(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/timm/models/_builder.py", line 397, in build_model_with_cfg
load_pretrained(
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/timm/models/_builder.py", line 190, in load_pretrained
state_dict = load_state_dict_from_hf(pretrained_loc)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/timm/models/_hub.py", line 189, in load_state_dict_from_hf
cached_file = hf_hub_download(hf_model_id, filename=filename, revision=hf_revision)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
File "/root/miniconda3/envs/llava-hr/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1406, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

Having the model files already downloaded locally isn't supported (it still tries to download from the Hub).
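The timeout shows timm trying to fetch `timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k` from huggingface.co at load time. One possible workaround, assuming those ConvNeXt weights are already in the local Hugging Face cache: force offline resolution before timm/transformers are imported, so `hf_hub_download` reads from the cache instead of the network. `HF_HUB_OFFLINE`, `TRANSFORMERS_OFFLINE`, and `HF_ENDPOINT` are real hub environment variables; whether this fully avoids the fetch in this repo is an assumption.

```python
import os

# Must be set before transformers / timm / huggingface_hub are imported,
# otherwise the hub client has already read its configuration.
os.environ["HF_HUB_OFFLINE"] = "1"        # resolve only from the local cache
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # same behavior for transformers

# Alternative (assumption: a reachable mirror exists on your network):
# point the hub client at a mirror endpoint instead of huggingface.co.
# os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
```

Equivalently, export these variables in the shell before running `python -m llava.serve.cli ...`.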
