
I get this error when running `python build_model.py`; it seems to be caused by a failure to download openai/clip-vit-large-patch14. Can you help me resolve it? #42

Open
rock-lss opened this issue May 16, 2024 · 1 comment


@rock-lss

```
Traceback (most recent call last):
  File "/homec/ssli/DiAD/build_model.py", line 27, in <module>
    model = create_model(config_path='/homec/ssli/DiAD/models/diad.yaml')
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/homec/ssli/DiAD/sgn/model.py", line 26, in create_model
    model = instantiate_from_config(config.model).cpu()
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/homec/ssli/DiAD/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/homec/ssli/DiAD/sgn/sgn.py", line 369, in __init__
    super().__init__(*args, **kwargs)
  File "/homec/ssli/DiAD/ldm/models/diffusion/ddpm.py", line 603, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/homec/ssli/DiAD/ldm/models/diffusion/ddpm.py", line 670, in instantiate_cond_stage
    model = instantiate_from_config(config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/homec/ssli/DiAD/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/homec/ssli/DiAD/ldm/modules/encoders/modules.py", line 99, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/homec/ssli/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2073, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
```

@lewandofskee
Owner

I think your server may not be able to connect to huggingface.co, which is causing your deployment to fail. You can refer to this blog as well as this issue and try downloading the pretrained weights to a local directory first.
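The idea above can be sketched as follows (a minimal sketch, not part of DiAD: the local path and the `resolve_pretrained_path` helper are my own names; download the files once from a machine with Hub access, e.g. with `huggingface-cli download openai/clip-vit-large-patch14 --local-dir <dir>`, then load from that directory):

```python
import os

# Hypothetical local directory holding a pre-downloaded copy of the tokenizer
# files (tokenizer.json, vocab.json, merges.txt, ...); any path of your choice.
LOCAL_DIR = "./pretrained/clip-vit-large-patch14"

def resolve_pretrained_path(version: str, local_dir: str) -> str:
    """Prefer a pre-downloaded local copy when it exists, else fall back to the Hub id."""
    return local_dir if os.path.isdir(local_dir) else version

# The failing call in ldm/modules/encoders/modules.py is roughly:
#   self.tokenizer = CLIPTokenizer.from_pretrained(version)
# Passing a local directory instead of the Hub id avoids the network lookup:
#   self.tokenizer = CLIPTokenizer.from_pretrained(
#       resolve_pretrained_path(version, LOCAL_DIR))
```

`from_pretrained` accepts either a Hub model id or a path to a directory containing the tokenizer files, so once the files are on disk no connection to huggingface.co is needed.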
