Traceback (most recent call last):
File "/homec/ssli/DiAD/build_model.py", line 27, in <module>
model = create_model(config_path='/homec/ssli/DiAD/models/diad.yaml')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/homec/ssli/DiAD/sgn/model.py", line 26, in create_model
model = instantiate_from_config(config.model).cpu()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/homec/ssli/DiAD/ldm/util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/homec/ssli/DiAD/sgn/sgn.py", line 369, in __init__
super().__init__(*args, **kwargs)
File "/homec/ssli/DiAD/ldm/models/diffusion/ddpm.py", line 603, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/homec/ssli/DiAD/ldm/models/diffusion/ddpm.py", line 670, in instantiate_cond_stage
model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/homec/ssli/DiAD/ldm/util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/homec/ssli/DiAD/ldm/modules/encoders/modules.py", line 99, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/homec/ssli/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2073, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
This is probably because your server cannot connect to huggingface.co, so the tokenizer download fails. You can refer to this blog as well as this issue and try downloading the pretrained weights to a local directory first.
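A minimal sketch of that workaround, assuming you have one machine with internet access: download and save the tokenizer there, copy the directory to the offline server, then point the `version` argument (in the config or in `ldm/modules/encoders/modules.py`) at the local path instead of the hub id. The directory path is illustrative, and the required-file list is an assumption based on the usual layout of the `openai/clip-vit-large-patch14` repo; exact filenames can differ across `transformers` versions.

```python
from pathlib import Path

def fetch_tokenizer(local_dir="./clip-vit-large-patch14"):
    """Run once on a machine that can reach huggingface.co, then copy
    local_dir to the offline server. (Path and flow are illustrative.)"""
    from transformers import CLIPTokenizer
    tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    tok.save_pretrained(local_dir)

# Core files a CLIPTokenizer directory typically needs (an assumption based
# on the openai/clip-vit-large-patch14 repo; names may vary by version).
REQUIRED_FILES = {
    "vocab.json",
    "merges.txt",
    "tokenizer_config.json",
    "special_tokens_map.json",
}

def missing_tokenizer_files(local_dir):
    """Return the required tokenizer files absent from local_dir, so an
    incomplete copy is caught before CLIPTokenizer.from_pretrained fails."""
    path = Path(local_dir)
    present = {p.name for p in path.iterdir()} if path.is_dir() else set()
    return REQUIRED_FILES - present
```

After copying, a quick `missing_tokenizer_files("/path/to/clip-vit-large-patch14")` returning an empty set suggests the transfer is complete; then loading with the local path should no longer touch the network.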