Environment info
transformers version: 4.8.1

Who can help
This should be a one-line fix, so I will be submitting a PR shortly.

Information
Model I am using: gpt2 (not a model-specific issue, though)

To reproduce
Steps to reproduce the behavior:

1. Create test.py (a reconstruction is given after these steps).
2. Run `time TRANSFORMERS_OFFLINE=1 python test.py` to force the cache to be hit.
3. See that the following exception is returned:

Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False.
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    _ = pipeline("text-generation", model="gpt2", model_kwargs={"cache_dir": "model_cache"})
  File "venv/lib/python3.7/site-packages/transformers/pipelines/__init__.py", line 409, in pipeline
    model, model_classes=model_classes, config=config, framework=framework, revision=revision, task=task
  File "venv/lib/python3.7/site-packages/transformers/pipelines/base.py", line 136, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "venv/lib/python3.7/site-packages/transformers/utils/dummy_tf_objects.py", line 991, in from_pretrained
    requires_backends(cls, ["tf"])
  File "venv/lib/python3.7/site-packages/transformers/file_utils.py", line 612, in requires_backends
    raise ImportError("".join([BACKENDS_MAPPING[backend][1].format(name) for backend in backends]))
ImportError: TFGPT2LMHeadModel requires the TensorFlow library but it was not found in your environment. Checkout the instructions on the installation page: https://www.tensorflow.org/install and follow the ones that match your environment.
(I edited the stack trace to remove the parts of the path outside the virtual environment.)
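For reference, a minimal test.py consistent with the traceback: the pipeline call appears verbatim as line 3 of the trace, while the import on line 1 is an assumption.

```python
from transformers import pipeline  # line numbering matches the traceback: the call below is line 3

_ = pipeline("text-generation", model="gpt2", model_kwargs={"cache_dir": "model_cache"})
```

Running it once without `TRANSFORMERS_OFFLINE=1` populates model_cache; the failure shows up on the second, offline run.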
Expected behavior
There should be no output because the model should be loaded from the cache without issues.
While that's a good temporary workaround (I'm currently using a different one), I was hoping for a longer-term solution so that pipeline() works as the docs say:
> model_kwargs – Additional dictionary of keyword arguments passed along to the model’s from_pretrained(..., **model_kwargs) function.
model_kwargs actually used to work properly, at least when the framework parameter was set, but #12025 broke it. #12449 should fix it, although it doesn't address the fact that #12025 broke this behavior without any tests failing.
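To make the documented contract concrete, here is a sketch of the equivalence the docs promise (not the actual pipeline internals; the tokenizer line is extra, since model_kwargs only affects the model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Per the docs, model_kwargs is forwarded to the model's from_pretrained, so this...
pipe = pipeline("text-generation", model="gpt2", model_kwargs={"cache_dir": "model_cache"})

# ...should locate the model files the same way as loading everything by hand:
model = AutoModelForCausalLM.from_pretrained("gpt2", cache_dir="model_cache")
tokenizer = AutoTokenizer.from_pretrained("gpt2", cache_dir="model_cache")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```

In 4.8.1 the first form drops cache_dir during framework inference, which is presumably why the offline PyTorch load misses the cache and infer_framework_load_model falls through to the TensorFlow dummy class, producing the misleading ImportError above.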