🐛 Bug

When calling the load_from_checkpoint function to load a model from a checkpoint, the hparams.yaml file located in the parent folder is not taken into account. For example, the pretrained_model setting in hparams.yaml has no effect.

To Reproduce
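Contents of hparams.yaml (a minimal sketch; the pretrained_model line is the one that matters here, while class_identifier is assumed only because the loader reads it):

# hparams.yaml (sketch; the class_identifier value is an assumption)
class_identifier: regression_metric
pretrained_model: models/xlm-roberta-large

Then load the checkpoint (the path is illustrative):

from comet import load_from_checkpoint

model = load_from_checkpoint("models/my-checkpoint/checkpoints/model.ckpt")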
The RoBERTa model is located in models/xlm-roberta-large, as indicated by pretrained_model, but an error is thrown because the loader still expects the RoBERTa model to be in xlm-roberta-large in the root directory. This gives the following error message:
OSError: Can't load tokenizer for 'xlm-roberta-large'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'xlm-roberta-large' is the correct path to a directory containing all relevant files for a XLMRobertaTokenizerFast tokenizer.
Expected behaviour
I would expect the pretrained_model parameter to be used to determine the location of the model. This could be achieved by passing hparams_file as an argument to the model_class.load_from_checkpoint call in models/__init__.py. Without it, Lightning falls back to the hyperparameters stored inside the checkpoint itself, so edits to hparams.yaml have no effect:
from pathlib import Path
import yaml

# CometModel and str2model are defined alongside this function in comet/models/__init__.py
def load_from_checkpoint(checkpoint_path: str) -> CometModel:
    """Loads models from a checkpoint path.
    Args:
        checkpoint_path (str): Path to a model checkpoint.
    Return:
        COMET model.
    """
    checkpoint_path = Path(checkpoint_path)
    if not checkpoint_path.is_file():
        raise Exception(f"Invalid checkpoint path: {checkpoint_path}")
    parent_folder = checkpoint_path.parents[1]  # .parent.parent
    hparams_file = parent_folder / "hparams.yaml"
    if hparams_file.is_file():
        with open(hparams_file) as yaml_file:
            hparams = yaml.load(yaml_file.read(), Loader=yaml.FullLoader)
        model_class = str2model[hparams["class_identifier"]]
        # proposed fix: also pass hparams_file so pretrained_model is respected
        model = model_class.load_from_checkpoint(
            checkpoint_path, load_pretrained_weights=False, hparams_file=hparams_file
        )
        return model
    else:
        raise Exception(f"hparams.yaml file is missing from {parent_folder}!")
Environment
OS: Linux
Packaging: pip
Version: 2.0.1