
Trying to implement "nielsr/luke-large" gives "KeyError: 'luke'" #10700

Closed
UrosOgrizovic opened this issue Mar 13, 2021 · 2 comments · Fixed by #11223

@UrosOgrizovic

Environment info

  • transformers version: 4.1.1
  • Platform: Windows-10-10.0.19041-SP0
  • Python version: 3.8.3
  • PyTorch version (GPU?): 1.7.1+cpu (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Using GPU in script?: No
  • Using distributed or parallel set-up in script?: No

Who can help

@LysandreJik, I guess, since this is an AutoTokenizer-related issue.

Information

I'm trying to use an implementation of LUKE (paper, implementation).

The problem arises when using:

  • my own modified scripts

The task I am working on is:
I don't think this is relevant.

To reproduce

Steps to reproduce the behavior:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nielsr/luke-large")
```

Running this gives the following error:

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-12-c1614eef2346> in <module>
      4 
----> 5 luke_tokenizer = AutoTokenizer.from_pretrained("nielsr/luke-large")
      6 

c:\...\venv\lib\site-packages\transformers\models\auto\tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
    343         config = kwargs.pop("config", None)
    344         if not isinstance(config, PretrainedConfig):
--> 345             config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
    346 
    347         use_fast = kwargs.pop("use_fast", True)

c:\...\venv\lib\site-packages\transformers\models\auto\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    350 
    351         if "model_type" in config_dict:
--> 352             config_class = CONFIG_MAPPING[config_dict["model_type"]]
    353             return config_class.from_dict(config_dict, **kwargs)
    354         else:
```

KeyError: 'luke'
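
From the traceback, it looks like AutoConfig resolves the model_type field of the model's config.json against CONFIG_MAPPING, which apparently has no "luke" entry in transformers 4.1.1. A minimal check, using the same module path the traceback shows:

```python
# Sketch of the failing lookup: AutoConfig maps config.json's "model_type"
# value to a config class via CONFIG_MAPPING (see configuration_auto.py above).
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

# Prints False on transformers 4.1.1, which is why the lookup
# raises KeyError: 'luke'.
print("luke" in CONFIG_MAPPING)
```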

Expected behavior

I'm expecting no error to be thrown.


NielsRogge commented Mar 13, 2021

Thanks for your interest! LUKE is not part of the master branch yet.

Actually, the current implementation of LUKE is here (at my adding_luke_v2 branch): https://github.com/NielsRogge/transformers/tree/adding_luke_v2/src/transformers/models/luke

Note that it is still a work in progress, but you can already use the base EntityAwareAttentionModel and the head models. It's mostly the tokenizer that still needs some work.
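
If you want to experiment with the branch in the meantime, here is a hedged sketch of installing it and checking that the luke module is importable (this assumes the branch installs like any transformers fork; the class names are still in flux):

```python
# Install transformers from the work-in-progress branch first (shell command,
# assuming the branch builds like any other transformers fork):
#   pip install git+https://github.com/NielsRogge/transformers.git@adding_luke_v2
import importlib.util

# Check that the branch's luke module is present before trying to use it.
spec = importlib.util.find_spec("transformers.models.luke")
print("LUKE module available:", spec is not None)
```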

cc'ing the original author for visibility: @ikuyamada

@UrosOgrizovic (Author)

Thanks, Niels!

As far as I'm concerned, this can be closed.

NielsRogge mentioned this issue Apr 13, 2021