
Missing options/arguments in run_squad.py for BERT Large #51

Closed
avisil opened this issue Nov 21, 2018 · 1 comment


avisil commented Nov 21, 2018

Thanks for the great code. However, run_squad.py for BERT Large does not seem to have the vocab_file and bert_config_file (or other) options/arguments. Did you push the latest version?
Also, it is looking for a PyTorch model file (a .bin file). Does that need to be there?

I also had to add this line to the file to make BERT base run on SQuAD 1.1:
parser.add_argument('--do_lower_case', action="store_true", default=True, help="Lowercase the input")
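
(As an aside on that line: with argparse, action="store_true" already defaults to False, so combining it with default=True makes the flag effectively always on. Below is a minimal sketch of how the options discussed here could be declared; the names and defaults are assumptions based on this thread, not the released script.)

```python
import argparse

# Sketch only: illustrative declarations for the options discussed above.
# The exact names/defaults in the released run_squad.py may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--vocab_file", type=str, required=True,
                    help="Path to the BERT vocabulary file (vocab.txt)")
parser.add_argument("--bert_config_file", type=str, required=True,
                    help="Path to the BERT model config (bert_config.json)")
parser.add_argument("--do_lower_case", action="store_true",
                    help="Lowercase the input (use with uncased models)")
args = parser.parse_args()
```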

@thomwolf
Member

Yes, the readme examples were for an older version. I have updated them with the simplified parameters used in the current release. Thanks.
