Switch export_model to use AutoModel and AutoTokenizer #1260
Conversation
If I'm reading this right, this PR needs to be updated to pass the new tests, right?
Codecov Report
```diff
@@            Coverage Diff            @@
##           master    #1260    +/-   ##
========================================
  Coverage   55.51%   55.52%
========================================
  Files         149      149
  Lines       10903    10878      -25
========================================
- Hits         6053     6040      -13
+ Misses       4850     4838      -12
```
Continue to review full report at Codecov.
Nothing jumps out at me; it also seems relatively low risk to merge now, I think.
One thing you might want to check is whether updating to transformers 4+ will be complicated by any of this.
@zphang A few notes on how this PR will be affected by Transformers v4:
Sounds good, moving remaining discussion to other PR.
This PR refactors `export_model` to use `AutoModel` and `AutoTokenizer`. It must be approved after the tests for `export_model` have been merged (#1259). This PR is part of the effort to reduce the `if-else` statements regarding model types in our codebase. `model_tokenizer_path` has been removed and replaced by `hf_pretrained_model_name_or_path`.
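To illustrate the refactor described above, here is a minimal sketch of the AutoClass-based export pattern. The `export_model` signature and output layout shown here are assumptions for illustration, not the actual jiant API; only `AutoModel`/`AutoTokenizer` and their `from_pretrained`/`save_pretrained` methods are the real transformers interface.

```python
import os

from transformers import AutoModel, AutoTokenizer


def export_model(hf_pretrained_model_name_or_path: str, output_base_path: str):
    """Export a pretrained model and its tokenizer to disk.

    AutoModel/AutoTokenizer resolve the correct architecture from the
    checkpoint's config, replacing per-model-type if-else branches.
    (Hypothetical signature, mirroring the renamed
    ``hf_pretrained_model_name_or_path`` argument.)
    """
    model = AutoModel.from_pretrained(hf_pretrained_model_name_or_path)
    tokenizer = AutoTokenizer.from_pretrained(hf_pretrained_model_name_or_path)
    os.makedirs(output_base_path, exist_ok=True)
    model.save_pretrained(os.path.join(output_base_path, "model"))
    tokenizer.save_pretrained(os.path.join(output_base_path, "tokenizer"))
```

Because the AutoClasses dispatch on the checkpoint's own config, the same call works for BERT, RoBERTa, XLM-R, and any other supported architecture without type-specific branching.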