forked from huggingface/transformers
Update 4.7.0 -> 4.18.0 #37
Merged
Conversation
* Make sure custom configs work with Transformers * Apply code review suggestions
* Add Wav2Vec2 Adapter Weights to Flax * Suggested changes
* typical decoding
* changing arg name
* add test config params
* forgotten arg rename
* fix edge case where scores are same
* test for typical logits warper
* code quality fixes
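The typical-decoding commit above wires a new logits warper into generation; in Transformers of this vintage it is exposed through the `typical_p` sampling argument of `generate()`. A minimal sketch, assuming a stock `gpt2` checkpoint that is not part of this PR:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
# do_sample=True is required so the typical-decoding warper filters the
# sampling distribution; with greedy search the warper is never applied.
outputs = model.generate(**inputs, do_sample=True, typical_p=0.9, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```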
…5416)
* added classes to get started with constrained beam search
* in progress, think i can directly force tokens now but not yet with the round robin
* think now i have total control, now need to code the bank selection
* technically works as desired, need to optimize and fix design choices leading to undesirable outputs
* complete PR #1 without disjunctive decoding
* removed incorrect tests
* Delete k.txt
* Delete test.py
* Delete test.sh
* revert changes to test scripts
* genutils
* full implementation with testing, no disjunctive yet
* shifted docs
* passing all tests realistically ran locally
* removing accidentally included print statements
* fixed source of error in initial PR test
* fixing the get_device() vs device trap
* fixed documentation docstrings about constrained_beam_search
* fixed tests failing for Speech2TextModel's floating point inputs
* fix cuda long tensor
* added examples and testing for them and found & fixed a bug in beam_search and constrained_beam_search
* deleted accidentally added test halting code with assert False
* code reformat
* Update tests/test_generation_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/test_generation_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/test_generation_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/test_generation_utils.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/test_generation_utils.py
* fixing based on comments on PR
* took out the testing code that should work but fails without the beam search modification; style changes
* fixing comment issues
* docstrings for ConstraintListState
* typo in PhrasalConstraint docstring
* docstring improvements

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
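For context, a hedged sketch of how the constrained beam search described above is driven from user code: a `PhrasalConstraint` built from token ids is passed to `generate()` together with `num_beams > 1`. The `t5-small` checkpoint and the forced phrase are illustrative assumptions, not taken from this PR:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Force the word "Lernen" to appear somewhere in the generated output.
force_phrase_ids = tokenizer("Lernen", add_special_tokens=False).input_ids
constraints = [PhrasalConstraint(force_phrase_ids)]

inputs = tokenizer("translate English to German: I love machine learning.", return_tensors="pt")
# Passing `constraints` with num_beams > 1 routes generate() through
# constrained_beam_search instead of ordinary beam search.
outputs = model.generate(**inputs, constraints=constraints, num_beams=5, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```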
* Expose hub test problem * Fix tests
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* [trainer docs] document how to select specific gpus (see the GPU-selection sketch below)
* expand
* add urls
* add accelerate launcher
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MBP.localdomain>
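One common way to pin a `Trainer` run to specific GPUs, in the spirit of the trainer-docs commit above, is to restrict CUDA's visible devices before CUDA is initialized. This is a generic sketch, not quoted from the linked docs change:

```python
import os

# Expose only GPUs 0 and 2 to this process; must happen before CUDA is initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"

import torch

print(torch.cuda.device_count())  # reports 2 on a machine with at least three GPUs
```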
* Expand tutorial for custom models
* Style
* Apply suggestions from code review

Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
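The custom-model tutorial that commit expands revolves around subclassing `PretrainedConfig` and registering the result with the Auto classes. A minimal sketch, with a hypothetical `ToyConfig` / `"toy"` model type that does not exist in Transformers:

```python
from transformers import AutoConfig, PretrainedConfig


class ToyConfig(PretrainedConfig):
    model_type = "toy"  # hypothetical model type, purely illustrative

    def __init__(self, hidden_size=64, num_layers=2, **kwargs):
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        super().__init__(**kwargs)


# Registering the type lets AutoConfig.from_pretrained resolve "toy" checkpoints.
AutoConfig.register("toy", ToyConfig)

ToyConfig(hidden_size=128).save_pretrained("./toy-checkpoint")
print(AutoConfig.from_pretrained("./toy-checkpoint"))
```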
* Add TensorFlow support for ONNX export
* Change documentation to mention conversion with TensorFlow
* Refactor export into export_pytorch and export_tensorflow
* Check model's type instead of framework installation to choose between TF and PyTorch

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Alberto Bégué <alberto.begue@della.ai>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
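A hedged sketch of the TensorFlow export path this commit adds, using the `transformers.onnx` package; the exact helper signature and the `distilbert-base-uncased` checkpoint are assumptions based on the API around v4.18 and may differ slightly between versions:

```python
from pathlib import Path

from transformers import AutoTokenizer, TFAutoModel
from transformers.models.distilbert import DistilBertOnnxConfig
from transformers.onnx import export

checkpoint = "distilbert-base-uncased"  # example checkpoint, not from this PR
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModel.from_pretrained(checkpoint)  # a TF model, so export routes to export_tensorflow

onnx_config = DistilBertOnnxConfig(model.config)
# export() inspects the model's type (TF vs. PyTorch) to pick the backend,
# matching the behaviour described in the commit message above.
onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("distilbert.onnx")
)
```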
…gface#14139) (huggingface#15175)
* Compute loss independent from decoder (as 14139)
* fix expected seq_len + style
* Apply the same change to TFVisionEncoderDecoderModel
* fix style
* Add case with labels in equivalence test
* uncomment
* Add case with labels in equivalence test
* add decoder_token_labels
* use hf_compute_loss
* Apply suggestions from code review Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Add copied from

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MBP.localdomain>
…ace#15612) * Add informative warning
* Fix TF MT5 vocab resize * more assertive testing
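The resize being fixed above is the usual add-tokens-then-resize flow; a short sketch against an assumed `google/mt5-small` TensorFlow checkpoint:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

tokenizer.add_tokens(["<new_domain_token>"])
# Grow the input/output embedding matrices to match the enlarged tokenizer.
model.resize_token_embeddings(len(tokenizer))
```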
* [deepspeed docs] round_robin_gradients
* training and/or eval/predict loss is
* Update docs/source/main_classes/deepspeed.mdx Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
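For orientation, `round_robin_gradients` is a ZeRO stage-2 option in the DeepSpeed config; a hedged illustration of where it sits (the surrounding values are examples, not recommendations from the docs change):

```python
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "zero_optimization": {
        "stage": 2,
        "round_robin_gradients": True,  # parallelize gradient copy to CPU across ranks
        "offload_optimizer": {"device": "cpu"},
    },
}
```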
* Move generate docs
* up
* Update docs/source/_toctree.yml
* correct
* correct some stuff
* correct tests
* more fixes
* finish generate
* add to doc tests
* finish
* finalize
* add warning to generate method
* Add attentions_option to common tester
* Fix tests, apply suggestion
* Apply suggestion from code review

Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
* Fix Bug in Flax-Speech-Encoder-Decoder Test * change thresholds for CPU precision
* fix Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* Build the doc in a separate folder then move it
* Allow job
* Is this it?
* Dislike comments?
* Copy instead of move
* Removing version built
* Typos
* No variable
* Take _versions.yml into account
* Finish main job and add dev job
* Forgot the run
* Fix syntax error
* Execute builder from the repo
* Typo
* MVP
* apply decorator to TFBertModel
* finish updating bert
* update rembert (copy-linked to bert)
* update roberta (copy-linked to bert); Fix args
* Now working for non-text modalities
* Fix Bug in Flax Seq2Seq Models * incorporate suggested changes
* Support for torch 1.11 * Address Sylvain's comment
* support not sharing embeddings
* update modeling
* update tokenizer
* fix conversion script
* always use self.shared
* boom boom
* begin tests
* update tests
* fix resize_decoder_token_embeddings
* address Patrick's comments
* style
* update conversion script
* fix conversion script
* fix tokenizer
* better name target vocab
* add integration test for tokenizer with two vocabs
* style
* address Patrick's comments
* add integration test for model
…gface#16045)
* Fix duplicate arguments passed to dummy inputs in ONNX export
* Fix M2M100 ONNX config
* Ensure we check PreTrained model only if torch is available
* Remove TensorFlow tests for models without PyTorch parity
KSGulin force-pushed the update-4.18.0-refactor branch from 0647141 to 8cac95f on March 10, 2022 19:51
KSGulin force-pushed the update-4.18.0-refactor branch from 2d6d005 to 7d1bb5f on March 10, 2022 20:28
KSGulin requested review from a team, natuan, markurtz and horheynm, and removed the request for a team, on March 11, 2022 12:10
natuan approved these changes on Mar 14, 2022
spacemanidol approved these changes on Mar 15, 2022
No description provided.