Fix conflicts in fuyu_follow_up_image_processing #27228
Commits on Oct 19, 2023
[Docs] Make sure important decode and generate methods are nicely displayed in Whisper docs (huggingface#26927)
better docstrings whisper
Commit: 734dd96

Fix and re-enable ConversationalPipeline tests (huggingface#26907)
* Fix and re-enable conversationalpipeline tests * Fix the batch test so the change only applies to conversational pipeline
Commit: bdbcd5d

[docstring] Fix docstrings for CodeGen (huggingface#26821)
* remove docstrings CodeGen from objects_to_ignore * autofix codegen docstrings * fill in the missing types and docstrings * fixup * change descriptions to be in a separate line * apply docstring suggestions from code review Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com> * update n_ctx description in CodeGenConfig --------- Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
Commit: ad08137

Commit: 73dc23f

Pin Keras for now (huggingface#26904)
* Pin Keras for now out of paranoia * Add the keras pin to _tests_requirements.txt too * Make sure the Keras version matches the TF one * make fixup
Commit: cbd278f

[FA-2 / Mistral] Support FA-2 + right padding + forward (huggingface#26912)
support fa-2 + right padding + forward
Commit: bc4bbd9

Commit: ae4fb84

Corrected modalities description in README_ru.md (huggingface#26913)
Update README_ru.md Corrected modalities description in README
Commit: 08a2edf

Commits on Oct 20, 2023
[docstring] Fix docstring for speech-to-text config (huggingface#26883)
* Fix docstring for speech-to-text config * Refactor doc line len <= 119 char * Remove Speech2TextConfig from OBJECTS_TO_IGNORE * Fix Speech2TextConfig doc str * Fix Speech2TextConfig doc using doc-builder * Refactor Speech2TextConfig doc
Commit: 929134b

fix set_transform link docs (huggingface#26856)
* fix set_transform link * Update docs/source/en/preprocessing.md Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * use doc-builder sintax --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Commit: 9b19766

Fix Fuyu image scaling bug (huggingface#26918)
* Fix Fuyu image scaling bug It could produce negative padding and hence inference errors for certain image sizes. * Fix aspect ratio scaling test
Commit: c030fc8

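To illustrate the failure mode the Fuyu scaling fix above addresses: when a scaled image dimension overshoots the target canvas, a naive `target - scaled` padding computation goes negative. The sketch below is only a generic guard under that assumption; the function name and values are hypothetical, and the actual fix adjusts the aspect-ratio scaling itself rather than clamping.

```python
def compute_padding(target_size: int, scaled_size: int) -> int:
    """Hypothetical helper: padding needed to grow a scaled image to the target size."""
    padding = target_size - scaled_size
    # Guard against the bug described above: a scaled size larger than the target
    # would otherwise yield negative padding and break inference downstream.
    return max(padding, 0)

print(compute_padding(1080, 1064))  # 16
print(compute_padding(1080, 1100))  # 0 instead of -20
```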
Update README_hd.md (huggingface#26872)
* Update README_hd.md - Fixed broken links I hope this small contribution adds value to this project. * Update README_hd.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: 224794b

Added Telugu [te] translations (huggingface#26828)
* Create index.md * Create _toctree.yml * Updated index.md in telugu * Update _toctree.yml * Create quicktour.md * Update quicktour.md * Create index.md * Update quicktour.md * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Delete docs/source/hi/index.md * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update build_documentation.yml Added telugu [te] * Update build_pr_documentation.yml Added Telugu [te] * Update _toctree.yml --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: 093848d

Commits on Oct 23, 2023
fix logit-to-multi-hot conversion in example (huggingface#26936)
* fix logit to multi-hot converstion * add comments * typo
Commit: f71c9cc

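The logit-to-multi-hot fix above concerns a multi-label example. As a point of reference, a minimal sketch of the usual conversion pattern; the 0.5 threshold and the tensor values are illustrative assumptions, not taken from the commit.

```python
import torch

logits = torch.tensor([[2.3, -1.1, 0.4], [-0.7, 1.9, -2.2]])
probs = torch.sigmoid(logits)        # independent per-label probabilities
multi_hot = (probs >= 0.5).long()    # 1 wherever a label is predicted
print(multi_hot)                     # tensor([[1, 0, 1], [0, 1, 0]])
```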
Commit: 7003294

python falcon doc-string example typo (huggingface#26995)
git python falcon typo
Commit: 4542566

skip two tests (huggingface#27013)
* skip two tests * skip torch as well * fixup
Commit: ef978d0

Commit: d33d313

Change default max_shard_size to smaller value (huggingface#26942)
* Update modeling_utils.py * fixup * let's change it to 5GB * fix
Commit: 50d0cf4

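For context on the commit above: `max_shard_size` controls how large each checkpoint shard may grow before `save_pretrained` splits the weights into multiple files. A short sketch of setting it explicitly; the checkpoint name and output directory are arbitrary examples.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
# Shards are capped at the given size; the commit only lowers the library default,
# so passing the argument explicitly keeps full control over shard sizes.
model.save_pretrained("./gpt2-sharded", max_shard_size="5GB")
```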
Add Seamless M4T model (huggingface#25693)
* first raw commit * still POC * tentative convert script * almost working speech encoder conversion scripts * intermediate code for encoder/decoders * add modeling code * first version of speech encoder * make style * add new adapter layer architecture * add adapter block * add first tentative config * add working speech encoder conversion * base model convert works now * make style * remove unnecessary classes * remove unecessary functions * add modeling code speech encoder * rework logics * forward pass of sub components work * add modeling codes * some config modifs and modeling code modifs * save WIP * new edits * same output speech encoder * correct attention mask * correct attention mask * fix generation * new generation logics * erase comments * make style * fix typo * add some descriptions * new state * clean imports * add tests * make style * make beam search and num_return_sequences>1 works * correct edge case issue * correct SeamlessM4TConformerSamePadLayer copied from * replace ACT2FN relu by nn.relu * remove unecessary return variable * move back a class * change name conformer_attention_mask ->conv_attention_mask * better nit code * add some Copied from statements * small nits * small nit in dict.get * rename t2u model -> conditionalgeneration * ongoing refactoring of structure * update models architecture * remove SeamlessM4TMultiModal classes * add tests * adapt tests * some non-working code for vocoder * add seamlessM4T vocoder * remove buggy line * fix some hifigan related bugs * remove hifigan specifc config * change * add WIP tokenization * add seamlessM4T working tokenzier * update tokenization * add tentative feature extractor * Update converting script * update working FE * refactor input_values -> input_features * update FE * changes in generation, tokenizer and modeling * make style and add t2u_decoder_input_ids * add intermediate outputs for ToSpeech models * add vocoder to speech models * update valueerror * update FE with languages * add vocoder convert * update config docstrings and names * update generation code and configuration * remove todos and update config.pad_token_id to generation_config.pad_token_id * move block vocoder * remove unecessary code and uniformize tospeech code * add feature extractor import * make style and fix some copies from * correct consistency + make fix-copies * add processor code * remove comments * add fast tokenizer support * correct pad_token_id in M4TModel * correct config * update tests and codes + make style * make some suggested correstion - correct comments and change naming * rename some attributes * rename some attributes * remove unecessary sequential * remove option to use dur predictor * nit * refactor hifigan * replace normalize_mean and normalize_var with do_normalize + save lang ids to generation config * add tests * change tgt_lang logic * update generation ToSpeech * add support import SeamlessM4TProcessor * fix generate * make tests * update integration tests, add option to only return text and update tokenizer fast * fix wrong function call * update import and convert script * update integration tests + update repo id * correct paths and add first test * update how new attention masks are computed * update tests * take first care of batching in vocoder code * add batching with the vocoder * add waveform lengths to model outputs * make style * add generate kwargs + forward kwargs of M4TModel * add docstrings forward methods * reformate docstrings * add docstrings t2u model * add another round of modeling 
docstrings + reformate speaker_id -> spkr_id * make style * fix check_repo * make style * add seamlessm4t to toctree * correct check_config_attributes * write config docstrings + some modifs * make style * add docstrings tokenizer * add docstrings to processor, fe and tokenizers * make style * write first version of model docs * fix FE + correct FE test * fix tokenizer + add correct integration tests * fix most tokenization tests * make style * correct most processor test * add generation tests and fix num_return_sequences > 1 * correct integration tests -still one left * make style * correct position embedding * change numbeams to 1 * refactor some modeling code and correct one test * make style * correct typo * refactor intermediate fnn * refactor feedforward conformer * make style * remove comments * make style * fix tokenizer tests * make style * correct processor tests * make style * correct S2TT integration * Apply suggestions from Sanchit code review Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com> * correct typo * replace torch.nn->nn + make style * change Output naming (waveforms -> waveform) and ordering * nit renaming and formating * remove return None when not necessary * refactor SeamlessM4TConformerFeedForward * nit typo * remove almost copied from comments * add a copied from comment and remove an unecessary dropout * remove inputs_embeds from speechencoder * remove backward compatibiliy function * reformate class docstrings for a few components * remove unecessary methods * split over 2 lines smthg hard to read * make style * replace two steps offset by one step as suggested * nice typo * move warnings * remove useless lines from processor * make generation non-standard test more robusts * remove torch.inference_mode from tests * split integration tests * enrich md * rename control_symbol_vocoder_offset->vocoder_offset * clean convert file * remove tgt_lang and src_lang from FE * change generate docstring of ToText models * update generate docstring of tospeech models * unify how to deal withtext_decoder_input_ids * add default spkr_id * unify tgt_lang for t2u_model * simplify tgt_lang verification * remove a todo * change config docstring * make style * simplify t2u_tgt_lang_id * make style * enrich/correct comments * enrich .md * correct typo in docstrings * add torchaudio dependency * update tokenizer * make style and fix copies * modify SeamlessM4TConverter with new tokenizer behaviour * make style * correct small typo docs * fix import * update docs and add requirement to tests * add convert_fairseq2_to_hf in utils/not_doctested.txt * update FE * fix imports and make style * remove torchaudio in FE test * add seamless_m4t.md to utils/not_doctested.txt * nits and change the way docstring dataset is loaded * move checkpoints from ylacombe/ to facebook/ orga * refactor warning/error to be in the 119 line width limit * round overly precised floats * add stereo audio behaviour * refactor .md and make style * enrich docs with more precised architecture description * readd undocumented models * make fix-copies * apply some suggestions * Apply suggestions from code review Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * correct bug from previous commit * refactor a parameter allowing to clean the code + some small nits * clean tokenizer * make style and fix * make style * clean tokenizers arguments * add precisions for some tests * move 
docs from not_tested to slow * modify tokenizer according to last comments * add copied from statements in tests * correct convert script * correct parameter docstring style * correct tokenization * correct multi gpus * make style * clean modeling code * make style * add copied from statements * add copied statements * add support with ASR pipeline * remove file added inadvertently * fix docstrings seamlessM4TModel * add seamlessM4TConfig to OBJECTS_TO_IGNORE due of unconventional markdown * add seamlessm4t to assisted generation ignored models --------- Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Commit: cb45f71

[NLLB-MoE] Fix NLLB MoE 4bit inference (huggingface#27012)
fix NLLB MoE 4bit
Commit: 244a53e

[SeamlessM4T] fix copies with NLLB MoE int8 (huggingface#27018)
fix copies on newly merged model
Commit: f9f27b0

small typos found (huggingface#26988)
just very small typos found
Commit: c0b5ad9

Commit: f7354a3

Translate pipeline_tutorial.md to Chinese (huggingface#26954)
* update translation of pipeline_tutorial and preprocessing(Version1.0) * update translation of pipeline_tutorial and preprocessing(Version2.0) * update translation docs * update to fix problems mentioned in review --------- Co-authored-by: jiaqiw <wangjiaqi50@huawei.com>
Commit: f09a081

Remove ambiguous padding_mask and instead use a 2D->4D Attn Mask Mapper (huggingface#26792)
* [Attn Mask Converter] refactor attn mask * up * Apply suggestions from code review Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com> * improve * rename * better cache * renaming * improve more * improve * fix bug * finalize * make style & make fix-copies * correct more * start moving attention_mask * fix llama * improve falcon * up * improve more * improve more * Update src/transformers/models/owlv2/modeling_owlv2.py * make style * make style * rename to converter * Apply suggestions from code review --------- Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
Commit: 33f98cf

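The refactor above routes attention masks through a 2D-to-4D mapper instead of a separate `padding_mask`. The sketch below is a simplified illustration of the underlying idea, not the library's actual converter: expand a `[batch, seq_len]` padding mask into the additive `[batch, 1, seq_len, seq_len]` mask that attention layers consume, combined with a causal mask.

```python
import torch

def to_4d_causal_mask(mask_2d: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
    """Turn a [batch, seq_len] padding mask (1 = keep, 0 = pad) into an additive
    [batch, 1, seq_len, seq_len] causal attention mask. Simplified illustration."""
    bsz, seq_len = mask_2d.shape
    min_value = torch.finfo(dtype).min
    # Causal half: each position may attend only to itself and earlier positions.
    causal = torch.triu(torch.full((seq_len, seq_len), min_value, dtype=dtype), diagonal=1)
    causal = causal[None, None, :, :].expand(bsz, 1, seq_len, seq_len)
    # Padding half: masked-out key positions receive the large negative value.
    padding = (1.0 - mask_2d[:, None, None, :].to(dtype)) * min_value
    return (causal + padding).clamp(min=min_value)

mask = torch.tensor([[1, 1, 1, 0]])      # last token is padding
print(to_4d_causal_mask(mask).shape)     # torch.Size([1, 1, 4, 4])
```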
🌐 [i18n-ZH] Translate multilingual into Chinese (huggingface#26935)
translate multilingual into Chinese Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: 19ae050

translate preprocessing.md to Chinese (huggingface#26955)
* translate preprocessing.md to Chinese * update files fixing problems mentioned in review * update files fixing problems mentioned in review --------- Co-authored-by: jiaqiw <wangjiaqi50@huawei.com>
Commit: b0d1d7f

Bugfix device map detr model (huggingface#26849)
* Fixed replace_batch_norm when on meta device * lint fix * Adding coauthor Co-authored-by: Pi Esposito <piero.skywalker@gmail.com> * Removed tests * Remove unused deps * Try to fix copy issue * try fix copy one more time * Reverted import changes --------- Co-authored-by: Pi Esposito <piero.skywalker@gmail.com>
Commit: f370beb

Commit: 25c022d

🌐 [i18n-ZH] Translate create_a_model.md into Chinese (huggingface#27026)
docs(zh): translate create_a_model.md
Commit: 32f799d

Commits on Oct 24, 2023
Fix key dtype in GPTJ and CodeGen (huggingface#26836)
* fix key dtype in gptj and codegen * delay the key cast to a later point * fix
Commit: ede051f

Register ModelOutput as supported torch pytree nodes (huggingface#26618)
* Register ModelOutput as supported torch pytree nodes * Test ModelOutput as supported torch pytree nodes * Update type hints for pytree unflatten functions
Commit: cc7803c

Add default_to_square_for_size to CLIPImageProcessor (huggingface#26965)
* fix * fix * fix * fix * fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Commit: fc142bd

Add descriptive docstring to WhisperTimeStampLogitsProcessor (huggingface#25642)
* adding in logit examples for Whisper processor * adding in updated logits processor for Whisper * adding in cleaned version of logits processor for Whisper * adding docstrings for whisper processor * making sure the formatting is correct * adding logits after doc builder * Update src/transformers/generation/logits_process.py Adding in suggested fix to the LogitProcessor description. Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/generation/logits_process.py Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/generation/logits_process.py Removing tip per suggestion. Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/generation/logits_process.py Removing redundant code per suggestion. Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com> * adding in revised version * adding in version with timestamp examples * Update src/transformers/generation/logits_process.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * enhanced paragraph on behavior of processor * fixing doc quality issue * removing the word poem from example * adding in updated docstring * adding in new version of file after doc-builder --------- Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Commit: 576e282

Normalize only if needed (huggingface#26049)
* Normalize only if needed * Update examples/pytorch/image-classification/run_image_classification.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * if else in one line * within block * one more place, sorry for mess * import order * Update examples/pytorch/image-classification/run_image_classification.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update examples/pytorch/image-classification/run_image_classification_no_trainer.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Commit: e2d6d5c

[TFxxxxForSequenceClassifciation] Fix the eager mode after huggingface#25085 (huggingface#25751)
* TODOS * Switch .shape -> shape_list --------- Co-authored-by: Matt <rocketknight1@gmail.com>
Commit: 7bde5d6

Safe import of rgb_to_id from FE modules (huggingface#27037)
Safe import from FE modules
Commit: cb0c680

add info on TRL docs (huggingface#27024)
* add info on TRL docs * add TRL link * tweak text * tweak text
Commit: b18e314

Add fuyu device map (huggingface#26949)
* add _no_split_modules * style * fix _no_split_modules * add doc
Commit: 41496b9

Device agnostic testing (huggingface#25870)
* adds agnostic decorators and availability fns * renaming decorators and fixing imports * updating some representative example tests bloom, opt, and reformer for now * wip device agnostic functions * lru cache to device checking functions * adds `TRANSFORMERS_TEST_DEVICE_SPEC` if present, imports the target file and updates device to function mappings * comments `TRANSFORMERS_TEST_DEVICE_SPEC` code * extra checks on device name * `make style; make quality` * updates default functions for agnostic calls * applies suggestions from review * adds `is_torch_available` guard * Add spec file to docs, rename function dispatch names to backend_* * add backend import to docs example for spec file * change instances of to * Move register backend to before device check as per @statelesshz changes * make style * make opt test require fp16 to run --------- Co-authored-by: arsalanu <arsalanu@graphcore.ai> Co-authored-by: arsalanu <hzji210@gmail.com>
Commit: 9da4517

Fix config silent copy in from_pretrained (huggingface#27043)
* Fix config modeling utils * fix more * fix attn mask bug * Update src/transformers/modeling_utils.py
Commit: 13ef14e

[docs] Performance docs refactor p.2 (huggingface#26791)
* initial edits * improvements for clarity and flow * improvements for clarity and flow, removed the repetead section * removed two docs that had no content * Revert "removed two docs that had no content" This reverts commit e98fa2f. * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * feedback addressed * more feedback addressed * feedback addressed --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: 9333bf0

Add a default decoder_attention_mask for EncoderDecoderModel during training (huggingface#26752)
* Add a default decoder_attention_mask for EncoderDecoderModel during training Since we are already creating the default decoder_input_ids from the labels, we should also create a default decoder_attention_mask to go with it. * Fix test constant that relied on manual_seed() The test was changed to use a decoder_attention_mask that ignores padding instead (which is the default one created by BERT when attention_mask is None). * Create the decoder_attention_mask using decoder_input_ids instead of labels * Fix formatting in test
Commit: a0fd344

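The EncoderDecoderModel commit above derives a default `decoder_attention_mask` from `decoder_input_ids` so padding is ignored when the user does not pass a mask. A minimal sketch of that derivation; the `pad_token_id` and the ids are illustrative values, not taken from the PR.

```python
import torch

pad_token_id = 0
decoder_input_ids = torch.tensor([[101, 7, 42, 0, 0]])
# Attend to every real token, ignore the padding positions.
decoder_attention_mask = (decoder_input_ids != pad_token_id).long()
print(decoder_attention_mask)  # tensor([[1, 1, 1, 0, 0]])
```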
Fix RoPE config validation for FalconConfig + various config typos (huggingface#26929)
* Resolve incorrect ValueError in RoPE config for Falcon * Add broken codeblock tag in Falcon Config * Fix typo: an float -> a float * Implement copy functionality for Fuyu and Persimmon for RoPE scaling validation * Make style
Commit: 6cbc136

Commits on Oct 25, 2023
Commit: 9286f0a

[core] Refactor of gradient_checkpointing (huggingface#27020)
* v1 * fix * remove `create_custom_forward` * fixup * fixup * add test and fix all failing GC tests * remove all remaining `create_custom_forward` methods * fix idefics bug * fixup * replace with `__call__` * add comment * quality
Commit: 06e782d

Fix TypicalLogitsWarper tensor OOB indexing edge case (huggingface#26579)
Commit: 0baa924

[docstring] fix incorrect llama docstring: encoder -> decoder (huggingface#27071)
fix incorrect docstring: encoder -> decoder
Commit: a64f8c1

Commit: ba073ea

[docs] Add MaskGenerationPipeline in docs (huggingface#27063)
* add `MaskGenerationPipeline` in docs * Update __init__.py * fix repo consistency and clarify docstring * add on check docstirngs * actually we do have a tf sam * oops
Commit: c34c50c

🌐 [i18n-ZH] Translate custom_models.md into Chinese (huggingface#27065)
* docs(zh): translate custom_models.md * minor fix in customer_models Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: ba5144f

Hindi translation of pipeline_tutorial.md (huggingface#26837)
* hindi translation of pipeline_tutorial.md * Update pipeline_tutorial.md * Update build_documentation.yml * Update build_pr_documentation.yml * Updated build_documentation.yml --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: a2f55a6

Commits on Oct 26, 2023
Handle unsharded Llama2 model types in conversion script (huggingface#27069)
Handle all unshared models types
Commit: df2eebf

Bump werkzeug from 2.2.3 to 3.0.1 in /examples/research_projects/decision_transformer (huggingface#27072)
Bump werkzeug in /examples/research_projects/decision_transformer Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.2.3 to 3.0.1. - [Release notes](https://github.com/pallets/werkzeug/releases) - [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst) - [Commits](pallets/werkzeug@2.2.3...3.0.1) --- updated-dependencies: - dependency-name: werkzeug dependency-type: direct:production ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Commit: 9c5240a

Bump urllib3 from 1.26.17 to 1.26.18 in /examples/research_projects/lxmert (huggingface#26888)
Bump urllib3 in /examples/research_projects/lxmert Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18. - [Release notes](https://github.com/urllib3/urllib3/releases) - [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst) - [Commits](urllib3/urllib3@1.26.17...1.26.18) --- updated-dependencies: - dependency-name: urllib3 dependency-type: direct:production ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Commit: 3c26924

Bring back set_epoch for Accelerate-based dataloaders (huggingface#26850)
* Working tests! * Fix sampler * Fix * Update src/transformers/trainer.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Fix check * Clean --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Commit: 9041240

Bump flash_attn version to 2.1 (huggingface#27079)
* pin FA-2 to `2.1` * fix on modeling
Commit: efba1a1

Commit: fe2877c

Commit: 15cd096

Add-support for commit description (huggingface#26704)
* fix * update * revert * add dosctring * good to go * update * add a test
Commit: 4864d08

[Llama FA2] Re-add _expand_attention_mask and clean a couple things (huggingface#27074)
* clean * clean llama * fix more * make style * Apply suggestions from code review * Apply suggestions from code review * Update src/transformers/models/llama/modeling_llama.py * Update src/transformers/models/llama/modeling_llama.py * Apply suggestions from code review * finish * make style
Commit: d7cb5e1

add exllamav2 arg (huggingface#26437)
* add_ xllamav2 arg * add test * style * add check * add doc * replace by use_exllama_v2 * fix tests * fix doc * style * better condition * fix logic * add deprecate msg
Commit: 8214d6e

Correct docstrings and a typo in comments (huggingface#27047)
* docs(training_args): correct docstrings Correct docstrings of these methods in `TrainingArguments`: - `set_save` - `set_logging` * docs(training_args): adjust words in docstrings Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * docs(trainer): correct a typo in comments --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: 1892592

Save TB logs as part of push_to_hub (huggingface#27022)
* Support runs/ * Upload runs folder as part of push to hub * Add a test * Add to test deps * Update with proposed solution from Slack * Ensure that repo gets deleted in tests
Commit: 34a6406

Added huggingface emoji instead of the markdown format (huggingface#27091)
Added huggingface emoji instead of the markdown format as it was not displaying the required emoji in that format
Commit: 6f31601

Commits on Oct 27, 2023
[T5Tokenizer] Fix fast and extra tokens (huggingface#27085)
* v4.35.dev.0 * nit t5fast match t5 slow
Commit: aa4198a

Revert "add exllamav2 arg" (huggingface#27102)
Revert "add exllamav2 arg (huggingface#26437)" This reverts commit 8214d6e.
Commit: 90ee9ce

Add early stopping for Bark generation via logits processor (huggingface#26675)
* add early stopping logits processor * black formmated * indent * follow method signature * actual logic * check for None * address comments on docstrings and method signature * add unit test under `LogitsProcessorTest` wip * unit test passing * black formatted * condition per sample * add to BarkModelIntegrationTests * wip BarkSemanticModelTest * rename and add to kwargs handling * not add to BarkSemanticModelTest * correct logic and assert last outputs tokens different in test * doc-builder style * read from kwargs as well * assert len of with less than that of without * ruff * add back seed and test case * add original impl default suggestion * doc-builder * rename and use softmax * switch back to LogitsProcessor and update docs wording * camelCase and spelling and saving compute * assert strictly less than * assert less than * expand test_generate_semantic_early_stop instead
Commit: e2bffcf

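The Bark commit above adds early stopping through a custom logits processor. The actual processor lives in the Bark generation code; the sketch below is only a generic illustration of the mechanism, and the class name, constructor, and threshold logic are assumptions: once the EOS token's probability crosses a threshold, every other token is masked out so generation stops.

```python
import torch
from transformers import LogitsProcessor

class EosThresholdLogitsProcessor(LogitsProcessor):
    """Illustrative early-stopping processor (not the Bark implementation)."""

    def __init__(self, eos_token_id: int, min_eos_p: float):
        self.eos_token_id = eos_token_id
        self.min_eos_p = min_eos_p

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        probs = torch.softmax(scores, dim=-1)
        # Per sample: if EOS is already likely enough, force it to be chosen.
        should_stop = probs[:, self.eos_token_id] >= self.min_eos_p
        scores[should_stop, :] = -float("inf")
        scores[should_stop, self.eos_token_id] = 0.0
        return scores
```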
Commit: 66b088f

Fix no split modules underlying modules (huggingface#27090)
* fix no split * style * remove comm * Update src/transformers/modeling_utils.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * rename modules --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Commit: 5be1fb6

[core / gradient_checkpointing] Refactor GC - part 2 (huggingface#27073)
* fix * more fixes * fix other models * fix long t5 * use `gradient_checkpointing_func` instead * fix copies * set `gradient_checkpointing_func` as a private attribute and retrieve previous behaviour * Update src/transformers/modeling_utils.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * replace it with `is_gradient_checkpointing_set` * remove default * Update src/transformers/modeling_utils.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * fixup --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Commit: ffff9e7

fix detr device map (huggingface#27089)
* fix detr device map * add comments
Commit: 29c74f5

[Attention Mask] Refactor all encoder-decoder attention mask (huggingface#27086)
* [FA2 Bart] Add FA2 to all Bart-like * better * Refactor attention mask * remove all customized atteniton logic * format * mass rename * replace _expand_mask * replace _expand_mask * mass rename * add pt files * mass replace & rename * mass replace & rename * mass replace & rename * mass replace & rename * Update src/transformers/models/idefics/modeling_idefics.py * fix more * clean more * fix more * make style * fix again * finish * finish * finish * finish * finish * finish * finish * finish * finish * finish * Apply suggestions from code review * Apply suggestions from code review Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * small fix mistral * finish * finish * finish * finish --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Commit: ac58937

Added Telugu [te] translation for README.md in main (huggingface#27077)
* Create index.md * Create _toctree.yml * Updated index.md in telugu * Update _toctree.yml * Create quicktour.md * Update quicktour.md * Create index.md * Update quicktour.md * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Delete docs/source/hi/index.md * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/te/quicktour.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update build_documentation.yml Added telugu [te] * Update build_pr_documentation.yml Added Telugu [te] * Update _toctree.yml * Create README_te.md Telugu translation for README.md * Update README_te.md Added Telugu translation for Readme.md * Update README_te.md * Update README_te.md * Update README_te.md * Update README_te.md * Update README.md * Update README_es.md * Update README_es.md * Update README_hd.md * Update README_ja.md * Update README_ko.md * Update README_pt-br.md * Update README_ru.md * Update README_zh-hans.md * Update README_zh-hant.md * Update README_te.md --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: 96f9e78

translate transformers_agents.md to Chinese (huggingface#27046)
* update translation * fix problems mentioned in reviews
Commit: ef23b68

Fix docstring and type hint for resize (huggingface#27104)
fix docstring and type hint for resize
Commit: 9e87618

Commits on Oct 29, 2023
[Typo fix] flag config in WANDB (huggingface#27130)
typo fix flag config
Commit: 722e936

Commits on Oct 30, 2023
Fix slack report failing for doctest (huggingface#27042)
* fix slack report for doctest * separate reports * style --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Commit: 211ad4c

[FA2 / Mistral] Revert previous behavior with right padding + forward (huggingface#27125)
Update modeling_mistral.py
Commit: 1604321

Fix data2vec-audio note about attention mask (huggingface#27116)
fix data2vec audio note about attention mask
Commit: e830495

[Trainer / GC] Add gradient_checkpointing_kwargs in trainer and training arguments (huggingface#27068)
* add `gradient_checkpointing_kwargs` in trainer and training arguments * add comment * add test - currently failing * now tests pass
Commit: 5fbed2d

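Usage-wise, the new argument added above lets keyword arguments reach the checkpointing call through the model's `gradient_checkpointing_enable()`. A short sketch; the `use_reentrant` choice is just an example value, not a recommendation.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,
    # Forwarded to the gradient-checkpointing call; pick values to match your setup.
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
```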
remove the obsolete code related to fairscale FSDP (huggingface#26651)
* remove the obsolete code related to fairscale FSDP * apple review suggestion
Commit: d751dbe

Add Kosmos-2
model (huggingface#24709)* Add KOSMOS-2 model * update * update * update * address review comment - 001 * address review comment - 002 * address review comment - 003 * style * Apply suggestions from code review Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * fix * address review comment - 004 * address review comment - 005 * address review comment - 006 * address review comment - 007 * address review comment - 008 * address review comment - 009 * address review comment - 010 * address review comment - 011 * update readme * fix * fix * fix * [skip ci] fix * revert the change in _decode * fix docstring * fix docstring * Update docs/source/en/model_doc/kosmos-2.md Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> * no more Kosmos2Tokenizer * style * remove "returned when being computed by the model" * Apply suggestions from code review Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * UTM5 Atten * fix attn mask * use present_key_value_states instead of next_decoder_cache * style * conversion scripts * conversion scripts * conversion scripts * Add _reorder_cache * fix doctest and copies * rename 1 * rename 2 * rename 3 * make fixup * fix table * fix docstring * rename 4 * change repo_id * remove tip * update md file * make style * update md file * put docs/source/en/model_doc/kosmos-2.md to slow * update conversion script * Use CLIPImageProcessor in Kosmos2Processor * Remove Kosmos2ImageProcessor * Remove to_dict in Kosmos2Config * Remove files * fix import * Update conversion * normalized=False * Not using hardcoded values like <image> * elt --> element * Apply suggestion * Not using hardcoded values like </image> * No assert * No nested functions * Fix md file * copy * update doc * fix docstring * fix name * Remove _add_remove_spaces_around_tag_tokens * Remove dummy docstring of _preprocess_single_example * Use `BatchEncoding` * temp * temp * temp * Update * Update * Make Kosmos2ProcessorTest a bit pretty * Update gradient checkpointing * Fix gradient checkpointing test * Remove one liner remove_special_fields * Simplify conversion script * fix add_eos_token * update readme * update tests * Change to microsoft/kosmos-2-patch14-224 * style * Fix doc --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com> Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Commit: 691fd8f

Fix some tests using "common_voice" (huggingface#27147)
* Use mozilla-foundation/common_voice_11_0 * Update expected values * Update expected values * For test_word_time_stamp_integration --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Commit: 5769949

[tests / Quantization] Fix bnb test (huggingface#27145)
* fix bnb test * link to GH issue
Commit: 6b46677

Commit: cd19b19

Remove some Kosmos-2 copied from (huggingface#27149)
* fix * fix * fix * fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Commit: 3224c0c

🌐 [i18n-ZH] Translate serialization.md into Chinese (huggingface#27076)
* docs(zh): translate serialization.md * docs(zh): add space around links
Commit: 9093b19

Translating en/main_classes
folder docs to Japanese 🇯🇵 (huggingface……#26894) * add * add * add * Add deepspeed.md * Add * add * Update docs/source/ja/main_classes/callback.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/ja/main_classes/output.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/ja/main_classes/pipelines.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/ja/main_classes/processors.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/ja/main_classes/processors.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/ja/main_classes/text_generation.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/ja/main_classes/processors.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update logging.md * Update toctree.yml * Update docs/source/ja/main_classes/deepspeed.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Add suggesitons * m * Update docs/source/ja/main_classes/trainer.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update toctree.yml * Update Quantization.md * Update docs/source/ja/_toctree.yml Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update toctree.yml * Update docs/source/en/main_classes/deepspeed.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/en/main_classes/deepspeed.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: 84724ef

Commit: 5bbf671

[core / GC / tests] Stronger GC tests (huggingface#27124)
* stronger GC tests * better tests and skip failing tests * break down into 3 sub-tests * break down into 3 sub-tests * refactor a bit * more refactor * fix * last nit * credits contrib and suggestions * credits contrib and suggestions --------- Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com> Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Commit: f7ea959

Commit: e971486

Fix import of torch.utils.checkpoint (huggingface#27155)
* Fix import * Apply suggestions from code review Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com> --------- Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Commit: d39352d

Commit: 8211c59

Commits on Oct 31, 2023
deprecate function get_default_device in tools/base.py (huggingface#26774)
* get default device through `PartialState().default_device` as is has been officially released * apply code review suggestion * apply code review suggestion Co-authored-by: Zach Mueller <muellerzr@gmail.com> --------- Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Commit: df6f36a

Commit: b5c8e23

[docstring] Fix docstring for AltCLIPTextConfig, AltCLIPVisionConfig and AltCLIPConfig (huggingface#27128)
* [docstring] Fix docstring for AltCLIPVisionConfig, AltCLIPTextConfig + cleaned some docstring * Removed entries from check_docstring.py * Removed entries from check_docstring.py * Removed entry from check_docstring.py * [docstring] Fix docstring for AltCLIPTextConfig, AltCLIPVisionConfig and AltCLIPConfig
Commit: 9234cae

[docstring] Fix docstring for BlipTextConfig, BlipVisionConfig (huggingface#27173)
Update configuration_blip.py edit docstrings
Commit: 14bb196

Disable CI runner check (huggingface#27170)
Disable runner check Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Commit: 9dc4ce9

Add flash attention for gpt_bigcode (huggingface#26479)
* added flash attention of gpt_bigcode * changed docs * Update src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py * add FA-2 docs * oops * Update docs/source/en/perf_infer_gpu_one.md Last Nit Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * fix * oops * remove padding_mask * change getattr->hasattr logic * changed .md file --------- Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com> Co-authored-by: younesbelkada <younesbelkada@gmail.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Commit: b5db8ca

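For reference, enabling FA-2 on a GPT-BigCode checkpoint at load time looked roughly like the sketch below around the time of this commit; the `use_flash_attention_2` flag was the interface then (newer releases use `attn_implementation="flash_attention_2"`), the checkpoint name is just an example, and a CUDA GPU plus the flash-attn package are assumed.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoderbase-1b",    # example checkpoint
    torch_dtype=torch.bfloat16,    # FA-2 expects fp16/bf16 weights
    use_flash_attention_2=True,
    device_map="auto",
)
```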
fix: Fix typical_p behaviour broken in recent change (huggingface#27165)
A recent PR huggingface#26579 fixed an edge case out-of-bounds tensor indexing error in TypicalLogitsWarper, and a related behaviour change was made that we thought fixed a long-standing bug w.r.t. the token inclusion cutoff. However after looking more closely, I am pretty certain that the original logic was correct and that the OOB fix should have been made differently. Specifically the docs state that it should include the "smallest set of tokens that add up to P or higher" and so `last_ind` should actually be one more than the index of the last token satisfying (cumulative_probs < self.mass). We still need a max clamp in case that last token is the very last one in the tensor.
Commit: 3cd3eaf

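A small numeric sketch of the cutoff rule described above (illustrative values, not the library's exact code): keep one more token than those whose cumulative probability is still below the mass, clamped so the index never runs past the end of the tensor.

```python
import torch

mass = 0.9
sorted_probs = torch.tensor([0.5, 0.3, 0.15, 0.05])       # already typicality-sorted
cumulative_probs = sorted_probs.cumsum(dim=-1)             # [0.50, 0.80, 0.95, 1.00]
last_ind = (cumulative_probs < mass).sum(dim=-1)           # 2 tokens stay below the mass
last_ind = last_ind.clamp(max=sorted_probs.shape[-1] - 1)  # never index past the end
kept = sorted_probs[: int(last_ind) + 1]                   # smallest set summing to >= 0.9
print(int(last_ind), kept)                                 # 2 tensor([0.5000, 0.3000, 0.1500])
```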
Add support for loading GPTQ models on CPU (huggingface#26719)
* Add support for loading GPTQ models on CPU Right now, we can only load the GPTQ Quantized model on the CUDA device. The attribute `gptq_supports_cpu` checks if the current auto_gptq version is the one which has the cpu support for the model or not. The larger variants of the model are hard to load/run/trace on the GPU and that's the rationale behind adding this attribute. Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com> * Update quantization.md * Update quantization.md * Update quantization.md
Commit: 2963e19

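A hedged sketch of what loading a GPTQ checkpoint onto the CPU looks like with the change above; the model id is an arbitrary public GPTQ checkpoint, and an auto-gptq build with CPU support (the condition the `gptq_supports_cpu` attribute checks) is assumed.

```python
from transformers import AutoModelForCausalLM

# Loading onto the CPU; expect this to be slow for larger variants.
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GPTQ",
    device_map="cpu",
)
```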
Trigger CI if tiny_model_summary.json is modified (huggingface#27175)
fix Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Commit: a8e74eb

Shorten the conversation tests for speed + fixing position overflows (huggingface#26960)
* Shorten the conversation tests for speed + fixing position overflows * Put max_new_tokens back to 5 * Remove test skips * Increase max_position_embeddings in blenderbot tests * Add skips for blenderbot_small * Correct TF test skip * make fixup * Reformat skips to use is_pipeline_test_to_skip * Update tests/models/blenderbot_small/test_modeling_blenderbot_small.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update tests/models/blenderbot_small/test_modeling_flax_blenderbot_small.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update tests/models/blenderbot_small/test_modeling_tf_blenderbot_small.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Commit: 08fadc8
-
device agnostic pipelines testing (huggingface#27129)
* device agnostic pipelines testing * pass torch_device
Commit: f53041a
-
[FEAT] Add Neftune into transformers Trainer (huggingface#27141)
* add v1 neftune * use `unwrap_model` instead * add test + docs * Apply suggestions from code review Co-authored-by: Zach Mueller <muellerzr@gmail.com> * more details * fixup * Update docs/source/en/main_classes/trainer.md Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * refactor a bit * more elaborated test * fix unwrap issue --------- Co-authored-by: Zach Mueller <muellerzr@gmail.com> Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
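A short usage sketch, assuming the usual Trainer setup (the noise scale 5.0 and the output directory are placeholders):

```python
from transformers import Trainer, TrainingArguments

# NEFTune adds uniform noise, scaled by alpha / sqrt(seq_len * hidden_size),
# to the input embeddings during training only; it is disabled at eval time.
args = TrainingArguments(
    output_dir="out",
    neftune_noise_alpha=5.0,  # the new knob; None (the default) keeps NEFTune off
)

# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```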
Commit: 309a906
-
Backward compatibility fix for the Conversation class (huggingface#27176)
Commit: 05f2290
-
[Quantization / tests] Fix bnb MPT test (huggingface#27178)
fix bnb mpt test
Commit: 4bb50aa
-
Fix dropout in StarCoder (huggingface#27182)
fix dropout in modeling_gpt_bigcode.py
Commit: e22b7ce
-
Translate training.md to Chinese (huggingface#27122)
* translate training.md * update _toctree.yml * update _toctree.yml * update _toctree.yml
Commit: 6b7f8ff
-
[docs] Update CPU/GPU inference docs (huggingface#26881)
* first draft * remove non-existent paths * edits * feedback * feedback and optimum * Apply suggestions from code review Co-authored-by: regisss <15324346+regisss@users.noreply.github.com> Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com> * redirect to correct doc * _redirects.yml --------- Co-authored-by: regisss <15324346+regisss@users.noreply.github.com> Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Commit: 77930f8
-
device agnostic models testing (huggingface#27146)
* device agnostic models testing * add decorator `require_torch_fp16` * make style * apply review suggestion * Oops, the fp16 decorator was misused
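For example, a test guarded by the new decorator might look roughly like this (the import path transformers.testing_utils for the decorator and the torch_device helper is an assumption):

```python
import unittest

import torch

# Assumed import location for the decorator and the device helper.
from transformers.testing_utils import require_torch_fp16, torch_device


class Fp16InferenceTest(unittest.TestCase):
    @require_torch_fp16  # skipped on devices without usable fp16 support
    def test_tensor_in_fp16(self):
        x = torch.ones(2, 2, dtype=torch.float16, device=torch_device)
        self.assertEqual(x.dtype, torch.float16)
```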
Commit: 50378cb
-
Commit: 25e6e94
-
Safetensors serialization by default (huggingface#27064)
* Safetensors serialization by default * First pass on the tests * Second pass on the tests * Third pass on the tests * Fix TF weight loading from TF-format safetensors * Specific encoder-decoder fixes for weight crossloading * Add VisionEncoderDecoder fixes for TF too * Change filename test for pt-to-tf * One missing fix for TFVisionEncoderDecoder * Fix the other crossload test * Support for flax + updated tests * Apply suggestions from code review Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com> * Sanchit's comments * Sanchit's comments 2 * Nico's comments * Fix tests * cleanup * Apply suggestions from code review Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> --------- Co-authored-by: Matt <rocketknight1@gmail.com> Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com> Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
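A quick sketch of the behaviour change from the user's side (the model name is only an example):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# After this change, save_pretrained writes safetensors by default.
model.save_pretrained("my-model")                                 # -> model.safetensors
model.save_pretrained("my-model-bin", safe_serialization=False)   # -> pytorch_model.bin (opt out)
```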
Commit: 113ebf8
-
🌐 [i18n-ZH] Translate tflite.md into Chinese (huggingface#27134)
* docs(zh): translate tflite.md * docs(zh): add space around links * Update docs/source/zh/tflite.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Commit: 7d8ff36
Commits on Nov 1, 2023
-
device agnostic fsdp testing (huggingface#27120)
* make fsdp test cases device agnostic * make style
Commit: 82c7e87
-
[core / Quantization] AWQ integration (huggingface#27045)
* working v1 * oops * Update src/transformers/modeling_utils.py Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com> * fixup * oops * push * more changes * add docs * some fixes * fix copies * add v1 doc * added installation guide * relax constraints * revert * attempt llm-awq * oops * oops * fixup * raise error when incorrect cuda compute capability * nit * add instructions for llm-awq * fixup * fix copies * fixup and docs * change * few changes + add demo * add v1 tests * add autoawq in dockerfile * finalize * Update tests/quantization/autoawq/test_awq.py * fix test * fix * fix issue * Update src/transformers/integrations/awq.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update docs/source/en/main_classes/quantization.md Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update docs/source/en/main_classes/quantization.md Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update src/transformers/integrations/awq.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update src/transformers/integrations/awq.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * add link to example script * Update docs/source/en/main_classes/quantization.md Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * add more content * add more details * add link to quantization docs * camel case + change backend class name * change to string * fixup * raise errors if libs not installed * change to `bits` and `group_size` * nit * nit * Apply suggestions from code review Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com> * disable training * address some comments and fix nits * fix * final nits and fix tests * adapt to our new runners * make fix-copies * Update src/transformers/utils/quantization_config.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update src/transformers/utils/quantization_config.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update src/transformers/integrations/awq.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update src/transformers/integrations/awq.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * move to top * add conversion test * final nit * add more elaborated test --------- Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
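Loading a pre-quantized AWQ checkpoint then looks roughly like this (the checkpoint name is illustrative; autoawq and a CUDA GPU are required):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mistral-7B-Instruct-v0.1-AWQ"  # illustrative AWQ checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The checkpoint's quantization_config (quant_method="awq") routes loading through
# the new AWQ integration; training on AWQ weights is not supported.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```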
Commit: ae093ee
-
Fix docstring get maskformer resize output image size (huggingface#27196)
Commit: 7102552
-
Fix the typos and grammar mistakes in CONTRIBUTING.md. (huggingface#27193)
Fix the typos and grammar mistakes in CONTRIBUTING.md
Commit: 636f704
-
Commit: f3c1a17
-
added unsqueeze_dim to apply_rotary_pos_emb (huggingface#27117)
* added unsqueeze_dim to apply_rotary_pos_emb * Added docstring * Modified docstring * Modified docstring * Modified docstring * Modified docstring * Modified docstring * ran make fix-copies and make fixup * Update src/transformers/models/llama/modeling_llama.py Accepting the proposed changes in formatting. Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * incorporating PR suggestions * incorporating PR suggestions * incorporating PR suggestions * incorporating PR suggestions * .. --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
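A simplified sketch of what the new unsqueeze_dim argument does (position-id gathering is omitted; this mirrors the Llama-style helper rather than reproducing it exactly):

```python
import torch


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_rotary_pos_emb(q, k, cos, sin, unsqueeze_dim: int = 1):
    # cos/sin: [batch, seq_len, head_dim]. With q/k shaped
    # [batch, num_heads, seq_len, head_dim], unsqueeze_dim=1 broadcasts the rotation
    # over the head axis; unsqueeze_dim=2 suits a [batch, seq_len, num_heads, head_dim]
    # layout instead.
    cos = cos.unsqueeze(unsqueeze_dim)
    sin = sin.unsqueeze(unsqueeze_dim)
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
```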
Commit: 037fb7d
-
Added cache_block_outputs option to enable GPTQ for non-regular models (huggingface#27032)
* Added cache_block_outputs option to enable GPTQ for non-regular models * Update src/transformers/utils/quantization_config.py Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com> * Update src/transformers/utils/quantization_config.py Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com> * Fixed style * Update src/transformers/utils/quantization_config.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> --------- Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com> Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
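A hedged example of quantizing with the new flag (model, calibration dataset and bit-width are placeholders; optimum, auto-gptq and a GPU are assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # illustrative model to quantize
tokenizer = AutoTokenizer.from_pretrained(model_id)

# cache_block_outputs=False skips caching each block's outputs between quantization
# steps, which is needed for architectures whose blocks cannot be quantized from
# cached intermediates.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer, cache_block_outputs=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", quantization_config=gptq_config
)
```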
Commit: f9b4bea
-
[WhisperForCausalLM] Add WhisperForCausalLM for speculative decoding (huggingface#27195)
* finish * add tests * fix all tests * [Assistant Decoding] Add test * fix more * better * finish * Apply suggestions from code review Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * finish --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
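Roughly, speculative decoding with the new class can be wired up like this (the checkpoint names are examples; the distilled model acts as the draft/assistant):

```python
from transformers import (
    AutoProcessor,
    WhisperForCausalLM,
    WhisperForConditionalGeneration,
)

processor = AutoProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

# The decoder-only draft model proposes tokens that the full model then verifies.
assistant = WhisperForCausalLM.from_pretrained("distil-whisper/distil-large-v2")

# audio: a 1-D float array sampled at 16 kHz
# inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
# ids = model.generate(inputs.input_features, assistant_model=assistant)
# print(processor.batch_decode(ids, skip_special_tokens=True))
```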
Commit: 391d14e
-
Add TensorFlow implementation of ConvNeXTv2 (huggingface#25558)
* Add type annotations to TFConvNextDropPath * Use tf.debugging.assert_equal for TFConvNextEmbeddings shape check * Add TensorFlow implementation of ConvNeXTV2 * check_docstrings: add TFConvNextV2Model to exclusions TFConvNextV2Model and TFConvNextV2ForImageClassification have docstrings which are equivalent to their PyTorch cousins, but a parsing issue prevents them from passing the test. Adding exclusions for these two classes as discussed in huggingface#25558.
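A minimal TensorFlow usage sketch (the checkpoint name is illustrative; from_pt=True cross-loads PyTorch weights in case no native TF weights are hosted for that repo):

```python
import tensorflow as tf
from transformers import AutoImageProcessor, TFConvNextV2ForImageClassification

ckpt = "facebook/convnextv2-tiny-1k-224"  # illustrative checkpoint
processor = AutoImageProcessor.from_pretrained(ckpt)
model = TFConvNextV2ForImageClassification.from_pretrained(ckpt, from_pt=True)

# image: a PIL.Image or numpy array
# inputs = processor(images=image, return_tensors="tf")
# logits = model(**inputs).logits
# print(model.config.id2label[int(tf.argmax(logits, axis=-1)[0])])
```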
Commit: f8afb2b
-
Commit: 21a2fba
-
improving TimmBackbone to support FrozenBatchNorm2d (huggingface#27160)
* supporting freeze_batch_norm_2d * supporting freeze_batch_norm_2d * including unfreeze + separate into methods * fix typo * calling unfreeze * lint * Update src/transformers/models/timm_backbone/modeling_timm_backbone.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> --------- Co-authored-by: Rafael Padilla <rafael.padilla@huggingface.co> Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
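As an illustrative sketch of what "freezing" a BatchNorm2d means here (this is not the timm/TimmBackbone implementation, just the underlying idea: fixed affine parameters and running statistics, with no updates during training):

```python
import torch
import torch.nn as nn


class FrozenBatchNorm2d(nn.Module):
    """BatchNorm2d with fixed affine parameters and running statistics (sketch)."""

    def __init__(self, num_features: int, eps: float = 1e-5):
        super().__init__()
        # Buffers, not Parameters: they are never updated by the optimizer.
        self.register_buffer("weight", torch.ones(num_features))
        self.register_buffer("bias", torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = (x - mean) / sqrt(var + eps) * weight + bias, folded into scale/shift.
        scale = self.weight * (self.running_var + self.eps).rsqrt()
        shift = self.bias - self.running_mean * scale
        return x * scale[None, :, None, None] + shift[None, :, None, None]
```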
Commit: 1e32b05
-
Translate task summary to chinese (huggingface#27180)
* translate task_summary.md to chinese * update translation * update translation * fix _toctree.yml
Commit: 239cd0e
-
Add exllamav2 better (huggingface#27111)
* add_ xllamav2 arg * add test * style * add check * add doc * replace by use_exllama_v2 * fix tests * fix doc * style * better condition * fix logic * add deprecate msg * deprecate exllama * remove disable_exllama from the linter * remove * fix warning * Revert the commits deprecating exllama * deprecate disable_exllama for use_exllama * fix * fix loading attribute * better handling of args * remove disable_exllama from init and linter * Apply suggestions from code review Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * better arg * fix warning * Apply suggestions from code review Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * switch to dict * Apply suggestions from code review Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * style * nits * style * better tests * style --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
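Enabling the exllamav2 kernels then looks roughly like this (the checkpoint is illustrative; a CUDA GPU with the exllamav2 kernels available is required):

```python
from transformers import AutoModelForCausalLM, GPTQConfig

# exllama_config={"version": 2} selects the exllamav2 kernels;
# use_exllama replaces the now-deprecated disable_exllama flag.
gptq_config = GPTQConfig(bits=4, use_exllama=True, exllama_config={"version": 2})

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GPTQ",  # illustrative pre-quantized checkpoint
    device_map="auto",
    quantization_config=gptq_config,
)
```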
Commit: c9e72f5
-
Fix CPU offload + disk offload tests (huggingface#27204)
Fix disk offload tests + weight sharing issues
Commit: 95020f2
-
Enable split_batches through TrainingArguments (huggingface#26798)
* Enable split_batches through TrainingArguments * Extra dispatch_batches * Keep as default false * Add to docstring * Add to docstring * Remove the capturewarnings change * Comma
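A small sketch of the new argument (output_dir and the batch size are placeholders):

```python
from transformers import TrainingArguments

# With split_batches=True, Accelerate splits each batch yielded by the dataloader
# across the available devices instead of giving every device its own full batch,
# so the effective global batch size no longer scales with the number of processes.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=32,
    split_batches=True,
)
```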
Commit: 3520e37
-
[Whisper, Bart, MBart] Add Flash Attention 2 (huggingface#27203)
* add whisper fa2 * correct * change all * correct * correct * fix more * fix more * fix more * fix more * fix more * fix more * Apply suggestions from code review Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * fix more * fix more * fix more * fix more * fix more --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Commit: af3de8d
Commits on Nov 2, 2023
-
Merge remote-tracking branch 'origin/main' into fuyu_follow_up_image_processing_conflicts
Commit: 060e545