Fix conflicts in fuyu_follow_up_image_processing #27228

Merged
Changes from all commits · 133 commits
734dd96
[Docs] Make sure important decode and generate method are nicely disp…
patrickvonplaten Oct 19, 2023
bdbcd5d
Fix and re-enable ConversationalPipeline tests (#26907)
Rocketknight1 Oct 19, 2023
ad08137
[docstring] Fix docstrings for `CodeGen` (#26821)
daniilgaltsev Oct 19, 2023
73dc23f
Fix license (#26931)
MedAymenF Oct 19, 2023
cbd278f
Pin Keras for now (#26904)
Rocketknight1 Oct 19, 2023
bc4bbd9
[`FA-2` / `Mistral`] Support fa-2 + right padding + forward (#26912)
younesbelkada Oct 19, 2023
ae4fb84
Generate: update basic llm tutorial (#26937)
gante Oct 19, 2023
08a2edf
Corrected modalities description in README_ru.md (#26913)
letohx Oct 19, 2023
929134b
[docstring] Fix docstring for speech-to-text config (#26883)
R055A Oct 20, 2023
9b19766
fix set_transform link docs (#26856)
diegulio Oct 20, 2023
c030fc8
Fix Fuyu image scaling bug (#26918)
pcuenca Oct 20, 2023
224794b
Update README_hd.md (#26872)
biswabaibhab007 Oct 20, 2023
093848d
Added Telugu [te] translations (#26828)
hakunamatata1997 Oct 20, 2023
f71c9cc
fix logit-to-multi-hot conversion in example (#26936)
ranchlai Oct 23, 2023
7003294
Limit to inferior fsspec version (#27010)
LysandreJik Oct 23, 2023
4542566
python falcon doc-string example typo (#26995)
SoyGema Oct 23, 2023
ef978d0
skip two tests (#27013)
ArthurZucker Oct 23, 2023
d33d313
Nits in Llama2 docstring (#26996)
osanseviero Oct 23, 2023
50d0cf4
Change default `max_shard_size` to smaller value (#26942)
younesbelkada Oct 23, 2023
cb45f71
Add Seamless M4T model (#25693)
ylacombe Oct 23, 2023
244a53e
[`NLLB-MoE`] Fix NLLB MoE 4bit inference (#27012)
younesbelkada Oct 23, 2023
f9f27b0
[`SeamlessM4T`] fix copies with NLLB MoE int8 (#27018)
ArthurZucker Oct 23, 2023
c0b5ad9
small typos found (#26988)
rafaelpadilla Oct 23, 2023
f7354a3
Remove token_type_ids from default TF GPT-2 signature (#26962)
Rocketknight1 Oct 23, 2023
f09a081
Translate `pipeline_tutorial.md` to chinese (#26954)
jiaqiw09 Oct 23, 2023
33f98cf
Remove ambiguous `padding_mask` and instead use a 2D->4D Attn Mask Ma…
patrickvonplaten Oct 23, 2023
19ae050
🌐 [i18n-ZH] Translate multilingual into Chinese (#26935)
yyLeaves Oct 23, 2023
b0d1d7f
translate `preprocessing.md` to Chinese (#26955)
jiaqiw09 Oct 23, 2023
f370beb
Bugfix device map detr model (#26849)
pedrogengo Oct 23, 2023
25c022d
Fix little typo (#27028)
mertyyanik Oct 23, 2023
32f799d
🌐 [i18n-ZH] Translate create_a_model.md into Chinese (#27026)
yyLeaves Oct 23, 2023
ede051f
Fix key dtype in GPTJ and CodeGen (#26836)
fxmarty Oct 24, 2023
cc7803c
Register ModelOutput as supported torch pytree nodes (#26618)
XuehaiPan Oct 24, 2023
fc142bd
Add `default_to_square_for_size` to `CLIPImageProcessor` (#26965)
ydshieh Oct 24, 2023
576e282
Add descriptive docstring to WhisperTimeStampLogitsProcessor (#25642)
jprivera44 Oct 24, 2023
e2d6d5c
Normalize only if needed (#26049)
mjamroz Oct 24, 2023
7bde5d6
[`TFxxxxForSequenceClassification`] Fix the eager mode after #25085 (…
ArthurZucker Oct 24, 2023
cb0c680
Safe import of rgb_to_id from FE modules (#27037)
amyeroberts Oct 24, 2023
b18e314
add info on TRL docs (#27024)
lvwerra Oct 24, 2023
41496b9
Add fuyu device map (#26949)
SunMarc Oct 24, 2023
9da4517
Device agnostic testing (#25870)
vvvm23 Oct 24, 2023
13ef14e
Fix config silent copy in from_pretrained (#27043)
patrickvonplaten Oct 24, 2023
9333bf0
[docs] Performance docs refactor p.2 (#26791)
MKhalusova Oct 24, 2023
a0fd344
Add a default decoder_attention_mask for EncoderDecoderModel during t…
hackyon Oct 24, 2023
6cbc136
Fix RoPE config validation for FalconConfig + various config typos (#…
tomaarsen Oct 24, 2023
9286f0a
Skip-test (#27062)
ArthurZucker Oct 25, 2023
06e782d
[`core`] Refactor of `gradient_checkpointing` (#27020)
younesbelkada Oct 25, 2023
0baa924
Fix TypicalLogitsWarper tensor OOB indexing edge case (#26579)
njhill Oct 25, 2023
a64f8c1
[docstring] fix incorrect llama docstring: encoder -> decoder (#27071)
ztjhz Oct 25, 2023
ba073ea
[DOCS] minor fixes in README.md (#27048)
Akash190104 Oct 25, 2023
c34c50c
[`docs`] Add `MaskGenerationPipeline` in docs (#27063)
younesbelkada Oct 25, 2023
ba5144f
🌐 [i18n-ZH] Translate custom_models.md into Chinese (#27065)
yyLeaves Oct 25, 2023
a2f55a6
Hindi translation of pipeline_tutorial.md (#26837)
AaryaBalwadkar Oct 25, 2023
df2eebf
Handle unsharded Llama2 model types in conversion script (#27069)
coreyhu Oct 26, 2023
9c5240a
Bump werkzeug from 2.2.3 to 3.0.1 in /examples/research_projects/deci…
dependabot[bot] Oct 26, 2023
3c26924
Bump urllib3 from 1.26.17 to 1.26.18 in /examples/research_projects/l…
dependabot[bot] Oct 26, 2023
9041240
Bring back `set_epoch` for Accelerate-based dataloaders (#26850)
muellerzr Oct 26, 2023
efba1a1
Bump `flash_attn` version to `2.1` (#27079)
younesbelkada Oct 26, 2023
fe2877c
Remove unneeded prints in modeling_gpt_neox.py (#27080)
younesbelkada Oct 26, 2023
15cd096
Create SECURITY.md
ArthurZucker Oct 26, 2023
4864d08
Add support for commit description (#26704)
ArthurZucker Oct 26, 2023
d7cb5e1
[Llama FA2] Re-add _expand_attention_mask and clean a couple things (…
patrickvonplaten Oct 26, 2023
8214d6e
add exllamav2 arg (#26437)
SunMarc Oct 26, 2023
1892592
Correct docstrings and a typo in comments (#27047)
lewis-yeung Oct 26, 2023
34a6406
Save TB logs as part of push_to_hub (#27022)
muellerzr Oct 26, 2023
6f31601
Added huggingface emoji instead of the markdown format (#27091)
shettyvarshaa Oct 26, 2023
aa4198a
[`T5Tokenizer`] Fix fast and extra tokens (#27085)
ArthurZucker Oct 27, 2023
90ee9ce
Revert "add exllamav2 arg" (#27102)
ArthurZucker Oct 27, 2023
e2bffcf
Add early stopping for Bark generation via logits processor (#26675)
isaac-chung Oct 27, 2023
66b088f
Provide alternative when warning on use_auth_token (#27105)
Wauplin Oct 27, 2023
5be1fb6
Fix no split modules underlying modules (#27090)
SunMarc Oct 27, 2023
ffff9e7
[`core`/ `gradient_checkpointing`] Refactor GC - part 2 (#27073)
younesbelkada Oct 27, 2023
29c74f5
fix detr device map (#27089)
SunMarc Oct 27, 2023
ac58937
[Attention Mask] Refactor all encoder-decoder attention mask (#27086)
patrickvonplaten Oct 27, 2023
96f9e78
Added Telugu [te] translation for README.md in main (#27077)
hakunamatata1997 Oct 27, 2023
ef23b68
translate transformers_agents.md to Chinese (#27046)
jiaqiw09 Oct 27, 2023
9e87618
Fix docstring and type hint for resize (#27104)
daniilgaltsev Oct 27, 2023
722e936
[Typo fix] flag config in WANDB (#27130)
SoyGema Oct 29, 2023
211ad4c
Fix slack report failing for doctest (#27042)
ydshieh Oct 30, 2023
1604321
[`FA2`/ `Mistral`] Revert previous behavior with right padding + forw…
younesbelkada Oct 30, 2023
e830495
Fix data2vec-audio note about attention mask (#27116)
gau-nernst Oct 30, 2023
5fbed2d
[`Trainer` / `GC`] Add `gradient_checkpointing_kwargs` in trainer and…
younesbelkada Oct 30, 2023
d751dbe
remove the obsolete code related to fairscale FSDP (#26651)
statelesshz Oct 30, 2023
691fd8f
Add `Kosmos-2` model (#24709)
ydshieh Oct 30, 2023
5769949
Fix some tests using `"common_voice"` (#27147)
ydshieh Oct 30, 2023
6b46677
[`tests` / `Quantization`] Fix bnb test (#27145)
younesbelkada Oct 30, 2023
cd19b19
make tests of pytorch_example device agnostic (#27081)
statelesshz Oct 30, 2023
3224c0c
Remove some Kosmos-2 `copied from` (#27149)
ydshieh Oct 30, 2023
9093b19
🌐 [i18n-ZH] Translate serialization.md into Chinese (#27076)
yyLeaves Oct 30, 2023
84724ef
Translating `en/main_classes` folder docs to Japanese 🇯🇵 (#26894)
rajveer43 Oct 30, 2023
5bbf671
Device agnostic trainer testing (#27131)
statelesshz Oct 30, 2023
f7ea959
[`core`/ `GC` / `tests`] Stronger GC tests (#27124)
younesbelkada Oct 30, 2023
e971486
Fix: typos in README.md (#27154)
THEFZNKHAN Oct 30, 2023
d39352d
Fix import of torch.utils.checkpoint (#27155)
NielsRogge Oct 30, 2023
8211c59
[KOSMOS-2] Update docs (#27157)
NielsRogge Oct 30, 2023
df6f36a
deprecate function `get_default_device` in `tools/base.py` (#26774)
statelesshz Oct 31, 2023
b5c8e23
Remove broken links to s-JoL/Open-Llama (#27164)
CSRessel Oct 31, 2023
9234cae
[docstring] Fix docstring for AltCLIPTextConfig, AltCLIPVisionConfig …
AksharGoyal Oct 31, 2023
14bb196
[docstring] Fix docstring for BlipTextConfig, BlipVisionConfig (#27173)
Hangsiin Oct 31, 2023
9dc4ce9
Disable CI runner check (#27170)
ydshieh Oct 31, 2023
b5db8ca
Add flash attention for `gpt_bigcode` (#26479)
susnato Oct 31, 2023
3cd3eaf
fix: Fix typical_p behaviour broken in recent change (#27165)
njhill Oct 31, 2023
2963e19
Add support for loading GPTQ models on CPU (#26719)
vivekkhandelwal1 Oct 31, 2023
a8e74eb
Trigger CI if `tiny_model_summary.json` is modified (#27175)
ydshieh Oct 31, 2023
08fadc8
Shorten the conversation tests for speed + fixing position overflows …
Rocketknight1 Oct 31, 2023
f53041a
device agnostic pipelines testing (#27129)
statelesshz Oct 31, 2023
309a906
[FEAT] Add Neftune into transformers Trainer (#27141)
younesbelkada Oct 31, 2023
05f2290
Backward compatibility fix for the Conversation class (#27176)
Rocketknight1 Oct 31, 2023
4bb50aa
[`Quantization` / `tests` ] Fix bnb MPT test (#27178)
younesbelkada Oct 31, 2023
e22b7ce
Fix dropout in `StarCoder` (#27182)
susnato Oct 31, 2023
6b7f8ff
translate training.md to chinese (#27122)
jiaqiw09 Oct 31, 2023
77930f8
[docs] Update CPU/GPU inference docs (#26881)
stevhliu Oct 31, 2023
50378cb
device agnostic models testing (#27146)
statelesshz Oct 31, 2023
25e6e94
Unify warning styles for better readability (#27184)
oneonlee Oct 31, 2023
113ebf8
Safetensors serialization by default (#27064)
LysandreJik Oct 31, 2023
7d8ff36
🌐 [i18n-ZH] Translate tflite.md into Chinese (#27134)
yyLeaves Oct 31, 2023
82c7e87
device agnostic fsdp testing (#27120)
statelesshz Nov 1, 2023
ae093ee
[`core` / `Quantization` ] AWQ integration (#27045)
younesbelkada Nov 1, 2023
7102552
Fix docstring get maskformer resize output image size (#27196)
wesleylp Nov 1, 2023
636f704
Fix the typos and grammar mistakes in CONTRIBUTING.md. (#27193)
THEFZNKHAN Nov 1, 2023
f3c1a17
Fixing docstring in get_resize_output_image_size function (#27191)
wesleylp Nov 1, 2023
037fb7d
added unsqueeze_dim to apply_rotary_pos_emb (#27117)
ShashankMosaicML Nov 1, 2023
f9b4bea
Added cache_block_outputs option to enable GPTQ for non-regular model…
AlexKoff88 Nov 1, 2023
391d14e
[WhisperForCausalLM] Add WhisperForCausalLM for speculative decoding …
patrickvonplaten Nov 1, 2023
f8afb2b
Add TensorFlow implementation of ConvNeXTv2 (#25558)
neggles Nov 1, 2023
21a2fba
Fix docstring in get_oneformer_resize_output_image_size func (#27207)
wesleylp Nov 1, 2023
1e32b05
improving TimmBackbone to support FrozenBatchNorm2d (#27160)
rafaelpadilla Nov 1, 2023
239cd0e
Translate task summary to chinese (#27180)
jiaqiw09 Nov 1, 2023
c9e72f5
Add exllamav2 better (#27111)
SunMarc Nov 1, 2023
95020f2
Fix CPU offload + disk offload tests (#27204)
LysandreJik Nov 1, 2023
3520e37
Enable split_batches through TrainingArguments (#26798)
muellerzr Nov 1, 2023
af3de8d
[Whisper, Bart, MBart] Add Flash Attention 2 (#27203)
patrickvonplaten Nov 1, 2023
060e545
Merge remote-tracking branch 'origin/main' into fuyu_follow_up_image_…
pcuenca Nov 2, 2023
1 change: 1 addition & 0 deletions .circleci/create_circleci_config.py
@@ -127,6 +127,7 @@ def to_dict(self):
             },
         ]
         steps.extend([{"run": l} for l in self.install_steps])
+        steps.extend([{"run": 'pip install "fsspec>=2023.5.0,<2023.10.0"'}])
         steps.extend([{"run": "pip install pytest-subtests"}])
         steps.append(
             {
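For context, the new step simply pins `fsspec` in the CircleCI test environments. The same constraint can be reproduced locally; a minimal sketch, assuming any standard pip environment:

```bash
# Reproduce the fsspec pin added to the CircleCI config (sketch)
pip install "fsspec>=2023.5.0,<2023.10.0"
# Confirm the resolved version sits inside the pinned range
python -c "import fsspec; print(fsspec.__version__)"
```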
2 changes: 1 addition & 1 deletion .github/workflows/build_documentation.yml
@@ -15,7 +15,7 @@ jobs:
       commit_sha: ${{ github.sha }}
       package: transformers
       notebook_folder: transformers_doc
-      languages: de en es fr it ko pt zh ja
+      languages: de en es fr hi it ko pt zh ja te
     secrets:
       token: ${{ secrets.HUGGINGFACE_PUSH }}
       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
2 changes: 1 addition & 1 deletion .github/workflows/build_pr_documentation.yml
@@ -14,4 +14,4 @@ jobs:
       commit_sha: ${{ github.event.pull_request.head.sha }}
       pr_number: ${{ github.event.number }}
       package: transformers
-      languages: de en es fr it ko pt zh ja
+      languages: de en es fr hi it ko pt zh ja te
34 changes: 0 additions & 34 deletions .github/workflows/self-nightly-scheduled.yml
@@ -21,36 +21,8 @@ env:
   RUN_PT_TF_CROSS_TESTS: 1

 jobs:
-  check_runner_status:
-    name: Check Runner Status
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout transformers
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: Check Runner Status
-        run: python utils/check_self_hosted_runner.py --target_runners single-gpu-past-ci-runner-docker,multi-gpu-past-ci-runner-docker --token ${{ secrets.ACCESS_REPO_INFO_TOKEN }}
-
-  check_runners:
-    name: Check Runners
-    needs: check_runner_status
-    strategy:
-      matrix:
-        machine_type: [single-gpu, multi-gpu]
-    runs-on: ['${{ matrix.machine_type }}', nvidia-gpu, t4, past-ci]
-    container:
-      image: huggingface/transformers-all-latest-torch-nightly-gpu
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
-    steps:
-      - name: NVIDIA-SMI
-        run: |
-          nvidia-smi
-
   setup:
     name: Setup
-    needs: check_runners
     strategy:
       matrix:
         machine_type: [single-gpu, multi-gpu]
@@ -276,8 +248,6 @@ jobs:
     runs-on: ubuntu-latest
     if: always()
     needs: [
-      check_runner_status,
-      check_runners,
       setup,
       run_tests_single_gpu,
       run_tests_multi_gpu,
@@ -288,8 +258,6 @@
       shell: bash
       # For the meaning of these environment variables, see the job `Setup`
       run: |
-        echo "Runner availability: ${{ needs.check_runner_status.result }}"
-        echo "Runner status: ${{ needs.check_runners.result }}"
         echo "Setup status: ${{ needs.setup.result }}"

     - uses: actions/checkout@v3
@@ -303,8 +271,6 @@
       CI_SLACK_REPORT_CHANNEL_ID: ${{ secrets.CI_SLACK_CHANNEL_ID_PAST_FUTURE }}
       ACCESS_REPO_INFO_TOKEN: ${{ secrets.ACCESS_REPO_INFO_TOKEN }}
       CI_EVENT: Nightly CI
-      RUNNER_STATUS: ${{ needs.check_runner_status.result }}
-      RUNNER_ENV_STATUS: ${{ needs.check_runners.result }}
       SETUP_STATUS: ${{ needs.setup.result }}
       # We pass `needs.setup.outputs.matrix` as the argument. A processing in `notification_service.py` to change
       # `models/bert` to `models_bert` is required, as the artifact names use `_` instead of `/`.
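The three workflow diffs that follow apply the same deletion to `self-past.yml`, `self-push.yml`, and `self-scheduled.yml`: the `check_runner_status` and `check_runners` jobs are removed, `setup` becomes the first job, and the reporting job loses its `RUNNER_STATUS`/`RUNNER_ENV_STATUS` variables. A rough sketch of the resulting job graph (abridged; step bodies omitted, and the reporting job's name is an assumption):

```yaml
# Abridged post-change workflow shape (sketch, not the full file)
jobs:
  setup:                          # now the entry point; no runner pre-checks
    strategy:
      matrix:
        machine_type: [single-gpu, multi-gpu]

  send_results:                   # assumed name for the Slack-reporting job
    runs-on: ubuntu-latest
    if: always()
    needs: [setup, run_tests_single_gpu, run_tests_multi_gpu]
    env:
      SETUP_STATUS: ${{ needs.setup.result }}   # runner-status variables are gone
```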
34 changes: 0 additions & 34 deletions .github/workflows/self-past.yml
@@ -32,36 +32,8 @@ env:
   RUN_PT_TF_CROSS_TESTS: 1

 jobs:
-  check_runner_status:
-    name: Check Runner Status
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout transformers
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: Check Runner Status
-        run: python utils/check_self_hosted_runner.py --target_runners single-gpu-past-ci-runner-docker,multi-gpu-past-ci-runner-docker --token ${{ secrets.ACCESS_REPO_INFO_TOKEN }}
-
-  check_runners:
-    name: Check Runners
-    needs: check_runner_status
-    strategy:
-      matrix:
-        machine_type: [single-gpu, multi-gpu]
-    runs-on: ['${{ matrix.machine_type }}', nvidia-gpu, t4, past-ci]
-    container:
-      image: huggingface/transformers-${{ inputs.framework }}-past-${{ inputs.version }}-gpu
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
-    steps:
-      - name: NVIDIA-SMI
-        run: |
-          nvidia-smi
-
   setup:
     name: Setup
-    needs: check_runners
     strategy:
       matrix:
         machine_type: [single-gpu, multi-gpu]
@@ -319,8 +291,6 @@ jobs:
     runs-on: ubuntu-latest
     if: always()
     needs: [
-      check_runner_status,
-      check_runners,
       setup,
       run_tests_single_gpu,
       run_tests_multi_gpu,
@@ -331,8 +301,6 @@
       shell: bash
       # For the meaning of these environment variables, see the job `Setup`
       run: |
-        echo "Runner availability: ${{ needs.check_runner_status.result }}"
-        echo "Runner status: ${{ needs.check_runners.result }}"
         echo "Setup status: ${{ needs.setup.result }}"

     - uses: actions/checkout@v3
@@ -351,8 +319,6 @@
       CI_SLACK_REPORT_CHANNEL_ID: ${{ secrets.CI_SLACK_CHANNEL_ID_PAST_FUTURE }}
       ACCESS_REPO_INFO_TOKEN: ${{ secrets.ACCESS_REPO_INFO_TOKEN }}
       CI_EVENT: Past CI - ${{ inputs.framework }}-${{ inputs.version }}
-      RUNNER_STATUS: ${{ needs.check_runner_status.result }}
-      RUNNER_ENV_STATUS: ${{ needs.check_runners.result }}
       SETUP_STATUS: ${{ needs.setup.result }}
       # We pass `needs.setup.outputs.matrix` as the argument. A processing in `notification_service.py` to change
       # `models/bert` to `models_bert` is required, as the artifact names use `_` instead of `/`.
34 changes: 0 additions & 34 deletions .github/workflows/self-push.yml
@@ -27,36 +27,8 @@ env:
   RUN_PT_TF_CROSS_TESTS: 1

 jobs:
-  check_runner_status:
-    name: Check Runner Status
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout transformers
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: Check Runner Status
-        run: python utils/check_self_hosted_runner.py --target_runners single-gpu-ci-runner-docker,multi-gpu-ci-runner-docker --token ${{ secrets.ACCESS_REPO_INFO_TOKEN }}
-
-  check_runners:
-    name: Check Runners
-    needs: check_runner_status
-    strategy:
-      matrix:
-        machine_type: [single-gpu, multi-gpu]
-    runs-on: ['${{ matrix.machine_type }}', nvidia-gpu, t4, push-ci]
-    container:
-      image: huggingface/transformers-all-latest-gpu-push-ci
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
-    steps:
-      - name: NVIDIA-SMI
-        run: |
-          nvidia-smi
-
   setup:
     name: Setup
-    needs: check_runners
     strategy:
       matrix:
         machine_type: [single-gpu, multi-gpu]
@@ -521,8 +493,6 @@ jobs:
     runs-on: ubuntu-latest
     if: always()
     needs: [
-      check_runner_status,
-      check_runners,
       setup,
       run_tests_single_gpu,
       run_tests_multi_gpu,
@@ -534,9 +504,7 @@
       shell: bash
       # For the meaning of these environment variables, see the job `Setup`
       run: |
-        echo "Runner availability: ${{ needs.check_runner_status.result }}"
         echo "Setup status: ${{ needs.setup.result }}"
-        echo "Runner status: ${{ needs.check_runners.result }}"

       # Necessary to get the correct branch name and commit SHA for `workflow_run` event
       # We also take into account the `push` event (we might want to test some changes in a branch)
@@ -589,8 +557,6 @@ jobs:
       CI_TITLE_PUSH: ${{ github.event.head_commit.message }}
       CI_TITLE_WORKFLOW_RUN: ${{ github.event.workflow_run.head_commit.message }}
       CI_SHA: ${{ env.CI_SHA }}
-      RUNNER_STATUS: ${{ needs.check_runner_status.result }}
-      RUNNER_ENV_STATUS: ${{ needs.check_runners.result }}
       SETUP_STATUS: ${{ needs.setup.result }}

       # We pass `needs.setup.outputs.matrix` as the argument. A processing in `notification_service.py` to change
36 changes: 0 additions & 36 deletions .github/workflows/self-scheduled.yml
@@ -25,36 +25,8 @@ env:
   RUN_PT_TF_CROSS_TESTS: 1

 jobs:
-  check_runner_status:
-    name: Check Runner Status
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout transformers
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: Check Runner Status
-        run: python utils/check_self_hosted_runner.py --target_runners single-gpu-scheduled-ci-runner-docker,multi-gpu-scheduled-ci-runner-docker --token ${{ secrets.ACCESS_REPO_INFO_TOKEN }}
-
-  check_runners:
-    name: Check Runners
-    needs: check_runner_status
-    strategy:
-      matrix:
-        machine_type: [single-gpu, multi-gpu]
-    runs-on: ['${{ matrix.machine_type }}', nvidia-gpu, t4, daily-ci]
-    container:
-      image: huggingface/transformers-all-latest-gpu
-      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
-    steps:
-      - name: NVIDIA-SMI
-        run: |
-          nvidia-smi
-
   setup:
     name: Setup
-    needs: check_runners
     strategy:
       matrix:
         machine_type: [single-gpu, multi-gpu]
@@ -430,8 +402,6 @@ jobs:
     runs-on: ubuntu-latest
     if: always()
     needs: [
-      check_runner_status,
-      check_runners,
       setup,
       run_tests_single_gpu,
       run_tests_multi_gpu,
@@ -480,8 +450,6 @@ jobs:
     runs-on: ubuntu-latest
     if: always()
     needs: [
-      check_runner_status,
-      check_runners,
       setup,
       run_tests_single_gpu,
       run_tests_multi_gpu,
@@ -496,8 +464,6 @@
       shell: bash
       # For the meaning of these environment variables, see the job `Setup`
       run: |
-        echo "Runner availability: ${{ needs.check_runner_status.result }}"
-        echo "Runner status: ${{ needs.check_runners.result }}"
         echo "Setup status: ${{ needs.setup.result }}"

     - uses: actions/checkout@v3
@@ -513,8 +479,6 @@
       CI_EVENT: scheduled
       CI_SHA: ${{ github.sha }}
       CI_WORKFLOW_REF: ${{ github.workflow_ref }}
-      RUNNER_STATUS: ${{ needs.check_runner_status.result }}
-      RUNNER_ENV_STATUS: ${{ needs.check_runners.result }}
       SETUP_STATUS: ${{ needs.setup.result }}
       # We pass `needs.setup.outputs.matrix` as the argument. A processing in `notification_service.py` to change
       # `models/bert` to `models_bert` is required, as the artifact names use `_` instead of `/`.
18 changes: 9 additions & 9 deletions CONTRIBUTING.md
@@ -40,8 +40,8 @@ There are several ways you can contribute to 🤗 Transformers:

 If you don't know where to start, there is a special [Good First
 Issue](https://github.com/huggingface/transformers/contribute) listing. It will give you a list of
-open issues that are beginner-friendly and help you start contributing to open-source. Just comment in the issue that you'd like to work
-on it.
+open issues that are beginner-friendly and help you start contributing to open-source. Just comment on the issue that you'd like to work
+on.

 For something slightly more challenging, you can also take a look at the [Good Second Issue](https://github.com/huggingface/transformers/labels/Good%20Second%20Issue) list. In general though, if you feel like you know what you're doing, go for it and we'll help you get there! 🚀

@@ -62,7 +62,7 @@ feedback.
 The 🤗 Transformers library is robust and reliable thanks to users who report the problems they encounter.

 Before you report an issue, we would really appreciate it if you could **make sure the bug was not
-already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you're unsure whether the bug is in your code or the library, please ask on the [forum](https://discuss.huggingface.co/) first. This helps us respond quicker to fixing issues related to the library versus general questions.
+already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you're unsure whether the bug is in your code or the library, please ask in the [forum](https://discuss.huggingface.co/) first. This helps us respond quicker to fixing issues related to the library versus general questions.

 Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so we can quickly resolve it:

@@ -105,7 +105,7 @@ We have added [templates](https://github.com/huggingface/transformers/tree/main/

 New models are constantly released and if you want to implement a new model, please provide the following information

-* A short description of the model and link to the paper.
+* A short description of the model and a link to the paper.
 * Link to the implementation if it is open-sourced.
 * Link to the model weights if they are available.

@@ -172,7 +172,7 @@ You'll need **[Python 3.8]((https://github.com/huggingface/transformers/blob/mai

 which should be enough for most use cases.

-5. Develop the features on your branch.
+5. Develop the features in your branch.

    As you work on your code, you should make sure the test suite
    passes. Run the tests impacted by your changes like this:
@@ -208,7 +208,7 @@ You'll need **[Python 3.8]((https://github.com/huggingface/transformers/blob/mai
    make quality
    ```

-   Finally, we have a lot of scripts to make sure we didn't forget to update
+   Finally, we have a lot of scripts to make sure we don't forget to update
    some files when adding a new model. You can run these scripts with:

    ```bash
@@ -218,7 +218,7 @@ You'll need **[Python 3.8]((https://github.com/huggingface/transformers/blob/mai
    To learn more about those checks and how to fix any issues with them, check out the
    [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.

-   If you're modifying documents under `docs/source` directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check
+   If you're modifying documents under the `docs/source` directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check
    make sure you install the documentation builder:

    ```bash
@@ -234,7 +234,7 @@ You'll need **[Python 3.8]((https://github.com/huggingface/transformers/blob/mai
    This will build the documentation in the `~/tmp/test-build` folder where you can inspect the generated
    Markdown files with your favorite editor. You can also preview the docs on GitHub when you open a pull request.

-   Once you're happy with your changes, add changed files with `git add` and
+   Once you're happy with your changes, add the changed files with `git add` and
    record your changes locally with `git commit`:

    ```bash
@@ -261,7 +261,7 @@ You'll need **[Python 3.8]((https://github.com/huggingface/transformers/blob/mai

    If you've already opened a pull request, you'll need to force push with the `--force` flag. Otherwise, if the pull request hasn't been opened yet, you can just push your changes normally.

-6. Now you can go to your fork of the repository on GitHub and click on **Pull request** to open a pull request. Make sure you tick off all the boxes in our [checklist](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#pull-request-checklist) below. When you're ready, you can send your changes to the project maintainers for review.
+6. Now you can go to your fork of the repository on GitHub and click on **Pull Request** to open a pull request. Make sure you tick off all the boxes on our [checklist](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#pull-request-checklist) below. When you're ready, you can send your changes to the project maintainers for review.

 7. It's ok if maintainers request changes, it happens to our core contributors
    too! So everyone can see the changes in the pull request, work in your local
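As a recap of the workflow this CONTRIBUTING.md passage describes, here is a condensed sketch of the local check loop; the doc-builder invocation and the staged file name are assumptions for illustration:

```bash
# Condensed contributor loop from the passage above (sketch)
make quality                                  # style/quality checks mentioned above
# Build the docs to confirm they still compile; output goes to ~/tmp/test-build
doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build  # assumed invocation
git add src/transformers/modified_file.py     # hypothetical changed file
git commit -m "Fix resize docstring"          # example commit message
```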