
Conversation

@JustinVanHeek (Contributor) commented Jul 27, 2025

What does this PR do?

Currently, a fork bomb is created when dataloader_persistent_workers=True: the accelerator prepares a new eval dataloader at every evaluation, and each preparation spawns a fresh set of persistent worker processes while the previous workers keep running.

It appears there was an earlier attempt to fix this issue in #29538, but the problem still exists. I tried version 4.39.0, the first release that included that fix, together with the most recent accelerate release available at that time (0.27.2), and was still able to reproduce the fork bomb.

This PR makes a minor change to the original fix: it stores the dataloader after it has been prepared by the accelerator, rather than before.

The author of the original fix left a comment in the code specifically noting that the dataloader is stored before being prepared because accelerator.free_memory() destroys the references. I was unable to reproduce that problem when storing the prepared dataloader (even when calling accelerator.free_memory() before each evaluation), but I wouldn't mind hearing from @muellerzr about why it was done that way in case I have missed something. That said, he left this comment on the GitHub issue suggesting he intended to make the same changes proposed here.
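
For illustration, the caching pattern behind this change looks roughly like the sketch below. This is not the actual Trainer diff: the class, attribute, and argument names are made up for the example, and it only relies on accelerator.prepare() accepting a DataLoader as documented.

from torch.utils.data import DataLoader
from accelerate import Accelerator


class EvalDataloaderCache:
    """Simplified sketch of the caching pattern (not the real Trainer code)."""

    def __init__(self, accelerator: Accelerator, args):
        self.accelerator = accelerator
        self.args = args
        self._eval_dataloader = None  # will hold the *prepared* dataloader

    def get_eval_dataloader(self, eval_dataset):
        # Reuse the dataloader already prepared by the accelerator so that
        # persistent workers are not re-spawned at every evaluation.
        if self.args.dataloader_persistent_workers and self._eval_dataloader is not None:
            return self._eval_dataloader

        dataloader = DataLoader(
            eval_dataset,
            batch_size=self.args.per_device_eval_batch_size,
            num_workers=self.args.dataloader_num_workers,
            persistent_workers=self.args.dataloader_persistent_workers,
        )

        # The change proposed here: cache the dataloader *after* prepare().
        # Caching it before preparing (as the earlier fix did) means prepare()
        # still builds a new dataloader -- and a new set of worker processes --
        # at every evaluation.
        prepared = self.accelerator.prepare(dataloader)
        if self.args.dataloader_persistent_workers:
            self._eval_dataloader = prepared
        return prepared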

Fixes #28469

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@SunMarc @amyeroberts

@SunMarc (Member) left a comment

It would indeed be better to reuse the prepared dataloader. Can you share a reproducer btw?

@JustinVanHeek (Contributor, Author)

No problem, here's a reproducer. You can watch the number of processes grow by running top, opening the filter with o, and filtering for COMMAND=pt_data_worker (there is also a small counting snippet after the reproducer below).

from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset

dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.train_test_split(test_size=0.2)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(example):
    return tokenizer(example["text"], padding="max_length", truncation=True, max_length=128)

tokenized_dataset = dataset.map(tokenize_function, batched=True)

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="checkpoints",
        max_steps=1000,
        save_strategy="no",
        eval_strategy="steps",
        eval_steps=1,  # evaluate every training step so new workers spawn as fast as possible
        dataloader_persistent_workers=True,  # the option that triggers the fork bomb
        dataloader_num_workers=1,
    ),
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
)
trainer.train()
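
As an alternative to filtering top, here is a small counting helper (my own convenience snippet, not part of the PR; it assumes a Linux machine with pgrep on the PATH) that you can run in a second terminal while the reproducer is training:

import subprocess
import time

# Count the PyTorch dataloader worker processes every few seconds while the
# reproducer above is training. pgrep -f matches the full command line and
# -c prints only the number of matches.
while True:
    result = subprocess.run(
        ["pgrep", "-fc", "pt_data_worker"], capture_output=True, text=True
    )
    print("pt_data_worker processes:", result.stdout.strip() or "0")
    time.sleep(5)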

@JustinVanHeek (Contributor, Author)

@SunMarc Do you need anything else from me or can this be merged? Thanks.

@SunMarc (Member) commented Aug 5, 2025

@SunMarc Do you need anything else from me or can this be merged? Thanks.

Yeah, sorry for the long wait. I was able to reproduce, and your fix works nicely!

@SunMarc SunMarc enabled auto-merge (squash) August 5, 2025 10:40
@SunMarc SunMarc merged commit 6e4a9a5 into huggingface:main Aug 5, 2025
24 checks passed
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

zaristei pushed a commit to zaristei/transformers that referenced this pull request Sep 9, 2025


Development

Successfully merging this pull request may close these issues.

dataloader_persistent_workers=True causes fork-bomb due to repeated creation of eval_dataloader
