training hangs with lightning ddp and cloud dir? #408
Comments
Hi! Thanks for your contribution, great first issue!
Hi @rxqy, thanks for opening the issue. A similar issue is also open for SageMaker. We're looking into it and will try to fix it ASAP.
@deependujha, many thanks. BTW, the above code sometimes gives a FileNotFoundError (the training loop continues for several iterations and then hangs), and sometimes it just hangs. Not sure if it will help or not, but I'm pasting it here anyway.
Hey @rxqy. Could you try adding a try / except around it in LitData and let us know if it helps? There is a race condition on deleting the file, but it is fine to catch and skip it. If it helps, would you mind making a PR with the fix?
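For reference, a minimal sketch of the kind of guard being suggested, assuming the cached chunk is deleted with `os.remove` somewhere in litdata's cache-eviction path; `remove_chunk_file` and its call site are hypothetical, not litdata's actual API:

```python
import os


def remove_chunk_file(path: str) -> None:
    """Best-effort deletion of a locally cached chunk (hypothetical helper)."""
    try:
        os.remove(path)
    except FileNotFoundError:
        # Another worker/rank may have already evicted this chunk under DDP,
        # so a missing file can safely be skipped.
        pass
```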
Hi @tchaton. I think this should be on the Lightning side? I wrote a plain PyTorch DDP demo (a rough sketch is below this comment), and with the exact same dataloader we can finish training quite smoothly.
Just to clarify, I made no code changes to my litdata or lightning packages, and we are not using Fabric in our trainer.
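For context, a rough sketch of what that plain PyTorch DDP comparison looks like, assuming the same litdata `StreamingDataset` / `StreamingDataLoader` pair; the bucket path, batch size, and launch command are placeholders, and the actual model step is omitted:

```python
# ddp_demo.py - launch with: torchrun --nproc_per_node=8 ddp_demo.py
import os

import torch
import torch.distributed as dist
from litdata import StreamingDataset, StreamingDataLoader


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Same dataset / dataloader pair that is passed to the Lightning Trainer.
    dataset = StreamingDataset(input_dir="s3://my-bucket/optimized-imagenet")
    loader = StreamingDataLoader(dataset, batch_size=32, num_workers=4)

    for step, batch in enumerate(loader):
        # The real forward/backward is omitted; for the comparison it is
        # enough that iterating over the remote dataset never stalls.
        if step % 50 == 0 and dist.get_rank() == 0:
            print(f"rank 0 reached step {step}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```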
You should instantiate the dataset in the setup hook of the datamodule, or directly within the dataloader hook.
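A minimal sketch of that suggestion, assuming a litdata `StreamingDataset`; the S3 path and batch size are placeholders:

```python
import lightning as L
from litdata import StreamingDataset, StreamingDataLoader


class CloudDataModule(L.LightningDataModule):
    def __init__(self, input_dir: str, batch_size: int = 32):
        super().__init__()
        self.input_dir = input_dir
        self.batch_size = batch_size
        self.train_dataset = None

    def setup(self, stage: str) -> None:
        # Created here (once per process) rather than in __init__, so the
        # dataset is never pickled on rank 0 and shipped to other DDP workers.
        self.train_dataset = StreamingDataset(input_dir=self.input_dir)

    def train_dataloader(self):
        return StreamingDataLoader(
            self.train_dataset, batch_size=self.batch_size, num_workers=4
        )
```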
@tchaton We're running into an identical issue. We are also getting:
We are instantiating our dataset in
When using DDP with remote data, we get about 1 iteration/second. After the 1st epoch, 15-16 steps run at 1 iteration/second and then training stalls for 3-5 minutes (no GPU utilization). Any ideas what the underlying issue could be?
🐛 Bug
Hi, we are using Lightning with litdata on our local machine, reading data from AWS S3. However, training hangs randomly during the very first iterations when using DDP with a remote cloud directory.
I tried several different configurations, but I'm not sure what I should check next.
| GPUs | Strategy | Data location | Result |
|------|----------|---------------|--------|
| 1    | No DDP   | local SSD     | OK     |
| 1    | No DDP   | remote (S3)   | OK     |
| 8    | DDP      | local SSD     | OK     |
| 8    | DDP      | remote (S3)   | Stuck  |
To Reproduce
I'm following the exact steps from the ImageNet demo, and I wrote the trainer myself (linked here).
Running `python train.py` with different `CUDA_VISIBLE_DEVICES` settings is enough to reproduce it.
Code sample
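A minimal sketch of the kind of trainer described above (not the original script, which cannot be shared); the S3 path, model, and batch size are placeholders:

```python
# train.py - run with e.g. CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py
import lightning as L
import torch
import torch.nn.functional as F
from litdata import StreamingDataset, StreamingDataLoader


class DemoModule(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.LazyLinear(1000)  # placeholder model

    def training_step(self, batch, batch_idx):
        x, y = batch  # assumes (image, label) samples in the optimized dataset
        return F.cross_entropy(self.layer(x.flatten(1).float()), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    dataset = StreamingDataset(input_dir="s3://my-bucket/optimized-imagenet")
    loader = StreamingDataLoader(dataset, batch_size=32, num_workers=4)
    # With strategy="auto", Lightning picks DDP when multiple GPUs are visible
    # and a single-device strategy otherwise, matching the table above.
    trainer = L.Trainer(accelerator="gpu", devices="auto", strategy="auto", max_epochs=1)
    trainer.fit(DemoModule(), train_dataloaders=loader)
```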
Expected behavior
Training should finish
Additional context
Due to some regulations here we cannot put our data or training scripts on lightning-studio. I'm not sure if something is wrong with our S3 bucket or our network configuration.
One thing I noticed is that even when training gets stuck within the first few iterations (<50), we still observe high network throughput on the machine (around 100 MB/s), but the local chunk directory (`~/.lightning/chunks`) stops growing.
Current environment