Broken pipe #1859
👋 Hello @ryan994, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
Requirements
Python 3.8 or later with all requirements.txt dependencies installed. To install run:
$ pip install -r requirements.txt
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
@ryan994 yes this may be associated with multiple dataloader workers. I don't think there's any relation to the recent release. If you can reproduce this in a Colab notebook please advise, otherwise your solution should work locally.
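For example, assuming your copy of train.py exposes the --workers argument (present in recent releases, default 8), the same command can be rerun with fewer dataloader workers to test this:
python train.py --batch 20 --epochs 300 --data ./data/coco128.yaml --weights ./weights/yolov5s.pt --name test123 --workers 6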
@glenn-jocher I ran v3.0 before and it worked perfectly with 8 workers; however, v4.0 does not work in the same environment.
@ryan994 if you can supply a reproducible example in a common environment (one of the 4 above), we can take a look.
@glenn-jocher ok, I will try to run it on another PC and check whether the problem is with my PC.
@ryan994 ok! The docker image is also a good solution for local environment issues:
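For illustration, a minimal pull-and-run sketch assuming the ultralytics/yolov5 image on Docker Hub (the exact tag and any GPU flags depend on your setup):
docker pull ultralytics/yolov5:latest
docker run --ipc=host -it ultralytics/yolov5:latest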
Hello, when I try to run v4.0 I run into an issue, maybe a bug?
🐛 Bug
BrokenPipeError: [Errno 32] Broken pipe
To Reproduce
I did not change anything; the command I ran is:
python train.py --batch 20 --epochs 300 --data ./data/coco128.yaml --weights ./weights/yolov5s.pt --name test123
Output:
Traceback (most recent call last):
File "train.py", line 519, in
train(hyp, opt, device, tb_writer, wandb)
File "train.py", line 202, in train
rank=-1, world_size=opt.world_size, workers=opt.workers, pad=0.5)[0]
File "D:\yolov5_v4.0\utils\datasets.py", line 83, in create_dataloader
collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn)
File "D:\yolov5_v4.0\utils\datasets.py", line 96, in init
self.iterator = super().iter()
File "D:\Anaconda3\envs\pytorch17\lib\site-packages\torch\utils\data\dataloader.py", line 352, in iter
return self._get_iterator()
File "D:\Anaconda3\envs\pytorch17\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "D:\Anaconda3\envs\pytorch17\lib\site-packages\torch\utils\data\dataloader.py", line 801, in init
w.start()
File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)
File "D:\Anaconda3\envs\pytorch17\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
Environment
Additional context
I googled this issue and it seems to be related to the dataloader workers: when I change the number of workers to 6 or below, it runs successfully. Is this my computer's issue or a bug? Does anyone have an idea? Thanks!!!
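For context, here is a minimal standalone sketch (not YOLOv5 code) of the same PyTorch path the traceback shows: on Windows, dataloader workers are started with spawn, so each worker is a new process that re-imports the script and unpickles the dataset, and the DataLoader must be created under an if __name__ == '__main__': guard. The DummyDataset below is hypothetical and only stands in for LoadImagesAndLabels; num_workers=6 mirrors the setting that works for me:
import torch
from torch.utils.data import Dataset, DataLoader

class DummyDataset(Dataset):
    # Tiny in-memory dataset standing in for LoadImagesAndLabels (hypothetical).
    def __len__(self):
        return 128

    def __getitem__(self, idx):
        return torch.zeros(3, 640, 640), torch.tensor([idx])

if __name__ == '__main__':  # required on Windows, where workers start via spawn
    # Each worker process re-imports this module and unpickles the dataset;
    # too many workers (or an unpicklable dataset object) can surface as a
    # BrokenPipeError while the workers are being started.
    loader = DataLoader(DummyDataset(), batch_size=20, num_workers=6)
    for imgs, labels in loader:
        pass  # a training step would go here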