
Process defunct at DDP training #4414

Closed
yukkyo opened this issue Aug 14, 2021 · 3 comments · Fixed by #4422
Labels
bug Something isn't working

Comments


yukkyo commented Aug 14, 2021

The DDP training did not finish correctly. This did not happen with single-GPU training.

0. Environment

  • yolov5 (commit hash: d9f23ed6d65e985c07e9ef0ec77d476dd14e2b26)
  • torch 1.9.0
  • CUDA 11.1

1. Command used

python -m torch.distributed.run \
        --nproc_per_node 2 train.py \
        --batch 48 \
        --workers 16 \
        --data ../../../input/nfl_yolov5_fold_0_5.yaml \
        --weights yolov5x.pt \
        --single-cls \
        --img 704 \
        --epochs 1 \
        --device 0,1

2. Output

Output during the run:
✔︎ yolov5 sh train_yolov5.sh
[INFO] 2021-08-14 11:56:20,095 run: Running torch.distributed.run with args: ['/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/run.py', '--nproc_per_node', '2', 'train.py', '--batch', '48', '--workers', '16', '--data', '../../../input/nfl_yolov5_fold_0_5.yaml', '--weights', 'yolov5x.pt', '--single-cls', '--img', '704', '--epochs', '1', '--device', '0,1']
[INFO] 2021-08-14 11:56:20,096 run: Using nproc_per_node=2.
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[INFO] 2021-08-14 11:56:20,096 api: Starting elastic_operator with launch configs:
  entrypoint       : train.py
  min_nodes        : 1
  max_nodes        : 1
  nproc_per_node   : 2
  run_id           : none
  rdzv_backend     : static
  rdzv_endpoint    : 127.0.0.1:29500
  rdzv_configs     : {'rank': 0, 'timeout': 900}
  max_restarts     : 3
  monitor_interval : 5
  log_dir          : None
  metrics_cfg      : {}

[INFO] 2021-08-14 11:56:20,097 local_elastic_agent: log directory set to: /tmp/torchelastic_zw5vlukf/none_5toi8pj9
[INFO] 2021-08-14 11:56:20,097 api: [default] starting workers for entrypoint: python
[INFO] 2021-08-14 11:56:20,097 api: [default] Rendezvous'ing worker group
[INFO] 2021-08-14 11:56:20,097 static_tcp_rendezvous: Creating TCPStore as the c10d::Store implementation
/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/elastic/utils/store.py:53: FutureWarning: This is an experimental API and will be changed in future.
  "This is an experimental API and will be changed in future.", FutureWarning
[INFO] 2021-08-14 11:56:20,098 api: [default] Rendezvous complete for workers. Result:
  restart_count=0
  master_addr=127.0.0.1
  master_port=29500
  group_rank=0
  group_world_size=1
  local_ranks=[0, 1]
  role_ranks=[0, 1]
  global_ranks=[0, 1]
  role_world_sizes=[2, 2]
  global_world_sizes=[2, 2]

[INFO] 2021-08-14 11:56:20,098 api: [default] Starting worker group
[INFO] 2021-08-14 11:56:20,098 __init__: Setting worker0 reply file to: /tmp/torchelastic_zw5vlukf/none_5toi8pj9/attempt_0/0/error.json
[INFO] 2021-08-14 11:56:20,098 __init__: Setting worker1 reply file to: /tmp/torchelastic_zw5vlukf/none_5toi8pj9/attempt_0/1/error.json
train: weights=yolov5x.pt, cfg=, data=../../../input/nfl_yolov5_fold_0_5.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=1, batch_size=48, imgsz=704, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1, multi_scale=False, single_cls=True, adam=False, sync_bn=False, workers=16, project=runs/train, entity=None, name=exp, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0
github: skipping check (not a git repository), for updates see https://github.com/ultralytics/yolov5
YOLOv5 🚀 2021-8-14 torch 1.9.0+cu102 CUDA:0 (TITAN RTX, 24220.3125MB)
                                      CUDA:1 (TITAN RTX, 24217.8125MB)

Added key: store_based_barrier_key:1 to store for rank: 0
Rank 0: Completed store-based barrier for 2 nodes.
hyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
[W ProcessGroupNCCL.cpp:1569] Rank 1 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
wandb: Currently logged in as: fam_taro (use `wandb login --relogin` to force relogin)
wandb: wandb version 0.12.0 is available!  To upgrade, please run:
wandb:  $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.10.33
wandb: Syncing run sleek-dragon-30
wandb: ⭐️ View project at https://wandb.ai/fam_taro/YOLOv5
wandb: 🚀 View run at https://wandb.ai/fam_taro/YOLOv5/runs/pgxfrwy6
wandb: Run data is saved locally in /home/fujimoto/project/src/packages/yolov5/wandb/run-20210814_115623-pgxfrwy6
wandb: Run `wandb offline` to turn off syncing.

[W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
Overriding model.yaml nc=80 with nc=1

                 from  n    params  module                                  arguments
  0                -1  1      8800  models.common.Focus                     [3, 80, 3]
  1                -1  1    115520  models.common.Conv                      [80, 160, 3, 2]
  2                -1  4    309120  models.common.C3                        [160, 160, 4]
  3                -1  1    461440  models.common.Conv                      [160, 320, 3, 2]
  4                -1 12   3285760  models.common.C3                        [320, 320, 12]
  5                -1  1   1844480  models.common.Conv                      [320, 640, 3, 2]
  6                -1 12  13125120  models.common.C3                        [640, 640, 12]
  7                -1  1   7375360  models.common.Conv                      [640, 1280, 3, 2]
  8                -1  1   4099840  models.common.SPP                       [1280, 1280, [5, 9, 13]]
  9                -1  4  19676160  models.common.C3                        [1280, 1280, 4, False]
 10                -1  1    820480  models.common.Conv                      [1280, 640, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  4   5332480  models.common.C3                        [1280, 640, 4, False]
 14                -1  1    205440  models.common.Conv                      [640, 320, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  4   1335040  models.common.C3                        [640, 320, 4, False]
 18                -1  1    922240  models.common.Conv                      [320, 320, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  4   4922880  models.common.C3                        [640, 640, 4, False]
 21                -1  1   3687680  models.common.Conv                      [640, 640, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  4  19676160  models.common.C3                        [1280, 1280, 4, False]
 24      [17, 20, 23]  1     40374  models.yolo.Detect                      [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [320, 640, 1280]]
Model Summary: 607 layers, 87244374 parameters, 87244374 gradients, 217.3 GFLOPs

Transferred 788/794 items from yolov5x.pt
Scaled weight_decay = 0.000375
optimizer: SGD with parameter groups 131 weight, 134 weight (no decay), 134 bias
albumentations: Blur(always_apply=False, p=0.1, blur_limit=(3, 7)), MedianBlur(always_apply=False, p=0.1, blur_limit=(3, 7)), ToGray(always_apply=False, p=0.01)
train: Scanning '/home/fujimoto/project/input/train_fold_0_5.cache' images and labels... 7957 found, 0 missing, 0 empty, 0 corrupted: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7957/7957 [00:00<?, ?it/s]
train: Scanning '/home/fujimoto/project/input/train_fold_0_5.cache' images and labels... 7957 found, 0 missing, 0 empty, 0 corrupted: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7957/7957 [00:00<?, ?it/s]
val: Scanning '/home/fujimoto/project/input/valid_fold_0_5.cache' images and labels... 1990 found, 0 missing, 0 empty, 0 corrupted: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1990/1990 [00:00<?, ?it/s]
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
Plotting labels...

autoanchor: Analyzing anchors... anchors/target = 2.49, Best Possible Recall (BPR) = 0.9873
Image sizes 704 train, 704 val
Using 16 dataloader workers
Logging results to runs/train/exp13
Starting training for 1 epochs...

     Epoch   gpu_mem       box       obj       cls    labels  img_size
       0/0       23G    0.1364   0.05804         0       778       704:   1%|█▎                                                                                                                                                                                                               | 1/166 [00:09<26:16,  9.55s/it]Reducer buckets have been rebuilt in this iteration.
       0/0     23.1G   0.08661   0.04975         0       474       704: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 166/166 [02:41<00:00,  1.03it/s]
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 42/42 [00:30<00:00,  1.39it/s]
                 all       1990      38747      0.392      0.599      0.322     0.0723

1 epochs completed in 0.055 hours.
Optimizer stripped from runs/train/exp13/weights/last.pt, 175.1MB
Optimizer stripped from runs/train/exp13/weights/best.pt, 175.1MB

wandb: Waiting for W&B process to finish, PID 24317
wandb: Program ended successfully.
wandb:
wandb: Find user logs for this run at: /home/fujimoto/project/src/packages/yolov5/wandb/run-20210814_115623-pgxfrwy6/logs/debug.log
wandb: Find internal logs for this run at: /home/fujimoto/project/src/packages/yolov5/wandb/run-20210814_115623-pgxfrwy6/logs/debug-internal.log
wandb: Run summary:
wandb:                 train/box_loss 0.08661
wandb:                 train/obj_loss 0.04975
wandb:                 train/cls_loss 0.0
wandb:              metrics/precision 0.39226
wandb:                 metrics/recall 0.59856
wandb:                metrics/mAP_0.5 0.32208
wandb:           metrics/mAP_0.5:0.95 0.07229
wandb:                   val/box_loss 0.0499
wandb:                   val/obj_loss 0.0583
wandb:                   val/cls_loss 0.0
wandb:                          x/lr0 0.00165
wandb:                          x/lr1 0.00165
wandb:                          x/lr2 0.08515
wandb:                       _runtime 221
wandb:                     _timestamp 1628942404
wandb:                          _step 1
wandb: Run history:
wandb:         train/box_loss ▁
wandb:         train/obj_loss ▁
wandb:         train/cls_loss ▁
wandb:      metrics/precision ▁
wandb:         metrics/recall ▁
wandb:        metrics/mAP_0.5 ▁
wandb:   metrics/mAP_0.5:0.95 ▁
wandb:           val/box_loss ▁
wandb:           val/obj_loss ▁
wandb:           val/cls_loss ▁
wandb:                  x/lr0 ▁
wandb:                  x/lr1 ▁
wandb:                  x/lr2 ▁
wandb:               _runtime ▁█
wandb:             _timestamp ▁█
wandb:                  _step ▁█
wandb:
wandb: Synced 5 W&B file(s), 49 media file(s), 1 artifact file(s) and 0 other file(s)
wandb:
wandb: Synced sleek-dragon-30: https://wandb.ai/fam_taro/YOLOv5/runs/pgxfrwy6

It then got stuck in the following state.

...
Results saved to runs/train/exp13
Destroying process group... Done.

It also kept holding memory on GPU 0, even though the GPU was no longer being used.

nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.23.05    Driver Version: 455.23.05    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  TITAN RTX           Off  | 00000000:1A:00.0 Off |                  N/A |
| 47%   58C    P8    24W / 280W |  23016MiB / 24220MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  TITAN RTX           Off  | 00000000:68:00.0 Off |                  N/A |
| 53%   64C    P8    20W / 280W |      3MiB / 24217MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     24907      C   ...ersions/3.7.10/bin/python    23007MiB |
|    1   N/A  N/A     24907      C   ...ersions/3.7.10/bin/python        0MiB |
+-----------------------------------------------------------------------------+
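
For reference, a minimal sketch of how the leftover worker that still holds GPU 0 memory (PID 24907 in the table above) can be terminated by hand while debugging; the helper name and grace period below are illustrative, not part of YOLOv5.

# Hypothetical helper: terminate a leftover DDP worker that still holds GPU
# memory after an aborted run, given its PID from nvidia-smi.
import os
import signal
import time

def terminate_leftover_worker(pid: int, grace_seconds: float = 10.0) -> None:
    """Send SIGTERM, wait briefly, then SIGKILL if the process is still alive."""
    try:
        os.kill(pid, signal.SIGTERM)      # ask the worker to exit cleanly
    except ProcessLookupError:
        return                            # already gone
    deadline = time.time() + grace_seconds
    while time.time() < deadline:
        try:
            os.kill(pid, 0)               # signal 0 only checks that the PID still exists
        except ProcessLookupError:
            return                        # exited after SIGTERM
        time.sleep(0.5)
    os.kill(pid, signal.SIGKILL)          # still alive after the grace period: force-kill

# e.g. terminate_leftover_worker(24907)  # PID reported by nvidia-smi above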

And I got the result below when I pressed <Ctrl + C>.

Destroying process group... Done.
^C/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 6 leaked semaphores to clean up at shutdown
  len(cache))
Traceback (most recent call last):
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/run.py", line 637, in <module>
    main()
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/run.py", line 629, in main
    run(args)
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/run.py", line 624, in run
    )(*cmd_args)
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 238, in launch_agent
    result = agent.run()
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/elastic/agent/server/api.py", line 700, in run
    result = self._invoke_run(role)
  File "/home/fujimoto/.pyenv/versions/3.7.10/lib/python3.7/site-packages/torch/distributed/elastic/agent/server/api.py", line 828, in _invoke_run
    time.sleep(monitor_interval)
KeyboardInterrupt
The last command took 730.224 seconds.

Do you know how to deal with this?

yukkyo added the bug label Aug 14, 2021

github-actions bot commented Aug 14, 2021

👋 Hello @yukkyo, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher linked a pull request Aug 15, 2021 that will close this issue

glenn-jocher commented Aug 15, 2021

@yukkyo good news 😃! Your original issue may now be fixed ✅ in PR #4422.

This PR updates the DDP process group, and was verified over 3 epochs of COCO training with 4x A100 DDP NCCL on an EC2 P4d instance, using the official Docker image and a CUDA 11.1 pip install from https://pytorch.org/get-started/locally/.

d=yolov5 && git clone https://github.com/ultralytics/yolov5 -b master $d && cd $d

python -m torch.distributed.launch --nproc_per_node 4 --master_port 1 train.py --data coco.yaml --batch 64 --weights '' --project study --cfg yolov5l.yaml --epochs 300 --name yolov5l-1280 --img 1280 --linear --device 0,1,2,3
python -m torch.distributed.launch --nproc_per_node 4 --master_port 2 train.py --data coco.yaml --batch 64 --weights '' --project study --cfg yolov5l.yaml --epochs 300 --name yolov5l-1280 --img 1280 --linear --device 4,5,6,7
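
For context only (a minimal sketch, not the actual diff in PR #4422): explicit DDP process-group setup and teardown with torch.distributed typically looks like the snippet below. Binding each rank to its GPU and passing device_ids to barrier() addresses the ProcessGroupNCCL "best-guess GPU" warning seen in the log above, and destroying the group lets every rank exit cleanly; LOCAL_RANK is set by torch.distributed.run.

# Illustrative DDP setup/teardown sketch, assuming launch via:
#   python -m torch.distributed.run --nproc_per_node N train.py
import os
import torch
import torch.distributed as dist

def main():
    local_rank = int(os.environ.get("LOCAL_RANK", -1))
    ddp = local_rank != -1
    if ddp:
        # Bind this process to its GPU before creating the process group so
        # NCCL never has to guess which device a collective should run on.
        torch.cuda.set_device(local_rank)
        dist.init_process_group(backend="nccl")  # env:// rendezvous from torch.distributed.run

    # ... build the model, wrap it in DistributedDataParallel, run the epochs ...

    if ddp:
        # Synchronize on an explicit device (avoids the "best-guess GPU" warning),
        # then destroy the group so no rank is left hanging at shutdown.
        dist.barrier(device_ids=[local_rank])
        dist.destroy_process_group()

if __name__ == "__main__":
    main()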

To receive this update:

  • Git – git pull from within your yolov5/ directory or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – Force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – View updated notebooks (Open In Colab, Open In Kaggle)
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!


yukkyo commented Aug 15, 2021

@glenn-jocher
Thanks for the notice! 👍 👍 👍
