From 71a2bfe7865eee6b742fdd317d4e85321287567c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E4=B9=90=E4=B9=90=E4=B9=90=E4=B9=90=E4=B9=90=E4=B9=90?=
 =?UTF-8?q?=E4=B9=90?= <46926252+DataSttructure@users.noreply.github.com>
Date: Thu, 21 Jul 2022 10:30:23 +0800
Subject: [PATCH] [Fix] Fix batch_size description error

---
 docs/en/tutorials/customize_datasets.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/tutorials/customize_datasets.md b/docs/en/tutorials/customize_datasets.md
index e937f23a5b..3dc8922110 100644
--- a/docs/en/tutorials/customize_datasets.md
+++ b/docs/en/tutorials/customize_datasets.md
@@ -33,7 +33,7 @@ data = dict(
 - `train`, `val` and `test`: The [`config`](https://github.com/open-mmlab/mmcv/blob/master/docs/en/understand_mmcv/config.md)s to build dataset instances for model training, validation and testing by using [`build and registry`](https://github.com/open-mmlab/mmcv/blob/master/docs/en/understand_mmcv/registry.md) mechanism.
-- `samples_per_gpu`: How many samples per batch and per gpu to load during model training, and the `batch_size` of training is equal to `samples_per_gpu` times gpu number, e.g. when using 8 gpus for distributed data parallel trainig and `samples_per_gpu=4`, the `batch_size` is `8*4=16`.
+- `samples_per_gpu`: How many samples per batch and per gpu to load during model training, and the `batch_size` of training is equal to `samples_per_gpu` times gpu number, e.g. when using 8 gpus for distributed data parallel trainig and `samples_per_gpu=4`, the `batch_size` is `8*4=32`. If you would like to define `batch_size` for testing and validation, please use `test_dataloaser` and `val_dataloader` with mmseg >=0.24.1.
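
For readers checking the corrected arithmetic, the sketch below shows how `samples_per_gpu` relates to the effective training `batch_size` in an mmseg-style `data` config. It is a minimal illustration, not the actual snippet from `customize_datasets.md`: the `workers_per_gpu` value, the omitted dataset entries, and the `val_dataloader`/`test_dataloader` overrides (spelled `test_dataloaser` in the patched line, presumably a typo in the docs) are assumptions based on the mmseg >= 0.24.1 behavior the patched line mentions.

```python
# Minimal sketch of an mmseg-style data config (dataset entries omitted).
# The val_dataloader/test_dataloader overrides assume mmseg >= 0.24.1,
# as noted in the patched line; all values here are illustrative only.
data = dict(
    samples_per_gpu=4,    # samples loaded per GPU in each training iteration
    workers_per_gpu=2,    # assumed dataloader worker count, not from the patch
    # train=..., val=..., test=... dataset configs would go here
    val_dataloader=dict(samples_per_gpu=1),   # separate batch size for validation
    test_dataloader=dict(samples_per_gpu=1),  # separate batch size for testing
)

# Effective training batch size under distributed data parallel:
num_gpus = 8
batch_size = data['samples_per_gpu'] * num_gpus
print(batch_size)  # 4 * 8 = 32, matching the corrected description
```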