[Refactor] Support progressive test with lower memory cost (#709)
* Support progressive test with lower memory cost.

* Temp code

* Using processor to refactor evaluation workflow.

* refactor eval hook.

* Fix progress bar.

* Fix middle save argument.

* Modify some variable names of dataset evaluate api.

* Modify some variable names of eval hook.

* Fix some priority bugs of eval hook.

* Deprecated efficient_test.

* Fix training progress blocked by eval hook.

* Deprecated old test api.

* Fix test api error.

* Modify outer api.

* Build a sampler test api.

* TODO: Refactor format_results.

* Modify variable names.

* Fix num_classes bug.

* Fix sampler index bug.

* Fix grammar bug.

* Support batch sampler.

* More readable test api.

* Remove some command arg and fix eval hook bug.

* Support format-only arg.

* Modify format_results of datasets.

* Modify tools that use test apis.

* support cityscapes eval

* fixed cityscapes

* 1. Add comments for batch_sampler;

2. Keep eval hook api the same and add a deprecation warning;

3. Add doc string for dataset.pre_eval;

* Add efficient_test doc string.

* Modify test tool for compatibility with the old version.

* Modify eval hook for compatibility with the old version.

* Modify test api for compatibility with the old version api.

* Sampler explanation.

* update warning

* Modify deploy_test.py

* compatible with old output, add efficient test back

* Clarify the mutually exclusive logic.

* Warning about efficient_test.

* Modify format_results save folder.

* Fix bugs of format_results.

* Modify deploy_test.py.

* Update doc

* Fix deploy test bugs.

* Fix custom dataset unit tests.

* Fix dataset unit tests.

* Fix eval hook unit tests.

* Fix some incompatibilities.

* Add pre_eval argument for eval hooks.

* Update eval hook doc string.

* Make pre_eval False by default.

* Add unit tests for dataset format_results.

* Fix some comments and bc-breaking bug.

* Fix pre_eval set cfg field.

* Remove redundant codes.

Co-authored-by: Jiarui XU <xvjiarui0826@gmail.com>
clownrat6 and xvjiarui authored Aug 20, 2021
1 parent 99d8376 commit e235c1a
Showing 22 changed files with 652 additions and 191 deletions.
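Context for the diff below: rather than keeping every full-resolution segmentation map in memory until the end of testing (or dumping each map to a temp file, as `efficient_test` did), the progressive mode reduces each prediction to a few small per-class histograms as soon as it is produced. A minimal, self-contained numpy sketch of that reduction, illustrative only and not the mmseg implementation:

```python
import numpy as np

def intersect_and_union(pred, gt, num_classes, ignore_index=255):
    """Reduce one prediction/label pair to four per-class histograms."""
    mask = gt != ignore_index
    pred, gt = pred[mask], gt[mask]
    area_intersect = np.bincount(pred[pred == gt], minlength=num_classes)
    area_pred = np.bincount(pred, minlength=num_classes)
    area_label = np.bincount(gt, minlength=num_classes)
    area_union = area_pred + area_label - area_intersect
    return area_intersect, area_union, area_pred, area_label

# mmseg keeps one such tiny tuple per image (via dataset.pre_eval) instead of
# a full H x W map; the totals below aggregate them into metrics at the end.
num_classes = 19  # e.g. Cityscapes
totals = np.zeros((4, num_classes), dtype=np.int64)
for _ in range(5):  # stand-in for iterating a test data loader
    pred = np.random.randint(0, num_classes, (512, 512))
    gt = np.random.randint(0, num_classes, (512, 512))
    totals += np.stack(intersect_and_union(pred, gt, num_classes))

iou = totals[0] / np.maximum(totals[1], 1)
print('mIoU:', iou.mean())
```

With random inputs the printed number is meaningless; the point is that the per-image memory cost is O(num_classes) rather than O(H*W).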
2 changes: 1 addition & 1 deletion configs/_base_/schedules/schedule_160k.py
@@ -6,4 +6,4 @@
# runtime settings
runner = dict(type='IterBasedRunner', max_iters=160000)
checkpoint_config = dict(by_epoch=False, interval=16000)
evaluation = dict(interval=16000, metric='mIoU')
evaluation = dict(interval=16000, metric='mIoU', pre_eval=True)
2 changes: 1 addition & 1 deletion configs/_base_/schedules/schedule_20k.py
@@ -6,4 +6,4 @@
# runtime settings
runner = dict(type='IterBasedRunner', max_iters=20000)
checkpoint_config = dict(by_epoch=False, interval=2000)
evaluation = dict(interval=2000, metric='mIoU')
evaluation = dict(interval=2000, metric='mIoU', pre_eval=True)
2 changes: 1 addition & 1 deletion configs/_base_/schedules/schedule_40k.py
@@ -6,4 +6,4 @@
# runtime settings
runner = dict(type='IterBasedRunner', max_iters=40000)
checkpoint_config = dict(by_epoch=False, interval=4000)
evaluation = dict(interval=4000, metric='mIoU')
evaluation = dict(interval=4000, metric='mIoU', pre_eval=True)
2 changes: 1 addition & 1 deletion configs/_base_/schedules/schedule_80k.py
@@ -6,4 +6,4 @@
# runtime settings
runner = dict(type='IterBasedRunner', max_iters=80000)
checkpoint_config = dict(by_epoch=False, interval=8000)
evaluation = dict(interval=8000, metric='mIoU')
evaluation = dict(interval=8000, metric='mIoU', pre_eval=True)
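The four schedule configs above turn the progressive mode on by default. A downstream config inheriting one of them can keep or adjust the setting as usual; the snippet below is a hypothetical user config (the base file paths are assumptions about a typical mmseg layout), shown only to illustrate how `pre_eval` composes with ordinary overrides:

```python
# my_fcn_cityscapes.py -- hypothetical user config, illustrative only
_base_ = [
    '../_base_/models/fcn_r50-d8.py',       # assumed model base
    '../_base_/datasets/cityscapes.py',     # assumed dataset base
    '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_80k.py',
]
# evaluate less often, but keep the memory-friendly progressive mode
evaluation = dict(interval=16000, metric='mIoU', pre_eval=True)
```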
6 changes: 3 additions & 3 deletions docs/inference.md
@@ -21,11 +21,11 @@ python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [-

Optional arguments:

- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file.
- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file. (After mmseg v0.17, the output results are pre-evaluation results or the paths of formatted result files.)
- `EVAL_METRICS`: Items to be evaluated on the results. Allowed values depend on the dataset, e.g., `mIoU` is available for all datasets. Cityscapes could be evaluated by `cityscapes` as well as standard `mIoU` metrics.
- `--show`: If specified, segmentation results will be plotted on the images and shown in a new window. It is only applicable to single GPU testing and used for debugging and visualization. Please make sure that GUI is available in your environment, otherwise you may encounter the error like `cannot connect to X server`.
- `--show-dir`: If specified, segmentation results will be plotted on the images and saved to the specified directory. It is only applicable to single GPU testing and used for debugging and visualization. You do NOT need a GUI available in your environment for using this option.
- `--eval-options`: Optional parameters during evaluation. When `efficient_test=True`, it will save intermediate results to local files to save CPU memory. Make sure that you have enough local storage space (more than 20GB).
- `--eval-options`: Optional parameters for `dataset.format_results` and `dataset.evaluate` during evaluation. When `efficient_test=True`, it will save intermediate results to local files to save CPU memory. Make sure that you have enough local storage space (more than 20GB). (The `efficient_test` argument has no effect after mmseg v0.17; a progressive mode is used to evaluate and format results, which largely reduces memory cost and evaluation time.)

Examples:

@@ -98,4 +98,4 @@ Assume that you have already downloaded the checkpoints to the directory `checkp
--eval mIoU
```

Using ```pmap``` to view the CPU memory footprint, it used 2.25GB of CPU memory with ```efficient_test=True``` and 11.06GB with ```efficient_test=False```. This optional parameter can save a lot of memory.
Using ```pmap``` to view the CPU memory footprint, it used 2.25GB of CPU memory with ```efficient_test=True``` and 11.06GB with ```efficient_test=False```. This optional parameter can save a lot of memory. (After mmseg v0.17, `efficient_test` has no effect; a progressive mode is used by default to evaluate and format results efficiently.)
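The same machinery can be driven from Python, which is roughly what `tools/test.py` does internally after this refactor. A minimal single-GPU sketch of a format-only run (config and checkpoint paths are placeholders, and `imgfile_prefix` is an assumption based on the Cityscapes `format_results` writer; adapt `format_args` to your dataset):

```python
from mmcv import Config
from mmcv.parallel import MMDataParallel
from mmcv.runner import load_checkpoint

from mmseg.apis import single_gpu_test
from mmseg.datasets import build_dataloader, build_dataset
from mmseg.models import build_segmentor

cfg = Config.fromfile('configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py')
dataset = build_dataset(cfg.data.test)
data_loader = build_dataloader(
    dataset, samples_per_gpu=1, workers_per_gpu=2, dist=False, shuffle=False)

model = build_segmentor(cfg.model, test_cfg=cfg.get('test_cfg'))
load_checkpoint(model, 'fcn_cityscapes.pth', map_location='cpu')  # placeholder
model = MMDataParallel(model, device_ids=[0])

# format_only writes result files via dataset.format_results instead of
# returning predictions; format_args is forwarded to that method.
single_gpu_test(
    model,
    data_loader,
    format_only=True,
    format_args=dict(imgfile_prefix='./cityscapes_submit'))
```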
12 changes: 6 additions & 6 deletions docs_zh-CN/inference.md
@@ -20,11 +20,11 @@ python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}]

Optional arguments:

- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file.
- `EVAL_METRICS`: Items to be evaluated on the results. Allowed values depend on the dataset; `mIoU` is available for all datasets, and Cityscapes can also be evaluated with `cityscapes` in addition to the standard `mIoU` metric.
- `--show`: If specified, segmentation results will be plotted on the images and shown in a new window. It is only applicable to single-GPU testing and is used for debugging and visualization. Please make sure that a GUI is available in your environment, otherwise you may encounter an error like `cannot connect to X server`.
- `--show-dir`: If specified, segmentation results will be plotted on the images and saved to the specified directory. It is only applicable to single-GPU testing and is used for debugging and visualization. You do not need a GUI in your environment to use this option.
- `--eval-options`: Optional parameters during evaluation. When `efficient_test=True`, intermediate results are saved to local files to save CPU memory. Make sure that you have enough local storage space (more than 20GB).
- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file. (After MMseg v0.17, `args.out` only saves the intermediate evaluation results or the save paths of the segmentation maps.)
- `EVAL_METRICS`: Items to be evaluated on the results. Allowed values depend on the dataset; `mIoU` is available for all datasets, and Cityscapes can also be evaluated with `cityscapes` in addition to the standard `mIoU` metric.
- `--show`: If specified, segmentation results will be plotted on the images and shown in a new window. It is only applicable to single-GPU testing and is used for debugging and visualization. Please make sure that a GUI is available in your environment, otherwise you may encounter an error like `cannot connect to X server`.
- `--show-dir`: If specified, segmentation results will be plotted on the images and saved to the specified directory. It is only applicable to single-GPU testing and is used for debugging and visualization. You do not need a GUI in your environment to use this option.
- `--eval-options`: Optional parameters during evaluation. When `efficient_test=True`, intermediate results are saved to local files to save CPU memory. Make sure that you have enough local storage space (more than 20GB). (After MMseg v0.17, `efficient_test` no longer takes effect; the test api was refactored to use a progressive mode that evaluates and saves results more efficiently.)

Examples:

@@ -96,4 +96,4 @@ python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}]
--eval mIoU
```

Use ```pmap``` to check the CPU memory footprint: about 2.25GB of CPU memory is used with ```efficient_test=True``` and about 11.06GB with ```efficient_test=False```. This optional parameter can save a lot of CPU memory.
Use ```pmap``` to check the CPU memory footprint: about 2.25GB of CPU memory is used with ```efficient_test=True``` and about 11.06GB with ```efficient_test=False```. This optional parameter can save a lot of CPU memory. (After MMseg v0.17, the `efficient_test` argument no longer takes effect; a progressive mode is used to evaluate and save results more efficiently and quickly.)
136 changes: 101 additions & 35 deletions mmseg/apis/test.py
@@ -1,6 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
import os.path as osp
import tempfile
import warnings

import mmcv
import numpy as np
@@ -19,7 +20,6 @@ def np2tmp(array, temp_file_name=None, tmpdir=None):
function will generate a file name with tempfile.NamedTemporaryFile
to save ndarray. Default: None.
tmpdir (str): Temporary directory to save Ndarray files. Default: None.
Returns:
str: The numpy file name.
"""
@@ -36,8 +36,11 @@ def single_gpu_test(model,
show=False,
out_dir=None,
efficient_test=False,
opacity=0.5):
"""Test with single GPU.
opacity=0.5,
pre_eval=False,
format_only=False,
format_args={}):
"""Test with single GPU by progressive mode.
Args:
model (nn.Module): Model to be tested.
@@ -46,24 +49,60 @@ def single_gpu_test(model,
out_dir (str, optional): If specified, the results will be dumped into
the directory to save output results.
efficient_test (bool): Whether save the results as local numpy files to
save CPU memory during evaluation. Default: False.
save CPU memory during evaluation. Mutually exclusive with
pre_eval and format_results. Default: False.
opacity(float): Opacity of painted segmentation map.
Default 0.5.
Must be in (0, 1] range.
pre_eval (bool): Use dataset.pre_eval() function to generate
pre_results for metric evaluation. Mutually exclusive with
efficient_test and format_results. Default: False.
format_only (bool): Only format result for results commit.
Mutually exclusive with pre_eval and efficient_test.
Default: False.
format_args (dict): The args for format_results. Default: {}.
Returns:
list: The prediction results.
list: list of evaluation pre-results or list of save file names.
"""
if efficient_test:
warnings.warn(
'DeprecationWarning: ``efficient_test`` will be deprecated, the '
'evaluation is CPU memory friendly with pre_eval=True')
mmcv.mkdir_or_exist('.efficient_test')
# when none of them is set true, return segmentation results as
# a list of np.array.
assert [efficient_test, pre_eval, format_only].count(True) <= 1, \
'``efficient_test``, ``pre_eval`` and ``format_only`` are mutually ' \
'exclusive, only one of them could be true .'

model.eval()
results = []
dataset = data_loader.dataset
prog_bar = mmcv.ProgressBar(len(dataset))
if efficient_test:
mmcv.mkdir_or_exist('.efficient_test')
for i, data in enumerate(data_loader):
# The pipeline about how the data_loader retrieval samples from dataset:
# sampler -> batch_sampler -> indices
# The indices are passed to dataset_fetcher to get data from dataset.
# data_fetcher -> collate_fn(dataset[index]) -> data_sample
# we use batch_sampler to get correct data idx
loader_indices = data_loader.batch_sampler

for batch_indices, data in zip(loader_indices, data_loader):
with torch.no_grad():
result = model(return_loss=False, **data)

if efficient_test:
result = [np2tmp(_, tmpdir='.efficient_test') for _ in result]

if format_only:
result = dataset.format_results(
result, indices=batch_indices, **format_args)
if pre_eval:
# TODO: adapt samples_per_gpu > 1.
# only samples_per_gpu=1 valid now
result = dataset.pre_eval(result, indices=batch_indices)

results.extend(result)

if show or out_dir:
img_tensor = data['img'][0]
img_metas = data['img_metas'][0].data[0]
@@ -90,27 +129,22 @@ def single_gpu_test(model,
out_file=out_file,
opacity=opacity)

if isinstance(result, list):
if efficient_test:
result = [np2tmp(_, tmpdir='.efficient_test') for _ in result]
results.extend(result)
else:
if efficient_test:
result = np2tmp(result, tmpdir='.efficient_test')
results.append(result)

batch_size = len(result)
for _ in range(batch_size):
prog_bar.update()

return results


def multi_gpu_test(model,
data_loader,
tmpdir=None,
gpu_collect=False,
efficient_test=False):
"""Test model with multiple gpus.
efficient_test=False,
pre_eval=False,
format_only=False,
format_args={}):
"""Test model with multiple gpus by progressive mode.
This method tests model with multiple gpus and collects the results
under two different modes: gpu and cpu modes. By setting 'gpu_collect=True'
@@ -123,39 +157,71 @@ def multi_gpu_test(model,
data_loader (utils.data.Dataloader): Pytorch data loader.
tmpdir (str): Path of directory to save the temporary results from
different gpus under cpu mode. The same path is used for efficient
test.
test. Default: None.
gpu_collect (bool): Option to use either gpu or cpu to collect results.
Default: False.
efficient_test (bool): Whether save the results as local numpy files to
save CPU memory during evaluation. Default: False.
save CPU memory during evaluation. Mutually exclusive with
pre_eval and format_results. Default: False.
pre_eval (bool): Use dataset.pre_eval() function to generate
pre_results for metric evaluation. Mutually exclusive with
efficient_test and format_results. Default: False.
format_only (bool): Only format result for results commit.
Mutually exclusive with pre_eval and efficient_test.
Default: False.
format_args (dict): The args for format_results. Default: {}.
Returns:
list: The prediction results.
list: list of evaluation pre-results or list of save file names.
"""
if efficient_test:
warnings.warn(
'DeprecationWarning: ``efficient_test`` will be deprecated, the '
'evaluation is CPU memory friendly with pre_eval=True')
mmcv.mkdir_or_exist('.efficient_test')
# when none of them is set true, return segmentation results as
# a list of np.array.
assert [efficient_test, pre_eval, format_only].count(True) <= 1, \
'``efficient_test``, ``pre_eval`` and ``format_only`` are mutually ' \
'exclusive, only one of them could be true .'

model.eval()
results = []
dataset = data_loader.dataset
# The pipeline about how the data_loader retrieval samples from dataset:
# sampler -> batch_sampler -> indices
# The indices are passed to dataset_fetcher to get data from dataset.
# data_fetcher -> collate_fn(dataset[index]) -> data_sample
# we use batch_sampler to get correct data idx

# batch_sampler based on DistributedSampler, the indices only point to data
# samples of related machine.
loader_indices = data_loader.batch_sampler

rank, world_size = get_dist_info()
if rank == 0:
prog_bar = mmcv.ProgressBar(len(dataset))
if efficient_test:
mmcv.mkdir_or_exist('.efficient_test')
for i, data in enumerate(data_loader):

for batch_indices, data in zip(loader_indices, data_loader):
with torch.no_grad():
result = model(return_loss=False, rescale=True, **data)

if isinstance(result, list):
if efficient_test:
result = [np2tmp(_, tmpdir='.efficient_test') for _ in result]
results.extend(result)
else:
if efficient_test:
result = np2tmp(result, tmpdir='.efficient_test')
results.append(result)
if efficient_test:
result = [np2tmp(_, tmpdir='.efficient_test') for _ in result]

if format_only:
result = dataset.format_results(
result, indices=batch_indices, **format_args)
if pre_eval:
# TODO: adapt samples_per_gpu > 1.
# only samples_per_gpu=1 valid now
result = dataset.pre_eval(result, indices=batch_indices)

results.extend(result)

if rank == 0:
batch_size = len(result)
for _ in range(batch_size * world_size):
batch_size = len(result) * world_size
for _ in range(batch_size):
prog_bar.update()

# collect results from all ranks
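Tying the refactored functions above together, a metric run in the progressive mode looks roughly like the sketch below, reusing `model` and `data_loader` built as in the earlier format-only example; `dataset.evaluate` accepting pre-eval results is part of this refactor:

```python
from mmseg.apis import single_gpu_test

# With pre_eval=True the returned list holds small per-image
# (intersection, union, pred_area, label_area) tuples rather than
# full segmentation maps or temp-file names.
results = single_gpu_test(model, data_loader, pre_eval=True)

# The refactored dataset.evaluate consumes pre-eval results directly.
metrics = data_loader.dataset.evaluate(results, metric='mIoU')
print(metrics['mIoU'], metrics['aAcc'])
```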
6 changes: 4 additions & 2 deletions mmseg/core/evaluation/__init__.py
@@ -1,9 +1,11 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .class_names import get_classes, get_palette
from .eval_hooks import DistEvalHook, EvalHook
from .metrics import eval_metrics, mean_dice, mean_fscore, mean_iou
from .metrics import (eval_metrics, intersect_and_union, mean_dice,
mean_fscore, mean_iou, pre_eval_to_metrics)

__all__ = [
'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore',
'eval_metrics', 'get_classes', 'get_palette'
'eval_metrics', 'get_classes', 'get_palette', 'pre_eval_to_metrics',
'intersect_and_union'
]
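The two newly exported helpers can also be used on their own when predictions and ground truth are already in hand. A small sketch with synthetic arrays (the return keys such as `IoU` and `aAcc` reflect the metrics module's naming and are stated here as an assumption):

```python
import numpy as np

from mmseg.core.evaluation import intersect_and_union, pre_eval_to_metrics

num_classes, ignore_index = 19, 255

# one (area_intersect, area_union, area_pred, area_label) tuple per image
pre_eval_results = []
for _ in range(4):  # stand-in for a loop over test predictions
    pred = np.random.randint(0, num_classes, (512, 512))
    gt = np.random.randint(0, num_classes, (512, 512))
    pre_eval_results.append(
        intersect_and_union(pred, gt, num_classes, ignore_index))

ret_metrics = pre_eval_to_metrics(pre_eval_results, metrics=['mIoU'])
print('mIoU:', np.nanmean(ret_metrics['IoU']), 'aAcc:', ret_metrics['aAcc'])
```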
46 changes: 37 additions & 9 deletions mmseg/core/evaluation/eval_hooks.py
@@ -1,5 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
import os.path as osp
import warnings

import torch.distributed as dist
from mmcv.runner import DistEvalHook as _DistEvalHook
@@ -16,15 +17,28 @@ class EvalHook(_EvalHook):
Default: False.
efficient_test (bool): Whether save the results as local numpy files to
save CPU memory during evaluation. Default: False.
pre_eval (bool): Whether to use progressive mode to evaluate model.
Default: False.
Returns:
list: The prediction results.
"""

greater_keys = ['mIoU', 'mAcc', 'aAcc']

def __init__(self, *args, by_epoch=False, efficient_test=False, **kwargs):
def __init__(self,
*args,
by_epoch=False,
efficient_test=False,
pre_eval=False,
**kwargs):
super().__init__(*args, by_epoch=by_epoch, **kwargs)
self.efficient_test = efficient_test
self.pre_eval = pre_eval
if efficient_test:
warnings.warn(
'DeprecationWarning: ``efficient_test`` for evaluation hook '
'is deprecated, the evaluation hook is CPU memory friendly '
'with ``pre_eval=True`` as argument for ``single_gpu_test()`` '
'function')

def _do_evaluate(self, runner):
"""perform evaluation and save ckpt."""
@@ -33,10 +47,8 @@ def _do_evaluate(self, runner):

from mmseg.apis import single_gpu_test
results = single_gpu_test(
runner.model,
self.dataloader,
show=False,
efficient_test=self.efficient_test)
runner.model, self.dataloader, show=False, pre_eval=self.pre_eval)
runner.log_buffer.clear()
runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
key_score = self.evaluate(runner, results)
if self.save_best:
@@ -52,15 +64,28 @@ class DistEvalHook(_DistEvalHook):
Default: False.
efficient_test (bool): Whether save the results as local numpy files to
save CPU memory during evaluation. Default: False.
pre_eval (bool): Whether to use progressive mode to evaluate model.
Default: False.
Returns:
list: The prediction results.
"""

greater_keys = ['mIoU', 'mAcc', 'aAcc']

def __init__(self, *args, by_epoch=False, efficient_test=False, **kwargs):
def __init__(self,
*args,
by_epoch=False,
efficient_test=False,
pre_eval=False,
**kwargs):
super().__init__(*args, by_epoch=by_epoch, **kwargs)
self.efficient_test = efficient_test
self.pre_eval = pre_eval
if efficient_test:
warnings.warn(
'DeprecationWarning: ``efficient_test`` for evaluation hook '
'is deprecated, the evaluation hook is CPU memory friendly '
'with ``pre_eval=True`` as argument for ``multi_gpu_test()`` '
'function')

def _do_evaluate(self, runner):
"""perform evaluation and save ckpt."""
@@ -90,7 +115,10 @@ def _do_evaluate(self, runner):
self.dataloader,
tmpdir=tmpdir,
gpu_collect=self.gpu_collect,
efficient_test=self.efficient_test)
pre_eval=self.pre_eval)

runner.log_buffer.clear()

if runner.rank == 0:
print('\n')
runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
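For reference, the hook above is normally wired in by `train_segmentor` from the `evaluation` config field; an explicit registration equivalent to `evaluation = dict(interval=4000, metric='mIoU', pre_eval=True)` would look roughly like the sketch below (`cfg` and `runner` come from the surrounding training script, and the `priority='LOW'` choice is an assumption about how mmseg registers the hook):

```python
from mmseg.core.evaluation import EvalHook
from mmseg.datasets import build_dataloader, build_dataset

val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
val_dataloader = build_dataloader(
    val_dataset,
    samples_per_gpu=1,
    workers_per_gpu=2,
    dist=False,
    shuffle=False)

# interval/by_epoch configure the hook itself; remaining kwargs (metric, ...)
# are forwarded to dataset.evaluate; pre_eval switches on the progressive mode.
eval_hook = EvalHook(
    val_dataloader, interval=4000, by_epoch=False, metric='mIoU', pre_eval=True)
runner.register_hook(eval_hook, priority='LOW')
```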