bump to v1.0.0rc2 (#1405)
* [Improve] add a version option in docs menu (#1162)
* [Enhance] update dev_scripts for link checking (#1164)
* [Refactoring] decompose the implementations of different metrics into several files (#1161)
* [Fix] Fix PPL bug (#1172)
* [Fix] Fix some known bugs. (#1200)
* [Fix] Benchmark related bugs (#1236)
* [Enhancement] Support rerun failed or canceled jobs in `train_benchmark.py` (#1259)
* [Fix] Fix bugs in `sr test config`, `realbasicvsr config` and `pconv config` (#1167)
* [Fix] fix test of Vid4 datasets bug (#1293)
* [Feature] Support multi-metrics with different sample-model (#1171)
* [Fix] fix GenerateSegmentIndices ut (#1302)
* [Enhancement] Reduce the randomness in unit test of `stylegan3_utils.py` (#1306)
* [CI] Fix GitHub windows CI (#1320)
* [Fix] fix basicvsr++ mirror sequence bug (#1304)
* [Fix] fix sisr-test psnr config (#1319)
* [Fix] fix vsr models pytorch2onnx (#1300)
* [Bug] Ensure the output type of `GenerateFacialHeatmap` is `np.float32` (#1310)
* [Bug] Fix sampling behavior of `unpaired_dataset.py` and urls in cyclegan's README (#1308)
* [README] Fix TTSR's README (#1325)
* [CI] Update `paths-ignore` for GitHub CI (#1327)
* [Bug] Save gt images in PGGAN's `forward` (#1328)
* [Bug] Correct RDN number of channels (#1332)
* [Bug] Revise flip transformation in some conditional gan's setting (#1331)
* [Unit Test] Fix unit test of SNR (#1335)
* [Bug] Revise flavr config (#1336)
* [Fix] fix realesrgan ema (#1341)
* [Fix] Fix bugs find during benchmark running (#1348)
* [Fix] fix liif test config (#1353)
* [Enhancement] Complete save_best in configs (#1349)
* [Config] Revise discriminator's learning rate of TTSR to align with 0.x version (#1352)
* [Fix] fix edsr configs (#1367)
* [Enhancement] Add pixel value clip in visualizer (#1365)
* [Bug] Fix randomness in FixedCrop + add L1 loss in Pix2Pix (#1364)
* [Fix] fix realbasicvsr config (#1358)
* [Enhancement] Fix PESinGAN-inter-pad setting + add SinGAN Dataset + add SinGAN demo (#1363)
* [Fix] fix types of exceptions in demos (#1372)
* [Enhancement] Support deterministic training in benchmark (#1356)
* [Fix] Avoid cast int and float in GenDataPreprocessor (#1385)
* [Config] Update metric config in ggan (#1386)
* [Config] Revise batch size in wang-gp's config (#1384)
* [Fix]: add type and change default number of preprocess_div2k_dataset.py (#1380)
* [Feature] Support qualitative comparison tools (#1303)
* [Docs] Revise docs (change PackGenInputs and GenDataSample to mmediting ones) (#1382)
* [Config] Revise Pix2Pix edges2shoes config (#1391)
* [Bug] fix rdn and srcnn train configs (#1392)
* [Fix] Fix test/val pipeline of pegan configs (#1393)
* [Fix] Modify Readme of S3 (#1398)
* [Fix] Correct fid of ggan (#1397)
* [Feature] support instance_aware_colorization inference (#1370)

Co-authored-by: ruoning <w853133995@outlook.com>
Co-authored-by: Yifei Yang <2744335995@qq.com>
Co-authored-by: LeoXing1996 <xingzn1996@hotmail.com>
Co-authored-by: Z-Fran <49083766+Z-Fran@users.noreply.github.com>
Co-authored-by: Qunliang Xing <ryanxingql@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: ruoning <44117949+ruoningYu@users.noreply.github.com>
8 people authored Nov 3, 2022
1 parent 998df8c commit 68fd55c
Showing 202 changed files with 7,123 additions and 2,538 deletions.
1 change: 1 addition & 0 deletions .circleci/config.yml
@@ -26,6 +26,7 @@ workflows:
tools/.* lint_only false
configs/.* lint_only false
.circleci/.* lint_only false
.dev_scripts/.* lint_only true
base-revision: 1.x
# this is the path of the configuration we should trigger once
# path filtering and pipeline parameter value updates are
4 changes: 3 additions & 1 deletion .circleci/test.yml
@@ -62,6 +62,7 @@ jobs:
pip install git+https://github.com/open-mmlab/mmengine.git@main
pip install -U openmim
mim install 'mmcv >= 2.0.0rc1'
mim install 'mmdet >= 3.0.0rc2'
pip install -r requirements/tests.txt
- run:
name: Build and install
@@ -98,13 +99,14 @@ jobs:
name: Build Docker image
command: |
docker build .circleci/docker -t mmedit:gpu --build-arg PYTORCH=<< parameters.torch >> --build-arg CUDA=<< parameters.cuda >> --build-arg CUDNN=<< parameters.cudnn >>
docker run --gpus all -t -d -v /home/circleci/project:/mmedit -v /home/circleci/mmengine:/mmengine -w /mmedit --name mmedit mmedit:gpu
docker run --gpus all -t -d -v /home/circleci/project:/mmedit -v /home/circleci/mmengine:/mmengine -v /home/circleci/mmdetection:/mmdetection -w /mmedit --name mmedit mmedit:gpu
- run:
name: Install mmedit dependencies
command: |
docker exec mmedit pip install -e /mmengine
docker exec mmedit pip install -U openmim
docker exec mmedit mim install 'mmcv >= 2.0.0rc1'
docker exec mmedit mim install 'mmdet >= 3.0.0rc2'
docker exec mmedit pip install -r requirements/tests.txt
- run:
name: Build and install
57 changes: 54 additions & 3 deletions .dev_scripts/README.md
@@ -9,6 +9,7 @@
- [4. Monitor your training](#4-monitor-your-training)
- [5. Train with a list of models](#5-train-with-a-list-of-models)
- [6. Train with skipping a list of models](#6-train-with-skipping-a-list-of-models)
- [7. Train failed or canceled jobs](#7-train-failed-or-canceled-jobs)
- [8. `deterministic` training](#8-deterministic-training)
- [9. Automatically check links](#9-automatically-check-links)

## 1. Check UT

@@ -128,7 +129,7 @@ python .dev_scripts/train_benchmark.py mm_lol \
--quotatype=auto
```

# 4. Monitor your training
## 4. Monitor your training

After you submit jobs following [3-Train-all-the-models](#3-train-all-the-models), you will find an `xxx.log` file.
This log file lists the name and job ID of every job you have submitted. With this log file, you can monitor your training by running `.dev_scripts/job_watcher.py`.
@@ -141,7 +142,7 @@ python .dev_scripts/job_watcher.py --work-dir work_dirs/benchmark_fp32/ --log 20

Then, you will find `20220923-140317.csv`, which reports the status and recent log of each job.

# 5. Train with a list of models
## 5. Train with a list of models

If you only need to run some of the models, you can list the model names in a file and specify them when using `train_benchmark.py`.

@@ -162,7 +163,7 @@ python .dev_scripts/train_benchmark.py mm_lol \

Specifically, you need to enable `--rerun` and specify the list of models to rerun with `--rerun-list`.

# 6. Train with skipping a list of models
## 6. Train with skipping a list of models

If you want to train all the models while skipping some of them, you can also list the model names in a file and specify them when running `train_benchmark.py`.

@@ -182,3 +183,53 @@ python .dev_scripts/train_benchmark.py mm_lol \
```

Specifically, you need to enable `--skip` and specify the list of models to skip with `--skip-list`.

## 7. Train failed or canceled jobs

If you want to rerun jobs that failed or were canceled in the last run, you can combine the `--rerun` flag with the `--rerun-fail` and `--rerun-cancel` flags.

For example, the log file of the last run is `train-20221009-211904.log`, and now you want to rerun the failed jobs. You can use the following command:

```bash
python .dev_scripts/train_benchmark.py mm_lol \
--job-name RERUN \
--rerun train-20221009-211904.log \
--rerun-fail \
--run
```

We can combine `--rerun-fail` and `--rerun-cancel` with the `--models` flag to rerun a **subset** of the failed or canceled models.

```bash
# only rerun 'sagan' models among all failed tasks
python .dev_scripts/train_benchmark.py mm_lol \
--job-name RERUN \
--rerun train-20221009-211904.log \
--rerun-fail \
--models sagan \
--run
```

Specifically, `--rerun-fail` and `--rerun-cancel` can be used together to rerun both failed and canceled jobs.

## 8. `deterministic` training

Setting `torch.backends.cudnn.deterministic = True` and `torch.backends.cudnn.benchmark = False` removes randomness from cuDNN operations in PyTorch training. You can add the `--deterministic` flag when starting your benchmark training to remove the influence of these random operations.

```shell
python .dev_scripts/train_benchmark.py mm_lol --job-name xzn --models pix2pix --cpus-per-job 16 --run --deterministic
```

## 9. Automatically check links

Use the following script to check whether the links in the documentation are valid:

```shell
python3 .github/scripts/doc_link_checker.py --target docs/zh_cn
python3 .github/scripts/doc_link_checker.py --target README_zh-CN.md
python3 .github/scripts/doc_link_checker.py --target docs/en
python3 .github/scripts/doc_link_checker.py --target README.md
```

You can point `--target` at either a file or a directory.

**Note:** DO NOT use it in CI: sending too many HTTP requests from CI will cause 503 errors and the CI job will probably fail.
11 changes: 5 additions & 6 deletions .dev_scripts/create_ceph_configs.py
@@ -40,7 +40,7 @@ def convert_data_config(data_cfg):
dataset: dict = data_cfg['dataset']

dataset_type: str = dataset['type']
if 'mmcls' in dataset_type:
if dataset_type in ['ImageNet', 'CIFAR10']:
repo_name = 'classification'
else:
repo_name = 'editing'
@@ -112,8 +112,6 @@ def convert_data_config(data_cfg):
bg_dir_path = bg_dir_path.replace(dataroot_prefix,
ceph_dataroot_prefix)
bg_dir_path = bg_dir_path.replace(repo_name, 'detection')
bg_dir_path = bg_dir_path.replace('openmmlab:',
'sproject:')
pipeline['bg_dir'] = bg_dir_path
elif type_ == 'CompositeFg':
fg_dir_path = pipeline['fg_dirs']
@@ -188,9 +186,10 @@ def update_ceph_config(filename, args, dry_run=False):

# 2. change visualizer
if hasattr(config, 'vis_backends'):
for vis_cfg in config['vis_backends']:
if vis_cfg['type'] == 'GenVisBackend':
vis_cfg['ceph_path'] = work_dir
# TODO: support upload to ceph
# for vis_cfg in config['vis_backends']:
# if vis_cfg['type'] == 'GenVisBackend':
# vis_cfg['ceph_path'] = work_dir

# add pavi config
if args.add_pavi:
85 changes: 85 additions & 0 deletions .dev_scripts/doc_link_checker.py
@@ -0,0 +1,85 @@
# Copyright (c) MegFlow. All rights reserved.
# /bin/python3

import argparse
import os
import re


def make_parser():
    parser = argparse.ArgumentParser('Doc link checker')
    parser.add_argument(
        '--http', action='store_true', help='also check http links')
    parser.add_argument(
        '--target',
        default='./docs',
        type=str,
        help='the directory or file to check')
    return parser


pattern = re.compile(r'\[.*?\]\(.*?\)')


def analyze_doc(home, path):
    print('analyze {}'.format(path))
    problem_list = []
    code_block = 0
    with open(path) as f:
        lines = f.readlines()
        for line in lines:
            line = line.strip()
            # toggle the flag at every ``` fence so links inside
            # code blocks are ignored
            if line.startswith('```'):
                code_block = 1 - code_block

            if code_block > 0:
                continue

            if '[' in line and ']' in line and '(' in line and ')' in line:
                all = pattern.findall(line)
                for item in all:
                    # skip ![]()
                    if item.find('[') == item.find(']') - 1:
                        continue

                    # process the case [text()]()
                    offset = item.find('](')
                    if offset == -1:
                        continue
                    item = item[offset:]
                    start = item.find('(')
                    end = item.find(')')
                    ref = item[start + 1:end]

                    if ref.startswith('http') or ref.startswith('#'):
                        continue
                    if '.md#' in ref:
                        # drop the in-page anchor before checking the path
                        ref = ref[:ref.find('#')]
                    fullpath = os.path.join(home, ref)
                    if not os.path.exists(fullpath):
                        problem_list.append(ref)
                    else:
                        continue
    if len(problem_list) > 0:
        print(f'{path}:')
        for item in problem_list:
            print(f'\t {item}')
        print('\n')
        raise Exception('found link error')


def traverse(target):
    if os.path.isfile(target):
        analyze_doc(os.path.dirname(target), target)
        return
    for home, dirs, files in os.walk(target):
        for filename in files:
            if filename.endswith('.md'):
                path = os.path.join(home, filename)
                if os.path.islink(path) is False:
                    analyze_doc(home, path)


if __name__ == '__main__':
    args = make_parser().parse_args()
    traverse(args.target)
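The link-matching regex at the heart of the new checker can be exercised in isolation. A minimal sketch (the sample line is made up for illustration):

```python
import re

# Same non-greedy pattern as in doc_link_checker.py: it captures each
# "[text](target)" pair separately instead of spanning across pairs.
pattern = re.compile(r'\[.*?\]\(.*?\)')

line = 'See [the docs](docs/en/index.md) and ![logo](assets/logo.png).'
matches = pattern.findall(line)
print(matches)
# ['[the docs](docs/en/index.md)', '[logo](assets/logo.png)']
```

Note that the image link is still matched (the pattern starts at `[`, so the `!` is not captured); the checker filters images, `http` links, and in-page anchors in a later step.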
6 changes: 6 additions & 0 deletions .dev_scripts/download_models.py
@@ -76,6 +76,7 @@ def download(args):

http_prefix_long = 'https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmediting/' # noqa
http_prefix_short = 'https://download.openmmlab.com/mmediting/'
http_prefix_gen = 'https://download.openmmlab.com/mmgen/'

# load model list
if args.model_list:
Expand Down Expand Up @@ -112,6 +113,11 @@ def download(args):
model_name = model_weight_url[len(http_prefix_long):]
elif model_weight_url.startswith(http_prefix_short):
model_name = model_weight_url[len(http_prefix_short):]
elif model_weight_url.startswith(http_prefix_gen):
model_name = model_weight_url[len(http_prefix_gen):]
elif model_weight_url == '':
print(f'{model_info.Name} weight is missing')
return None
else:
raise ValueError(f'Unknown url prefix. \'{model_weight_url}\'')

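The prefix handling added above amounts to a small lookup: strip whichever known prefix matches and treat the remainder as the model name. A hypothetical standalone sketch (`model_name_from_url` is our name, not the script's):

```python
# Prefixes copied from the diff; the function mirrors the if/elif chain
# in download_models.py and test_benchmark.py.
HTTP_PREFIXES = (
    'https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmediting/',
    'https://download.openmmlab.com/mmediting/',
    'https://download.openmmlab.com/mmgen/',
)


def model_name_from_url(url):
    if url == '':
        return None  # weight missing, mirrors the early return in the diff
    for prefix in HTTP_PREFIXES:
        if url.startswith(prefix):
            return url[len(prefix):]
    raise ValueError(f"Unknown url prefix. '{url}'")


print(model_name_from_url('https://download.openmmlab.com/mmgen/sagan/a.pth'))
# sagan/a.pth
```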
2 changes: 1 addition & 1 deletion .dev_scripts/job_watcher.py
@@ -9,7 +9,7 @@
from pygments.util import ClassNotFound
from simple_term_menu import TerminalMenu

CACHE_DIR = '~/.task_watcher'
CACHE_DIR = osp.join(osp.expanduser('~'), '.task_watcher')


def show_job_out(name, root, job_name_list):
5 changes: 5 additions & 0 deletions .dev_scripts/metric_mapping.py
@@ -1,5 +1,10 @@
# key-in-metafile: key-in-results.pkl
METRICS_MAPPING = {
'FID': {
'keys': ['FID-Full-50k/fid'],
'tolerance': 0.5,
'rule': 'less'
},
'PSNR': {
'keys': ['PSNR'],
'tolerance': 0.1,
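The `tolerance`/`rule` fields suggest how benchmark results are compared against metafile values. The following sketch is a guess at that comparison; the `within_tolerance` helper and PSNR's `rule` value are assumptions, since the diff truncates before PSNR's rule:

```python
METRICS_MAPPING = {
    'FID': {'keys': ['FID-Full-50k/fid'], 'tolerance': 0.5, 'rule': 'less'},
    # 'larger' for PSNR is an assumption; the diff cuts off before its rule.
    'PSNR': {'keys': ['PSNR'], 'tolerance': 0.1, 'rule': 'larger'},
}


def within_tolerance(metric, expected, actual):
    """Hypothetical check: pass if `actual` is no worse than `expected`
    by more than `tolerance` in the metric's preferred direction."""
    spec = METRICS_MAPPING[metric]
    tol = spec['tolerance']
    if spec['rule'] == 'less':  # lower is better (e.g. FID)
        return actual <= expected + tol
    return actual >= expected - tol  # higher is better (e.g. PSNR)


print(within_tolerance('FID', expected=9.8, actual=10.1))  # True
```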
6 changes: 6 additions & 0 deletions .dev_scripts/test_benchmark.py
@@ -100,12 +100,18 @@ def create_test_job_batch(commands, model_info, args, port, script_name):

http_prefix_short = 'https://download.openmmlab.com/mmediting/'
http_prefix_long = 'https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmediting/' # noqa
http_prefix_gen = 'https://download.openmmlab.com/mmgen/'
model_weight_url = model_info.weights

if model_weight_url.startswith(http_prefix_long):
model_name = model_weight_url[len(http_prefix_long):]
elif model_weight_url.startswith(http_prefix_short):
model_name = model_weight_url[len(http_prefix_short):]
elif model_weight_url.startswith(http_prefix_gen):
model_name = model_weight_url[len(http_prefix_gen):]
elif model_weight_url == '':
print(f'{fname} weight is missing')
return None
else:
raise ValueError(f'Unknown url prefix. \'{model_weight_url}\'')
