Pre release #1

Merged: 260 commits, Jul 7, 2020

Commits
a3fc8f6
merge mmdet resnet
xvjiarui Apr 1, 2020
4c5df48
change default to navie syncbn, reformart pretrain
xvjiarui Apr 1, 2020
5313e35
remove work_dir from slurm_train
xvjiarui Apr 1, 2020
a7afe48
fixed norm import, fixed csaill r101 config
xvjiarui Apr 1, 2020
825f12f
add eval_hook w\o dist
xvjiarui Apr 1, 2020
e9d3d60
make crop and pad together
xvjiarui Apr 2, 2020
1be3fd6
add augment ablation
xvjiarui Apr 2, 2020
c651526
use ratio range, set norm_mult=0
xvjiarui Apr 3, 2020
7692f14
add lr_mult, fix aux_head config
xvjiarui Apr 3, 2020
4947a21
add ratio option in test_aug
xvjiarui Apr 4, 2020
6ac9df8
add CCNet
xvjiarui Apr 5, 2020
2ac9e17
refactor slurm_train
xvjiarui Apr 6, 2020
9354e85
Merge branch 'master' into ccnet
xvjiarui Apr 6, 2020
58ae663
fixed cc head
xvjiarui Apr 6, 2020
a270d70
reorder rotate, change psp to syncbn, clean configs
xvjiarui Apr 7, 2020
c0211aa
Merge branch 'master' into ccnet
xvjiarui Apr 8, 2020
e5d3bc1
fixed lr
xvjiarui Apr 8, 2020
e89bb5d
add psa head
xvjiarui Apr 10, 2020
71a54c4
add optimizer builder
xvjiarui Apr 10, 2020
9618ee1
Merge branch 'master' into psanet
xvjiarui Apr 10, 2020
f5230cb
fixed psa config
xvjiarui Apr 10, 2020
4fd8738
add nonlocal and gcnet
xvjiarui Apr 10, 2020
e916d1f
rename ccnet
xvjiarui Apr 10, 2020
f78db2c
fixed backward bug
xvjiarui Apr 10, 2020
5ce030f
add norm_cfg in NLNet, add FCN
xvjiarui Apr 11, 2020
558a398
add uper head
xvjiarui Apr 12, 2020
3f793c0
add hrnet
xvjiarui Apr 12, 2020
bf0b17a
add option in train.py, update requirements
xvjiarui Apr 13, 2020
b96ce40
update docs
xvjiarui Apr 13, 2020
1a195c7
make dataset transforms align with mmdet
xvjiarui Apr 13, 2020
3ce1dfc
fixed resnext name bug
xvjiarui Apr 13, 2020
29147be
rename to EncoderDecoder
xvjiarui Apr 13, 2020
0a98b5e
rename to EncoderDecoder in config
xvjiarui Apr 13, 2020
bb9b0c2
support input transform in decode_head
xvjiarui Apr 13, 2020
30821a5
add test cases
xvjiarui Apr 13, 2020
c43ae27
Add .gitlab-ci.yml
Apr 13, 2020
fb124e8
reformat res_lay
xvjiarui Apr 13, 2020
d2d16e5
update ci
xvjiarui Apr 13, 2020
71f54fe
update ci
xvjiarui Apr 13, 2020
c9d6072
fixed resnet doctest
xvjiarui Apr 13, 2020
6ee4fa9
add test image
xvjiarui Apr 13, 2020
ec0fe6f
fixed torch.Tensor.clone in test
xvjiarui Apr 13, 2020
807a143
Add DeeplabV3 with ASPP head
Apr 14, 2020
41e8d06
Merge branch 'deeplabv3' into 'master'
Apr 14, 2020
b81fd21
Cityscapes dataset refactor
Apr 14, 2020
b422f9e
Merge branch 'dataset' into 'master'
Apr 14, 2020
e970ebe
Set align_corners=False
Apr 14, 2020
7704a2a
Merge branch 'align_corners' into 'master'
Apr 14, 2020
ddc1fca
Merge branch 'master' into hr
xvjiarui Apr 14, 2020
ff3c0be
align hrnet with master
xvjiarui Apr 14, 2020
0aad872
Merge branch 'master' into nlgc
xvjiarui Apr 14, 2020
e28db69
align nlgc with master
xvjiarui Apr 15, 2020
ee90840
align ccnet with master
xvjiarui Apr 15, 2020
bcd6990
fixed head opti constructor
xvjiarui Apr 15, 2020
c010289
Merge branch 'master' into psanet
xvjiarui Apr 15, 2020
d572856
align psa with master, fixed psa cpp bug
xvjiarui Apr 15, 2020
5084e56
merge, align uper with master, fixed uper fpn padding
xvjiarui Apr 15, 2020
e343d56
Merge branch 'hr' into dev
xvjiarui Apr 16, 2020
3990e8a
merged nlgc
xvjiarui Apr 16, 2020
dae8e80
merged ccnet
xvjiarui Apr 16, 2020
236576d
merge uper
xvjiarui Apr 16, 2020
162b1cd
add align_corners option in decode head
xvjiarui Apr 16, 2020
9cd10fa
remove unused ops
xvjiarui Apr 17, 2020
6bb4946
change deeplab default test
xvjiarui Apr 18, 2020
50cfc5a
add resize wrapper
xvjiarui Apr 18, 2020
90faa7e
change test size
xvjiarui Apr 18, 2020
0d003bc
refactor aspp, psp, uper_head
xvjiarui Apr 18, 2020
c8664ff
Add OHEM sampler
Apr 19, 2020
ac83915
Merge branch 'ohem' into 'dev'
Apr 19, 2020
5945248
Add ANN, DANet, OCRNet
Apr 29, 2020
71dd1d0
Merge branch 'ann' into 'dev'
Apr 29, 2020
e31c830
Deeplabv3plus
May 6, 2020
da2f126
Merge branch 'deeplabv3plus' into 'dev'
May 6, 2020
cbad5d0
Rename and refactor
May 8, 2020
e8618da
Merge branch 'rename' into 'dev'
May 8, 2020
48d53cf
fixed samples per_gpu
xvjiarui May 8, 2020
bfc1f20
fixed gpu_ids and head_optimizer
xvjiarui May 8, 2020
1d341e5
Add VOC, ADE, PASCAL_CONTEXT
May 8, 2020
9df18dd
Merge branch 'voc' into 'dev'
May 8, 2020
c78e9aa
make validate by default
xvjiarui May 9, 2020
9ef5d4f
unify resnet
xvjiarui May 9, 2020
f385895
rename psp_module to ppm
xvjiarui May 9, 2020
9cacac5
merged decode_seg into encode_decode
xvjiarui May 9, 2020
9c73243
add cityscapes unified configs
xvjiarui May 12, 2020
3b149cb
fixed flake
xvjiarui May 12, 2020
7981fca
Merge branch 'dev' into unify
xvjiarui May 12, 2020
7e339d7
fixed flake in analyze_log.py
xvjiarui May 12, 2020
5929fe7
Merge branch 'dev' into unify
xvjiarui May 12, 2020
6873bf1
add iter configs
xvjiarui May 12, 2020
ac18d27
fixed checkpoint
xvjiarui May 12, 2020
110cb24
fixed eval configs
xvjiarui May 13, 2020
335747d
fixed epoch config pretrain
xvjiarui May 13, 2020
ccd0585
add resnetv1d
xvjiarui May 13, 2020
2e6d8fa
add resnetv1c
xvjiarui May 13, 2020
61fd810
set pretrain, 4x2, Brightness, 60ki
xvjiarui May 15, 2020
9fbf265
fixed naive syncbn import
xvjiarui May 17, 2020
2688a7d
add some docs
xvjiarui May 17, 2020
2168261
add 60ki for resnet base methods
xvjiarui May 17, 2020
7ca9eee
fixed ci
xvjiarui May 17, 2020
226e5de
remove with_seg, move unit8 convert to loader
xvjiarui May 18, 2020
cfd5353
fixed pretrain
xvjiarui May 18, 2020
0917d7e
fixed pretrain
xvjiarui May 18, 2020
3b86e8e
support FileClient
xvjiarui May 19, 2020
d51f481
add 40ki configs
xvjiarui May 19, 2020
f0a4845
use non_pad mode
xvjiarui May 20, 2020
4996f33
move crop ahead
xvjiarui May 20, 2020
d76b95a
use pad crop
xvjiarui May 21, 2020
c7d0a81
use aug and cudnn=True
xvjiarui May 22, 2020
38c90d5
use new petrel
xvjiarui May 22, 2020
f9b884e
remove head class_weight
xvjiarui May 22, 2020
1fa75fd
add cat_mat_ratio
xvjiarui May 22, 2020
2654c94
add 80k
xvjiarui May 24, 2020
b3e994b
add 512 1024 config
xvjiarui May 26, 2020
61b70de
add backbone ablation
xvjiarui May 27, 2020
24cdbdc
fixed pretrain
xvjiarui May 27, 2020
b6d444a
add HeadLr10
xvjiarui May 28, 2020
4fe66ec
change pretrain
xvjiarui May 28, 2020
c522a2b
remove head10
xvjiarui May 29, 2020
02e1ab4
add hrnet
xvjiarui May 31, 2020
bdefb01
fixed train cfg
xvjiarui May 31, 2020
a7709e8
add ade
xvjiarui May 31, 2020
30b346d
fixed cascade eval bug
xvjiarui May 31, 2020
44ce94d
fixed eval, change to 4 imgs
xvjiarui Jun 1, 2020
feb0aaa
add ohem
xvjiarui Jun 1, 2020
02844b8
add ade for psa, hr
xvjiarui Jun 1, 2020
2066f28
fixed 1024 hr
xvjiarui Jun 1, 2020
6ddd0cf
add 160ki
xvjiarui Jun 1, 2020
b548028
fixed psa mask size
xvjiarui Jun 2, 2020
bdf8dda
fixed psa mask
xvjiarui Jun 2, 2020
612bcf9
fixed psa resize
xvjiarui Jun 2, 2020
9783171
add hrnet 160ki cityscapes
xvjiarui Jun 2, 2020
d0f822a
add ocr 160ki
xvjiarui Jun 2, 2020
a822ac5
add deeplabv3(+) ade
xvjiarui Jun 2, 2020
96ff8ee
Pat adapt
sunnyxiaohu Jun 3, 2020
83d4e33
Merge branch 'pat_adapt' into 'dev'
Jun 3, 2020
e96a00d
add pre-release config
xvjiarui Jun 3, 2020
074c5a3
fixed test
xvjiarui Jun 3, 2020
516185f
adapt dataloader build
sunnyxiaohu Jun 4, 2020
7a96f81
fix yapf
sunnyxiaohu Jun 4, 2020
cee9004
fixed psa align corners
xvjiarui Jun 5, 2020
a413e23
add voc
xvjiarui Jun 8, 2020
bb9d0b4
fixed ocr config
xvjiarui Jun 8, 2020
e848399
fixed 160k
xvjiarui Jun 8, 2020
14941a1
fixed ocr ade classes
xvjiarui Jun 8, 2020
0510a65
add ade for all
xvjiarui Jun 10, 2020
46c6e19
use mmcv parrots_wrapper
sunnyxiaohu Jun 11, 2020
97b3562
fix lint
sunnyxiaohu Jun 11, 2020
cfe3b59
Merge branch 'dev' into pool_dataloader
sunnyxiaohu Jun 11, 2020
872233e
Apply suggestion to mmseg/datasets/builder.py
sunnyxiaohu Jun 11, 2020
e048562
fix commit
sunnyxiaohu Jun 11, 2020
5ce7472
Merge branch 'pool_dataloader' of gitlab.sz.sensetime.com:open-mmlab/…
sunnyxiaohu Jun 11, 2020
b75d686
Merge branch 'pool_dataloader' into 'dev'
sunnyxiaohu Jun 11, 2020
6543b97
fixed table generate
xvjiarui Jun 11, 2020
e557936
Vis
Jun 12, 2020
4451aad
Merge branch 'vis' into 'dev'
Jun 12, 2020
f877af5
merged dev
xvjiarui Jun 12, 2020
95271c8
add more test
xvjiarui Jun 12, 2020
c235190
add aug for ade voc12
xvjiarui Jun 12, 2020
9ca3907
migrate to mmcv iter runner
xvjiarui Jun 12, 2020
0948963
remove unused utils
xvjiarui Jun 12, 2020
f30a608
add head optimizer constructor
xvjiarui Jun 12, 2020
98e73ba
fixed import error
xvjiarui Jun 12, 2020
8d3e314
change to TTA
xvjiarui Jun 13, 2020
8ab7f72
remove legacy configs
xvjiarui Jun 13, 2020
3656166
remove optimizer
xvjiarui Jun 13, 2020
4d21c7b
remove unused backbone
xvjiarui Jun 13, 2020
ca095e4
remove unused utils
xvjiarui Jun 13, 2020
344b9b0
speed up pipeline, align with mmdet
xvjiarui Jun 13, 2020
8c0cd74
add cross entropy test
xvjiarui Jun 13, 2020
08df6c6
add loss and dataset test
xvjiarui Jun 13, 2020
5fcc7ea
fixed test
xvjiarui Jun 13, 2020
d3ccbf1
test load anno
xvjiarui Jun 13, 2020
7d3f0c4
add mIoU test
xvjiarui Jun 13, 2020
2699348
change deeplabv3+ -> depthwise sep conv module
xvjiarui Jun 13, 2020
a96915e
remove 20ki and 60ki configs
xvjiarui Jun 13, 2020
43bbfcb
add voc 80ki
xvjiarui Jun 13, 2020
6838ff4
refactor and add test
xvjiarui Jun 13, 2020
2d18662
fixed eval hook
xvjiarui Jun 13, 2020
1cd613f
add docs and docstring
xvjiarui Jun 14, 2020
190e7e9
update docker
xvjiarui Jun 14, 2020
9451014
merge master
xvjiarui Jun 14, 2020
25ab842
update doc
xvjiarui Jun 14, 2020
375ecb1
add github action, add some doc
xvjiarui Jun 15, 2020
83c01fb
fixed action
xvjiarui Jun 15, 2020
69fb974
Fixed test
xvjiarui Jun 16, 2020
2943153
add voc 20k
xvjiarui Jun 16, 2020
ca9c71f
update doc, and model zoo
xvjiarui Jun 18, 2020
b535021
update .dev and train
xvjiarui Jun 18, 2020
e1b18bc
all model_zoo.md
xvjiarui Jun 18, 2020
91ae431
rename to ADE20K
xvjiarui Jun 18, 2020
624ff36
rename models
xvjiarui Jun 18, 2020
87f4915
update docs
xvjiarui Jun 18, 2020
7d82cd2
add -d8
xvjiarui Jun 18, 2020
c2433ab
add palette, update model zoo
xvjiarui Jun 18, 2020
c54bff5
remove dist sampler
xvjiarui Jun 19, 2020
31c8df0
update model zoo
xvjiarui Jun 19, 2020
f4df225
remove optimizer
xvjiarui Jun 19, 2020
6a77939
remove unused files
xvjiarui Jun 19, 2020
5bae0ff
remove pascal context
xvjiarui Jun 19, 2020
0683f04
change to 8 gpus
xvjiarui Jun 19, 2020
9adc73f
update performance and link
xvjiarui Jun 20, 2020
4770402
remove deprecate code, save memory of aug test
xvjiarui Jun 20, 2020
c55be4b
update docs
xvjiarui Jun 20, 2020
b409f6d
finalize modelzoo
xvjiarui Jun 20, 2020
c73e9b3
back to 4 gpus
xvjiarui Jun 20, 2020
55b5163
benchmard speed finished
xvjiarui Jun 20, 2020
d966259
fixed concate PALETTE
xvjiarui Jun 20, 2020
ce25821
update memory
xvjiarui Jun 20, 2020
23a365f
use backend for imdecode
xvjiarui Jun 20, 2020
4da7ffd
minor fix
xvjiarui Jun 21, 2020
4573b66
add encnet
xvjiarui Jun 21, 2020
15298c1
add encnet
xvjiarui Jun 22, 2020
ae2295c
fixed cityscapes eval
xvjiarui Jun 22, 2020
28a8873
fixed test
xvjiarui Jun 22, 2020
55c083c
refactor decode
xvjiarui Jun 23, 2020
bdbb444
add sem_fpn, point_rend
xvjiarui Jun 23, 2020
1f9adf2
fixed test
xvjiarui Jun 24, 2020
8c39082
Merge branch 'pre-release' into point_rend
xvjiarui Jun 25, 2020
55cb9e6
add neck
xvjiarui Jun 25, 2020
ceed30f
remove fpn dropout
xvjiarui Jun 28, 2020
f2d129d
remove fpn dropout
xvjiarui Jun 28, 2020
b25007a
add missing folder
xvjiarui Jun 28, 2020
81f4c43
fixed inplace
xvjiarui Jun 28, 2020
cd58de3
add norm and drpout
xvjiarui Jun 29, 2020
4d5f8ec
fixed dropout
xvjiarui Jun 29, 2020
659ddb2
fixed point rend
xvjiarui Jun 29, 2020
a04554b
set norm_decay_mult 0
xvjiarui Jul 1, 2020
a11dd19
merged with mmcv
xvjiarui Jul 2, 2020
b6f5cd2
merged prerelease
xvjiarui Jul 2, 2020
bc96b51
Merge branch 'encnet' into 'pre-release'
Jul 2, 2020
9dd56f3
add encnet model zoo
xvjiarui Jul 3, 2020
3415ff0
merged pre-release
xvjiarui Jul 3, 2020
8b0e523
update ci
xvjiarui Jul 3, 2020
e4af8ff
Merge branch 'point_rend' into 'pre-release'
Jul 3, 2020
d95bca2
remove point rend configs
xvjiarui Jul 3, 2020
206e3a1
fixed mmcv ci
xvjiarui Jul 3, 2020
e8f5162
fixed syncbn cpu
xvjiarui Jul 3, 2020
8b8b637
less test config
xvjiarui Jul 3, 2020
e292de9
fixed test forward
xvjiarui Jul 3, 2020
207c382
remove test lint
xvjiarui Jul 3, 2020
5d9885e
reformat code
xvjiarui Jul 5, 2020
a242364
restrcutre unit test
xvjiarui Jul 5, 2020
8bb82dd
precommit hook
xvjiarui Jul 5, 2020
29f4257
add data
xvjiarui Jul 5, 2020
ebb7ca8
add more doc string
xvjiarui Jul 6, 2020
7bad7d7
fixed ci, delete unused module
xvjiarui Jul 6, 2020
f231096
update readme
xvjiarui Jul 7, 2020
11907d8
update install guide
xvjiarui Jul 7, 2020
d6a6bcd
reduce size
xvjiarui Jul 7, 2020
d52a863
update ack
xvjiarui Jul 7, 2020
125 changes: 125 additions & 0 deletions .dev/clean_models.py
@@ -0,0 +1,125 @@
import argparse
import glob
import json
import os
import os.path as osp

import mmcv

# build schedule look-up table to automatically find the final model
SCHEDULES_LUT = {
    '20ki': 20000,
    '40ki': 40000,
    '60ki': 60000,
    '80ki': 80000,
    '160ki': 160000
}
RESULTS_LUT = ['mIoU', 'mAcc', 'aAcc']


def get_final_iter(config):
    iter_num = SCHEDULES_LUT[config.split('_')[-2]]
    return iter_num


def get_final_results(log_json_path, iter_num):
    result_dict = dict()
    with open(log_json_path, 'r') as f:
        for line in f.readlines():
            log_line = json.loads(line)
            if 'mode' not in log_line.keys():
                continue

            if log_line['mode'] == 'train' and log_line['iter'] == iter_num:
                result_dict['memory'] = log_line['memory']

            if log_line['iter'] == iter_num:
                result_dict.update({
                    key: log_line[key]
                    for key in RESULTS_LUT if key in log_line
                })
    return result_dict


def parse_args():
    parser = argparse.ArgumentParser(description='Gather benchmarked models')
    parser.add_argument(
        'root',
        type=str,
        help='root path of benchmarked models to be gathered')
    parser.add_argument(
        'config',
        type=str,
        help='root path of benchmarked configs to be gathered')

    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    models_root = args.root
    config_name = args.config

    # find all models in the root directory to be gathered
    raw_configs = list(mmcv.scandir(config_name, '.py', recursive=True))

    # filter configs that is not trained in the experiments dir
    used_configs = []
    for raw_config in raw_configs:
        work_dir = osp.splitext(osp.basename(raw_config))[0]
        if osp.exists(osp.join(models_root, work_dir)):
            used_configs.append(work_dir)
    print(f'Find {len(used_configs)} models to be gathered')

    # find final_ckpt and log file for trained each config
    # and parse the best performance
    model_infos = []
    for used_config in used_configs:
        exp_dir = osp.join(models_root, used_config)
        # check whether the exps is finished
        final_iter = get_final_iter(used_config)
        final_model = 'iter_{}.pth'.format(final_iter)
        model_path = osp.join(exp_dir, final_model)

        # skip if the model is still training
        if not osp.exists(model_path):
            print(f'{used_config} not finished yet')
            continue

        # get logs
        log_json_path = glob.glob(osp.join(exp_dir, '*.log.json'))[0]
        log_txt_path = glob.glob(osp.join(exp_dir, '*.log'))[0]
        model_performance = get_final_results(log_json_path, final_iter)

        if model_performance is None:
            print(f'{used_config} does not have performance')
            continue

        model_time = osp.split(log_txt_path)[-1].split('.')[0]
        model_infos.append(
            dict(
                config=used_config,
                results=model_performance,
                iters=final_iter,
                model_time=model_time,
                log_json_path=osp.split(log_json_path)[-1]))

    # publish model for each checkpoint
    for model in model_infos:

        model_name = osp.split(model['config'])[-1].split('.')[0]

        model_name += '_' + model['model_time']
        for checkpoints in mmcv.scandir(
                osp.join(models_root, model['config']), suffix='.pth'):
            if checkpoints.endswith(f"iter_{model['iters']}.pth"
                                    ) or checkpoints.endswith('latest.pth'):
                continue
            print('removing {}'.format(
                osp.join(models_root, model['config'], checkpoints)))
            os.remove(osp.join(models_root, model['config'], checkpoints))


if __name__ == '__main__':
    main()
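As a quick illustration (not part of the PR diff), the snippet below shows how get_final_iter() above turns a work-directory name into the final checkpoint that clean_models.py keeps. It assumes it runs in the same module as the script, and the config name is a hypothetical example of a name whose second-to-last underscore-separated token is a schedule key from SCHEDULES_LUT.

# Hypothetical work-dir name; only the '40ki' schedule token matters here.
name = 'fcn_r50-d8_512x1024_40ki_cityscapes'
final_iter = get_final_iter(name)          # SCHEDULES_LUT['40ki'] -> 40000
print('iter_{}.pth'.format(final_iter))    # 'iter_40000.pth' is the checkpoint kept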
197 changes: 197 additions & 0 deletions .dev/gather_models.py
@@ -0,0 +1,197 @@
import argparse
import glob
import json
import os
import os.path as osp
import shutil
import subprocess

import mmcv
import torch

# build schedule look-up table to automatically find the final model
RESULTS_LUT = ['mIoU', 'mAcc', 'aAcc']


def process_checkpoint(in_file, out_file):
    checkpoint = torch.load(in_file, map_location='cpu')
    # remove optimizer for smaller file size
    if 'optimizer' in checkpoint:
        del checkpoint['optimizer']
    # if it is necessary to remove some sensitive data in checkpoint['meta'],
    # add the code here.
    torch.save(checkpoint, out_file)
    sha = subprocess.check_output(['sha256sum', out_file]).decode()
    final_file = out_file.rstrip('.pth') + '-{}.pth'.format(sha[:8])
    subprocess.Popen(['mv', out_file, final_file])
    return final_file


def get_final_iter(config):
    iter_num = config.split('_')[-2]
    assert iter_num.endswith('k')
    return int(iter_num[:-1]) * 1000


def get_final_results(log_json_path, iter_num):
    result_dict = dict()
    with open(log_json_path, 'r') as f:
        for line in f.readlines():
            log_line = json.loads(line)
            if 'mode' not in log_line.keys():
                continue

            if log_line['mode'] == 'train' and log_line['iter'] == iter_num:
                result_dict['memory'] = log_line['memory']

            if log_line['iter'] == iter_num:
                result_dict.update({
                    key: log_line[key]
                    for key in RESULTS_LUT if key in log_line
                })
    return result_dict


def parse_args():
    parser = argparse.ArgumentParser(description='Gather benchmarked models')
    parser.add_argument(
        'root',
        type=str,
        help='root path of benchmarked models to be gathered')
    parser.add_argument(
        'config',
        type=str,
        help='root path of benchmarked configs to be gathered')
    parser.add_argument(
        'out_dir',
        type=str,
        help='output path of gathered models to be stored')
    parser.add_argument('out_file', type=str, help='the output json file name')
    parser.add_argument(
        '--filter', type=str, nargs='+', default=[], help='config filter')
    parser.add_argument(
        '--all', action='store_true', help='whether include .py and .log')

    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    models_root = args.root
    models_out = args.out_dir
    config_name = args.config
    mmcv.mkdir_or_exist(models_out)

    # find all models in the root directory to be gathered
    raw_configs = list(mmcv.scandir(config_name, '.py', recursive=True))

    # filter configs that is not trained in the experiments dir
    used_configs = []
    for raw_config in raw_configs:
        work_dir = osp.splitext(osp.basename(raw_config))[0]
        if osp.exists(osp.join(models_root, work_dir)):
            used_configs.append((work_dir, raw_config))
    print(f'Find {len(used_configs)} models to be gathered')

    # find final_ckpt and log file for trained each config
    # and parse the best performance
    model_infos = []
    for used_config, raw_config in used_configs:
        bypass = True
        for p in args.filter:
            if p in used_config:
                bypass = False
                break
        if bypass:
            continue
        exp_dir = osp.join(models_root, used_config)
        # check whether the exps is finished
        final_iter = get_final_iter(used_config)
        final_model = 'iter_{}.pth'.format(final_iter)
        model_path = osp.join(exp_dir, final_model)

        # skip if the model is still training
        if not osp.exists(model_path):
            print(f'{used_config} train not finished yet')
            continue

        # get logs
        log_json_paths = glob.glob(osp.join(exp_dir, '*.log.json'))
        log_json_path = log_json_paths[0]
        model_performance = None
        for idx, _log_json_path in enumerate(log_json_paths):
            model_performance = get_final_results(_log_json_path, final_iter)
            if model_performance is not None:
                log_json_path = _log_json_path
                break

        if model_performance is None:
            print(f'{used_config} model_performance is None')
            continue

        model_time = osp.split(log_json_path)[-1].split('.')[0]
        model_infos.append(
            dict(
                config=used_config,
                raw_config=raw_config,
                results=model_performance,
                iters=final_iter,
                model_time=model_time,
                log_json_path=osp.split(log_json_path)[-1]))

    # publish model for each checkpoint
    publish_model_infos = []
    for model in model_infos:
        model_publish_dir = osp.join(models_out,
                                     model['raw_config'].rstrip('.py'))
        model_name = osp.split(model['config'])[-1].split('.')[0]

        publish_model_path = osp.join(model_publish_dir,
                                      model_name + '_' + model['model_time'])
        trained_model_path = osp.join(models_root, model['config'],
                                      'iter_{}.pth'.format(model['iters']))
        if osp.exists(model_publish_dir):
            for file in os.listdir(model_publish_dir):
                if file.endswith('.pth'):
                    print(f'model {file} found')
                    model['model_path'] = osp.abspath(
                        osp.join(model_publish_dir, file))
                    break
            if 'model_path' not in model:
                print(f'dir {model_publish_dir} exists, no model found')

        else:
            mmcv.mkdir_or_exist(model_publish_dir)

            # convert model
            final_model_path = process_checkpoint(trained_model_path,
                                                  publish_model_path)
            model['model_path'] = final_model_path

        new_json_path = f'{model_name}-{model["log_json_path"]}'
        # copy log
        shutil.copy(
            osp.join(models_root, model['config'], model['log_json_path']),
            osp.join(model_publish_dir, new_json_path))
        if args.all:
            new_txt_path = new_json_path.rstrip('.json')
            shutil.copy(
                osp.join(models_root, model['config'],
                         model['log_json_path'].rstrip('.json')),
                osp.join(model_publish_dir, new_txt_path))

        if args.all:
            # copy config to guarantee reproducibility
            raw_config = osp.join(config_name, model['raw_config'])
            mmcv.Config.fromfile(raw_config).dump(
                osp.join(model_publish_dir, osp.basename(raw_config)))

        publish_model_infos.append(model)

    models = dict(models=publish_model_infos)
    mmcv.dump(models, osp.join(models_out, args.out_file))


if __name__ == '__main__':
    main()
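For reference, here is a minimal, self-contained sketch (not part of the PR) of what get_final_results() above extracts from an mmcv-style .log.json file; the log entries and metric values are invented for illustration, and the call assumes the sketch runs in the same module as the function.

import json
import tempfile

# Two fabricated log entries at the final iteration: a train record carrying
# memory usage and a val record carrying the metrics listed in RESULTS_LUT.
entries = [
    {'mode': 'train', 'iter': 40000, 'memory': 4000},
    {'mode': 'val', 'iter': 40000, 'mIoU': 0.77, 'mAcc': 0.85, 'aAcc': 0.96},
]
with tempfile.NamedTemporaryFile('w', suffix='.log.json', delete=False) as f:
    for entry in entries:
        f.write(json.dumps(entry) + '\n')

print(get_final_results(f.name, 40000))
# -> {'memory': 4000, 'mIoU': 0.77, 'mAcc': 0.85, 'aAcc': 0.96}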