[Feature] New Codec for IPR, a wrapper for multi task losses #1628

Merged 59 commits on Sep 13, 2022.

Commits (59)
1d24f6f  add ipr (Tau-J, Sep 2, 2022)
8f41181  refine ipr, dsnt implementation (Tau-J, Sep 2, 2022)
f28f3dc  fix bug (Tau-J, Sep 2, 2022)
808bff9  fix simcc bug (Tau-J, Sep 2, 2022)
1f5cd5f  add new hook (Tau-J, Sep 5, 2022)
7175a76  add lambda_t for ipr (Tau-J, Sep 5, 2022)
6fb75dd  rm soft_argmax_head (Tau-J, Sep 5, 2022)
8a7b4f1  fix bug in config (Tau-J, Sep 5, 2022)
2b5a163  fix bug in ipr config (Tau-J, Sep 5, 2022)
6d69777  fix bug in simcc_head (Tau-J, Sep 5, 2022)
8696e5e  fix bug in ipr config (Tau-J, Sep 5, 2022)
9732b62  modify config (Tau-J, Sep 5, 2022)
a33d052  fix docstring (Tau-J, Sep 5, 2022)
92cf53e  fix dsnt loss bug (Tau-J, Sep 6, 2022)
a90fd3a  fix docstring (Tau-J, Sep 6, 2022)
53c7a4e  remove heatmaps generation in DSNTLoss (Tau-J, Sep 6, 2022)
2f4ead5  add debiased-ipr config (Tau-J, Sep 6, 2022)
bf16fe6  new dsnt (Tau-J, Sep 7, 2022)
1b55d3b  fix log() (Tau-J, Sep 7, 2022)
155bc70  reorganize configs (Tau-J, Sep 7, 2022)
5c8c56e  remove hook (Tau-J, Sep 7, 2022)
2ea5597  Merge branch 'tau/ipr' of https://github.com/Tau-J/mmpose into tau/ipr (Tau-J, Sep 7, 2022)
f8122aa  remove DSNTLoss (Tau-J, Sep 8, 2022)
62c0b04  add docstring (Tau-J, Sep 8, 2022)
57e6ded  add model zoo doc (Tau-J, Sep 8, 2022)
94bd470  fix debias ipr (Tau-J, Sep 8, 2022)
255929e  minor changes (Tau-J, Sep 8, 2022)
b1e6ac0  add model zoo doc (Tau-J, Sep 8, 2022)
aa66b6f  load weights into deconv layers (Tau-J, Sep 8, 2022)
d163fdb  fix bug in simcc standard (Tau-J, Sep 9, 2022)
31d9c76  fix ipr load (Tau-J, Sep 9, 2022)
4b97b48  rename loss_wrappers (Tau-J, Sep 9, 2022)
24bd295  Merge branch 'dev-1.x' into tau/ipr (Tau-J, Sep 9, 2022)
976e799  remove mesh_loss (Tau-J, Sep 9, 2022)
b41bc33  Merge branch 'tau/ipr' of https://github.com/Tau-J/mmpose into tau/ipr (Tau-J, Sep 9, 2022)
60316c6  support epoch-specific factors in dsnt head (Tau-J, Sep 9, 2022)
ccf8124  add debias, beta args to dsnt_head (Tau-J, Sep 10, 2022)
7972311  update ipr, dsnt (Tau-J, Sep 13, 2022)
0b478f8  refine docstring (Tau-J, Sep 13, 2022)
5c8af31  fix config (Tau-J, Sep 13, 2022)
bd9bece  update unittest (Tau-J, Sep 13, 2022)
7132e07  fix lint (Tau-J, Sep 13, 2022)
162b96d  rm debug code (Tau-J, Sep 13, 2022)
d519bf1  Update configs/body_2d_keypoint/integral_regression/coco/resnet_ipr_c… (Tau-J, Sep 13, 2022)
bcca76b  add alg doc (Tau-J, Sep 13, 2022)
ede9a9c  add simcc alg doc (Tau-J, Sep 13, 2022)
e307842  add simcc alg doc (Tau-J, Sep 13, 2022)
4b2592f  add simcc alg doc (Tau-J, Sep 13, 2022)
3b6fe99  fix unittest (ly015, Sep 13, 2022)
6a08dea  add simcc alg doc (Tau-J, Sep 13, 2022)
187df02  Merge branch 'tau/ipr' of https://github.com/Tau-J/mmpose into tau/ipr (Tau-J, Sep 13, 2022)
70b7b6b  remove packed transformed_keypoints (Tau-J, Sep 13, 2022)
a4620ee  rm pack_transformed in simcc config (Tau-J, Sep 13, 2022)
dac8fb7  update simcc config (Tau-J, Sep 13, 2022)
ae8d158  fix index out-of-range risk (ly015, Sep 13, 2022)
665188d  update formatting (ly015, Sep 13, 2022)
f32353e  update model zoo scripts (ly015, Sep 13, 2022)
b5785b3  fix model zoo collection (ly015, Sep 13, 2022)
d117a9c  fix name (Tau-J, Sep 13, 2022)
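The PR title calls this "a wrapper for multi task losses": one head supervised by several losses at once (here, a coordinate loss plus a heatmap loss). A minimal sketch of that wrapper idea, not the exact mmpose API (the constructor and forward signatures below are illustrative):

```python
import torch
import torch.nn as nn


class MultipleLossWrapper(nn.Module):
    """Apply one loss module per output branch and sum the results.

    `losses` is a list of loss modules; `inputs` and `targets` are
    matching lists with one entry per loss. Sketch only: the real
    mmpose wrapper also threads through keypoint target weights.
    """

    def __init__(self, losses):
        super().__init__()
        self.losses = nn.ModuleList(losses)

    def forward(self, inputs, targets):
        total = torch.tensor(0.)
        for loss_fn, inp, tgt in zip(self.losses, inputs, targets):
            total = total + loss_fn(inp, tgt)
        return total


# usage: combine a regression loss and a heatmap-style loss
wrapper = MultipleLossWrapper([nn.MSELoss(), nn.L1Loss()])
a = torch.tensor([1.0, 2.0])
b = torch.tensor([2.0, 4.0])
total = wrapper([a, a], [b, b])  # MSE(a, b) + L1(a, b)
```

Summing rather than averaging keeps each term's weighting explicit; per-loss scale factors can be folded into the individual loss modules.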
@@ -0,0 +1,139 @@
_base_ = ['../../../_base_/default_runtime.py']

# runtime
train_cfg = dict(max_epochs=210, val_interval=10)

# optimizer
optim_wrapper = dict(optimizer=dict(
    type='Adam',
    lr=5e-4,
))

# learning policy
param_scheduler = [
    dict(
        type='LinearLR', begin=0, end=500, start_factor=0.001,
        by_epoch=False),  # warm-up
    dict(
        type='MultiStepLR',
        begin=0,
        end=210,
        milestones=[170, 200],
        gamma=0.1,
        by_epoch=True)
]

# automatically scaling LR based on the actual training batch size
auto_scale_lr = dict(base_batch_size=512)

# hooks
default_hooks = dict(checkpoint=dict(save_best='coco/AP', rule='greater'))

# codec settings
codec = dict(
    type='IntegralRegressionLabel',
    input_size=(256, 256),
    heatmap_size=(64, 64),
    sigma=2.0,
    normalize=True)

# model settings
model = dict(
    type='TopdownPoseEstimator',
    data_preprocessor=dict(
        type='PoseDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=dict(
        type='ResNet',
        depth=50,
    ),
    head=dict(
        type='DSNTHead',
        in_channels=2048,
        in_featuremap_size=(8, 8),
        num_joints=17,
        loss=dict(
            type='MultipleLossWrapper',
            losses=[
                dict(type='SmoothL1Loss', use_target_weight=True),
                dict(type='KeypointMSELoss', use_target_weight=True)
            ]),
        decoder=codec),
    test_cfg=dict(
        flip_test=True,
        shift_coords=True,
        shift_heatmap=True,
    ),
    init_cfg=dict(
        type='Pretrained',
        checkpoint='https://download.openmmlab.com/mmpose/'
        'pretrain_models/td-hm_res50_8xb64-210e_coco-256x192.pth'))

# base dataset settings
dataset_type = 'CocoDataset'
data_mode = 'topdown'
data_root = 'data/coco/'

file_client_args = dict(backend='disk')

# pipelines
train_pipeline = [
    dict(type='LoadImage', file_client_args=file_client_args),
    dict(type='GetBBoxCenterScale'),
    dict(type='RandomFlip', direction='horizontal'),
    dict(type='RandomHalfBody'),
    dict(type='RandomBBoxTransform'),
    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(
        type='GenerateTarget',
        target_type='heatmap+keypoint_label',
        encoder=codec),
    dict(type='PackPoseInputs')
]
test_pipeline = [
    dict(type='LoadImage', file_client_args=file_client_args),
    dict(type='GetBBoxCenterScale'),
    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(type='PackPoseInputs')
]

# data loaders
train_dataloader = dict(
    batch_size=64,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/person_keypoints_train2017.json',
        data_prefix=dict(img='train2017/'),
        pipeline=train_pipeline,
    ))
val_dataloader = dict(
    batch_size=32,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/person_keypoints_val2017.json',
        bbox_file=f'{data_root}person_detection_results/'
        'COCO_val2017_detections_AP_H_56_person.json',
        data_prefix=dict(img='val2017/'),
        test_mode=True,
        pipeline=test_pipeline,
    ))
test_dataloader = val_dataloader

# evaluators
val_evaluator = dict(
    type='CocoMetric',
    ann_file=f'{data_root}annotations/person_keypoints_val2017.json')
test_evaluator = val_evaluator
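The DSNTHead in the config above regresses coordinates by taking a differentiable expectation over normalized heatmaps (the soft-argmax at the core of integral pose regression). A minimal sketch of that operation, with an illustrative function name and a `beta` temperature (the real head adds flip testing, decoding, and loss plumbing):

```python
import torch


def soft_argmax(heatmaps, beta=1.0):
    """Differentiable argmax over 2D heatmaps.

    heatmaps: (N, K, H, W) raw scores. Returns (N, K, 2) xy
    coordinates in pixel units, computed as the expectation of the
    pixel grid under a softmax distribution per keypoint.
    """
    n, k, h, w = heatmaps.shape
    # Normalize each heatmap into a probability distribution
    probs = torch.softmax(beta * heatmaps.reshape(n, k, -1), dim=-1)
    probs = probs.reshape(n, k, h, w)
    xs = torch.arange(w, dtype=probs.dtype)
    ys = torch.arange(h, dtype=probs.dtype)
    # Marginalize rows/columns, then take the expected coordinate
    x = (probs.sum(dim=2) * xs).sum(dim=-1)  # E[x]
    y = (probs.sum(dim=3) * ys).sum(dim=-1)  # E[y]
    return torch.stack([x, y], dim=-1)
```

Because the output is an expectation, gradients flow to every heatmap pixel, which is what lets a coordinate loss (SmoothL1Loss here) train a heatmap-producing backbone end to end.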
@@ -0,0 +1,138 @@
_base_ = ['../../../_base_/default_runtime.py']

# runtime
train_cfg = dict(max_epochs=210, val_interval=10)

# optimizer
optim_wrapper = dict(optimizer=dict(
    type='Adam',
    lr=5e-4,
))

# learning policy
param_scheduler = [
    dict(
        type='LinearLR', begin=0, end=500, start_factor=0.001,
        by_epoch=False),  # warm-up
    dict(
        type='MultiStepLR',
        begin=0,
        end=210,
        milestones=[170, 200],
        gamma=0.1,
        by_epoch=True)
]

# automatically scaling LR based on the actual training batch size
auto_scale_lr = dict(base_batch_size=512)

# hooks
default_hooks = dict(checkpoint=dict(save_best='coco/AP', rule='greater'))

# codec settings
codec = dict(
    type='IntegralRegressionLabel',
    input_size=(256, 256),
    heatmap_size=(64, 64),
    sigma=2.0,
    normalize=True)

# model settings
model = dict(
    type='TopdownPoseEstimator',
    data_preprocessor=dict(
        type='PoseDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=dict(type='ResNet', depth=50),
    head=dict(
        type='DSNTHead',
        in_channels=2048,
        in_featuremap_size=(8, 8),
        num_joints=17,
        debias=True,
        beta=10.,
        loss=dict(
            type='MultipleLossWrapper',
            losses=[
                dict(type='SmoothL1Loss', use_target_weight=True),
                dict(type='JSDiscretLoss', use_target_weight=True)
            ]),
        decoder=codec),
    test_cfg=dict(
        flip_test=True,
        shift_coords=True,
        shift_heatmap=True,
    ),
    init_cfg=dict(
        type='Pretrained',
        checkpoint='https://download.openmmlab.com/mmpose/'
        'pretrain_models/td-hm_res50_8xb64-210e_coco-256x192.pth'))

# base dataset settings
dataset_type = 'CocoDataset'
data_mode = 'topdown'
data_root = 'data/coco/'

file_client_args = dict(backend='disk')

# pipelines
train_pipeline = [
    dict(type='LoadImage', file_client_args=file_client_args),
    dict(type='GetBBoxCenterScale'),
    dict(type='RandomFlip', direction='horizontal'),
    dict(type='RandomHalfBody'),
    dict(type='RandomBBoxTransform'),
    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(
        type='GenerateTarget',
        target_type='heatmap+keypoint_label',
        encoder=codec),
    dict(type='PackPoseInputs')
]
test_pipeline = [
    dict(type='LoadImage', file_client_args=file_client_args),
    dict(type='GetBBoxCenterScale'),
    dict(type='TopdownAffine', input_size=codec['input_size']),
    dict(type='PackPoseInputs')
]

# data loaders
train_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/person_keypoints_train2017.json',
        data_prefix=dict(img='train2017/'),
        pipeline=train_pipeline,
    ))
val_dataloader = dict(
    batch_size=32,
    num_workers=2,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_mode=data_mode,
        ann_file='annotations/person_keypoints_val2017.json',
        bbox_file=f'{data_root}person_detection_results/'
        'COCO_val2017_detections_AP_H_56_person.json',
        data_prefix=dict(img='val2017/'),
        test_mode=True,
        pipeline=test_pipeline,
    ))
test_dataloader = val_dataloader

# evaluators
val_evaluator = dict(
    type='CocoMetric',
    ann_file=f'{data_root}annotations/person_keypoints_val2017.json')
test_evaluator = val_evaluator
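This debiased variant swaps KeypointMSELoss for JSDiscretLoss, which compares the predicted and target heatmaps as discrete probability distributions via Jensen-Shannon divergence. A minimal sketch of that divergence (function name and shapes illustrative; mmpose's loss additionally applies per-keypoint target weights):

```python
import torch


def js_discrete_loss(pred_hm, gt_hm, eps=1e-10):
    """Jensen-Shannon divergence between heatmaps, viewed as discrete
    distributions over pixels.

    Assumes inputs of shape (N, K, H, W) where each (H, W) map sums
    to 1 per keypoint. JS(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m) with
    m = (p + q) / 2; symmetric and bounded by log(2).
    """
    n, k = pred_hm.shape[:2]
    p = pred_hm.reshape(n, k, -1).clamp(min=eps)
    q = gt_hm.reshape(n, k, -1).clamp(min=eps)
    m = 0.5 * (p + q)
    kl_pm = (p * (p / m).log()).sum(dim=-1)
    kl_qm = (q * (q / m).log()).sum(dim=-1)
    return (0.5 * (kl_pm + kl_qm)).mean()
```

Unlike MSE on heatmap values, a divergence loss only constrains the shape of the normalized distribution, which matches how the soft-argmax consumes it.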