1/ Now working:
We first train the model on the 16-keypoint real pose data, then freeze the front conv layers (maybe half or more), add an FC layer at the end, and finetune the model to learn a mapping to the unpaired keypoints (see the sketch after this section).
The advantage is that we fully use the datasets.
Something else that may be useful is a pretrained model, e.g. VGG or MobileNet.
The framework I plan to use, https://github.com/Daniil-Osokin/gccpm-look-into-person-cvpr19.pytorch, is actually
finetuned from an ImageNet-pretrained model, because the competition dataset only has 30,462 images for training.
Maybe we can train directly on our anime dataset? (over 10,000 images after data augmentation)
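
A minimal PyTorch sketch of the freeze-and-finetune idea above, using MobileNetV2 as a stand-in backbone; the cutoff point, the anime keypoint count, and the FC head are placeholder assumptions, not the project's actual network:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    # Stand-in backbone; the real network would be the GCCPM pose model.
    net = models.mobilenet_v2(pretrained=True)

    # Freeze roughly the first half of the conv feature extractor.
    feats = list(net.features)
    for layer in feats[: len(feats) // 2]:
        for p in layer.parameters():
            p.requires_grad = False

    # New FC head mapping to the unpaired (anime) keypoint set.
    N_ANIME_KPS = 25  # placeholder count, not from this document
    net.classifier = nn.Linear(net.last_channel, N_ANIME_KPS * 2)

    # Finetune only the parameters that are still trainable.
    optimizer = torch.optim.Adam(
        (p for p in net.parameters() if p.requires_grad), lr=1e-4)

Freezing keeps the low-level features learned from the larger real-pose/ImageNet data, while only the new head and the later layers adapt to the anime keypoints.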
2/ Research on MMD-related works.
OpenMMD: https://github.com/peterljq/OpenMMD
Chinese Tutorial: https://www.bilibili.com/read/cv3400259
3/ OpenMMD experiments.
Following the steps in the repo.
I use PMXEditor to generate the bone CSV file (a hedged parsing sketch follows this section).
[The basic anime pose estimation is OK, but something strange: models trained on different datasets perform well on different images.
I suspect this is because of the limited data, e.g. 7.png.]
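
A minimal sketch for pulling bone rows out of a PMXEditor CSV export. The encoding and the column indices are assumptions (PMXEditor export layouts vary by version); check the ';Bone,...' header comment line of the actual file before trusting the indices:

    import csv

    def load_bones(path):
        bones = []
        # PMXEditor exports are typically Shift-JIS encoded (assumption).
        with open(path, encoding="cp932", newline="") as f:
            for row in csv.reader(f):
                if not row or row[0] != "Bone":
                    continue  # skip header comments and non-bone sections
                name = row[1]                            # assumed name column
                pos = tuple(float(v) for v in row[5:8])  # assumed x, y, z columns
                bones.append((name, pos))
        return bones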
2/13: Data collection and processing of the PMX files is basically done.
75 models so far; will collect more when free, from:
-https://bowlroll.net/file/keyword/MMD >20190801
-http://mmd.xiaolindraw.com/
-https://www.deviantart.com/mmd-downloads-galore/gallery/39472353/models
Now 30,000+ frames in the dataset.
Next plan:
posewarp data format:
One data file represents one video, in .mat format.
Access pattern: data['data']['bbox'][0][0] and data['data']['X'][0][0].
'bbox' is the body box around the center point, shape (frame_num, 4); 'X' is the keypoints, shape (kps_num, coordinate, frame_num).
To write the same layout, set dtype = [('X', 'O'), ('bbox', 'O')] to get the two object fields (see the sketch below).
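
A minimal scipy sketch of reading one such file and packing our own frames into the same layout; the file names and the shapes are placeholders, only the field structure comes from the notes above:

    import numpy as np
    import scipy.io as sio

    # Read: one .mat file = one video.
    mat = sio.loadmat("video_0001.mat")   # placeholder name
    data = mat["data"]
    bbox = data["bbox"][0][0]             # (frame_num, 4) body boxes
    X = data["X"][0][0]                   # (kps_num, coordinate, frame_num)

    # Write: pack arrays into the same structured layout.
    out = np.zeros((1,), dtype=[("X", "O"), ("bbox", "O")])
    out[0]["X"] = X
    out[0]["bbox"] = bbox
    sio.savemat("converted_0001.mat", {"data": out})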
3/13: Test the posewarp source code, and request the ICCV 2019 'dance' paper. If possible, try 'dance'.
How to construct the dataset:
Findings:
1/ The stored coordinates of 上半身 (upper body) and 下半身 (lower body) never change; a rotation on any bone only moves its descendant bones.
2/ The center bone only changes its position, so translation does not change the pose.
3/ The upper body translates along with the groove and center bones; the lower body gets distorted because of IK.
Steps (an FK sketch of step 2 follows):
The center is always at (0,0,0). From the initial pose and the center->groove offset, compute the initial offset of every bone; after that, use groove as the candidate rotation origin.
1/ Apply the position channels: the upper body simply follows the offsets; I don't yet know how to compute the lower body.
2/ For the upper body we now have absolute coordinates. The chain is 上半身 -> 上半身2 -> 左肩/右肩/首,
then 首 -> 頭 (neck -> head) and 右肩 -> 右腕 -> 右ひじ -> 右手首 (right shoulder -> arm -> elbow -> wrist).
Pay attention to the initial bone directions.
3/ 右肩C and 右腕 sit at the same position, so the rotation inherited from 右肩 can be applied to either interchangeably; in the end what we need is 右腕.
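
A minimal forward-kinematics sketch of step 2, assuming per-frame bone rotations are already available as scipy Rotations and that each bone's rest offset from its parent is known; the offset values below are made-up illustrations, not measured from a model:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    # Rest-pose offset of each bone from its parent (illustrative values only).
    REST_OFFSET = {
        "上半身":  np.array([0.0, 12.0, 0.0]),   # from the groove-based origin
        "上半身2": np.array([0.0,  1.5, 0.0]),
        "右肩":    np.array([-0.8, 2.0, 0.0]),
        "右腕":    np.array([-1.2, 0.0, 0.0]),
        "右ひじ":  np.array([-2.5, 0.0, 0.0]),
        "右手首":  np.array([-2.2, 0.0, 0.0]),
    }
    CHAIN = ["上半身", "上半身2", "右肩", "右腕", "右ひじ", "右手首"]

    def fk_chain(local_rot, root_pos):
        """Absolute positions for one chain in one frame.
        local_rot: {bone_name: Rotation}; root_pos: groove origin + offsets.
        Per finding 1, each bone's rotation only moves its descendants."""
        pos = {}
        world_rot = R.identity()
        cur = np.asarray(root_pos, dtype=float)
        for bone in CHAIN:
            # Parent's accumulated rotation carries this bone's rest offset.
            cur = cur + world_rot.apply(REST_OFFSET[bone])
            pos[bone] = cur
            # Compose this bone's own rotation for everything below it.
            world_rot = world_rot * local_rot.get(bone, R.identity())
        return pos

The lower-body/IK chains are deliberately left out here, matching the open question in step 1.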