
CMU-trained Weights and Cross dataset evaluation #83

Samleo8 opened this issue Jun 15, 2020 · 5 comments

Samleo8 commented Jun 15, 2020

In your paper under Experiments > CMU Panoptic Dataset, you noted that

We also conducted experiments to demonstrate that the learnt model indeed generalizes to new setups. For that we applied a CMU-trained model to Human3.6M validation scenes. (...) To provide a quantitative measure of the generalizing ability, we have measured the MPJPE for the set of joints which seem to have the most similar semantics (namely, ’elbows’, ’wrists’ and ’knees’). The measured MPJPE is 36 mm for learnable triangulation, and 34 mm for the volumetric approach, which seems reasonable when compared to the results of the methods trained on Human3.6M (16-18 mm, depending on the triangulation method)

I am trying to replicate some of these results, but have not been successful. I understand that the numbers will differ when running the H36M-trained model on CMU validation (rather than the other way round, as you did in the paper), but as can be seen from my test results in issue #76, I get quite poor results. Granted, you did mention in your paper that the CMU-trained weights were the best because the model could learn from truncated views, but this suggests that the H36M-trained model handles occlusions fairly poorly.

It would be great if you could provide the weights for the CMU-trained model, as they would help validate the results in your paper and would also make the model more broadly usable. I believe this model would be great for auto-generating 3D ground-truth data for many of the community's datasets, but without the CMU-trained weights, many in the community cannot use it practically for these purposes.

Also, how did you make a proper comparison between the joints when the CMU and H36M skeletons have quite different joint definitions? Is there code for this?
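
For concreteness, here is a rough sketch of the kind of comparison I have in mind: computing MPJPE only over the joints with similar semantics ('elbows', 'wrists', 'knees') via name-based index maps. The joint indices below are just placeholders, not the actual H3.6M / CMU Panoptic orderings:

import numpy as np

# Placeholder joint-name -> index maps for the two skeletons
# (illustrative indices only, not the real dataset conventions).
H36M_IDX = {"l_elbow": 14, "r_elbow": 11, "l_wrist": 15,
            "r_wrist": 12, "l_knee": 5, "r_knee": 2}
CMU_IDX = {"l_elbow": 3, "r_elbow": 9, "l_wrist": 4,
           "r_wrist": 10, "l_knee": 7, "r_knee": 13}

def common_joint_mpjpe(pred_h36m, gt_cmu):
    """MPJPE (mm) over the joints shared by both skeletons.

    pred_h36m: (N, 17, 3) predictions in H3.6M joint order
    gt_cmu:    (N, 19, 3) ground truth in CMU Panoptic joint order
    """
    names = sorted(H36M_IDX)
    pred = pred_h36m[:, [H36M_IDX[n] for n in names]]
    gt = gt_cmu[:, [CMU_IDX[n] for n in names]]
    return np.linalg.norm(pred - gt, axis=-1).mean()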

Thank you!

karfly (Owner) commented Jun 17, 2020

Hi, @Samleo8!
You can find weights for the CMU-trained model here.


Samleo8 commented Jun 18, 2020

Thank you so much!

Samleo8 closed this as completed Jun 18, 2020

Samleo8 commented Jun 18, 2020

Hi, unfortunately I encountered this error when trying to perform evaluation:

RuntimeError: Error(s) in loading state_dict for VolumetricTriangulationNet:
        Unexpected key(s) in state_dict:
        "backbone.tri_confidences.features.0.weight", "backbone.tri_confidences.features.0.bias",
        "backbone.tri_confidences.features.1.weight", "backbone.tri_confidences.features.1.bias",
        "backbone.tri_confidences.features.1.running_mean", "backbone.tri_confidences.features.1.running_var",
        "backbone.tri_confidences.features.1.num_batches_tracked",
        "backbone.tri_confidences.features.4.weight", "backbone.tri_confidences.features.4.bias",
        "backbone.tri_confidences.features.5.weight", "backbone.tri_confidences.features.5.bias",
        "backbone.tri_confidences.features.5.running_mean", "backbone.tri_confidences.features.5.running_var",
        "backbone.tri_confidences.features.5.num_batches_tracked",
        "backbone.tri_confidences.head.0.weight", "backbone.tri_confidences.head.0.bias",
        "backbone.tri_confidences.head.2.weight", "backbone.tri_confidences.head.2.bias",
        "backbone.tri_confidences.head.4.weight", "backbone.tri_confidences.head.4.bias",
        "backbone.vol_confidences.features.0.weight", "backbone.vol_confidences.features.0.bias",
        "backbone.vol_confidences.features.1.weight", "backbone.vol_confidences.features.1.bias",
        "backbone.vol_confidences.features.1.running_mean", "backbone.vol_confidences.features.1.running_var",
        "backbone.vol_confidences.features.1.num_batches_tracked",
        "backbone.vol_confidences.features.4.weight", "backbone.vol_confidences.features.4.bias",
        "backbone.vol_confidences.features.5.weight", "backbone.vol_confidences.features.5.bias",
        "backbone.vol_confidences.features.5.running_mean", "backbone.vol_confidences.features.5.running_var",
        "backbone.vol_confidences.features.5.num_batches_tracked",
        "backbone.vol_confidences.head.0.weight", "backbone.vol_confidences.head.0.bias",
        "backbone.vol_confidences.head.2.weight", "backbone.vol_confidences.head.2.bias",
        "backbone.vol_confidences.head.4.weight", "backbone.vol_confidences.head.4.bias".

Problem temporarily resolved by passing strict=False to model.load_state_dict(state_dict, strict=False).
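
A slightly safer workaround than a blanket strict=False would be to drop only the unexpected confidence-head keys before loading. Rough sketch, assuming torch.load returns the state_dict directly (some checkpoints nest it under another key), the checkpoint path is a placeholder, and model is the already-constructed VolumetricTriangulationNet:

import torch

# Placeholder checkpoint path; adjust to wherever the CMU weights are saved.
state_dict = torch.load("weights_cmu.pth", map_location="cpu")

# Keep everything except the confidence heads reported in the error above,
# then load strictly so any other mismatch still raises.
filtered = {k: v for k, v in state_dict.items()
            if not k.startswith(("backbone.tri_confidences",
                                 "backbone.vol_confidences"))}
model.load_state_dict(filtered, strict=True)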

Samleo8 reopened this Jun 18, 2020

Samleo8 commented Jun 18, 2020

I'm not sure if it's intentional, but it seems that the pretrained weights output 17 keypoints, whereas the default CMU skeleton has 19 keypoints?

It seems that you are using pretrained weights with COCO keypoint mappings; how do you effectively evaluate against the CMU dataset then, given that the ground-truth keypoints are different?
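
One quick way to check what the checkpoint was actually trained to predict is to inspect the output-channel count of the final layer(s) in the state_dict. Rough sketch, where the checkpoint path is a placeholder and the "final" name filter is only a guess at the backbone's layer naming:

import torch

state_dict = torch.load("weights_cmu.pth", map_location="cpu")

# A 17-joint (COCO/H3.6M-style) model should show 17 output channels here;
# the full CMU Panoptic skeleton would show 19.
for name, tensor in state_dict.items():
    if "final" in name and name.endswith(".weight"):
        print(name, "->", tensor.shape[0], "output channels")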

@anas-zafar

@Samleo8 by any chance, do you still have the CMU weights available with you?
