
Inference or visualization in the wild #9

Open

SeanLiu081 opened this issue Mar 11, 2022 · 3 comments

@SeanLiu081

Hi,
Thanks for open-sourcing such great research. I know the PROX recording data contains depth and mask ground truth, as well as camera parameters, but other in-the-wild videos (such as my own selfie video) don't have this information. Can your method still output a visualization?
Or do I need to use other pre-trained models (such as OpenPose, DeepLabV3) to get this information, and then run your optimization pipeline?

@sanweiliti
Owner

Hi,

Yes, you also need depth, a human mask, and keypoints to run the optimization pipeline, and you can use other open-source tools to obtain them, as you mentioned.
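For example, a per-frame human mask can be obtained with torchvision's pretrained DeepLabV3. This is just a minimal sketch, not part of the LEMO repo; the frame path is a placeholder and you would still need depth and keypoints (e.g. from OpenPose) separately:

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Placeholder path: any RGB frame from your video.
img = Image.open("frame_0001.png").convert("RGB")

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
x = preprocess(img).unsqueeze(0)  # [1, 3, H, W]

# Pretrained DeepLabV3 (Pascal VOC classes); class 15 is "person".
model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

with torch.no_grad():
    out = model(x)["out"][0]        # [21, H, W] per-class logits
labels = out.argmax(0)              # [H, W] per-pixel class ids
person_mask = (labels == 15).to(torch.uint8) * 255  # binary human mask
```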

@lucasjinreal

@sanweiliti Hi, may I ask whether the 3D pose points predicted by a 3D pose model such as VIBE or PARE can be used in any stage of LEMO?

@sanweiliti
Owner

@jinfagang
Hi, if you want to smooth the 3D joints from VIBE/PARE, you can create an optimization pipeline and insert the motion smoothness prior the same way we did here. Specifically, the optimization can include a 3D joint loss (to keep the optimized joints consistent with the VIBE/PARE outputs), a motion smoothness loss, and other regularizers (such as shape/pose priors).
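Below is a minimal sketch of that kind of optimization. It uses a simple finite-difference acceleration penalty as the smoothness term rather than LEMO's learned motion prior, and `vibe_joints`, the loss weights, and the iteration count are all assumptions that would need tuning on real data:

```python
import torch

# Hypothetical input: per-frame 3D joints from VIBE/PARE, shape [T, J, 3].
vibe_joints = torch.randn(100, 24, 3)   # placeholder for real predictions

# Free variable initialized from the network output.
joints = vibe_joints.clone().requires_grad_(True)
optimizer = torch.optim.Adam([joints], lr=0.01)

w_data, w_smooth = 1.0, 10.0             # assumed loss weights

for it in range(500):
    optimizer.zero_grad()
    # 3D joint loss: stay consistent with the VIBE/PARE predictions.
    loss_data = ((joints - vibe_joints) ** 2).mean()
    # Smoothness loss: penalize per-joint acceleration over time
    # (stand-in for the learned motion smoothness prior).
    accel = joints[2:] - 2 * joints[1:-1] + joints[:-2]
    loss_smooth = (accel ** 2).mean()
    loss = w_data * loss_data + w_smooth * loss_smooth
    loss.backward()
    optimizer.step()

smoothed_joints = joints.detach()
```

If you optimize SMPL/SMPL-X parameters instead of raw joints, the shape and pose priors mentioned above would be added as extra terms to the same objective.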
