
pose_transformer_v2 (BERT style transformer) use while running model on video #116

thribhuvanrapolu opened this issue May 12, 2024 · 3 comments

@thribhuvanrapolu

In the video tracking demo, the paper mentions that the BERT-style transformer model (pose_transformer_v2) enables future predictions and amodal completion of missing detections within the same framework.

However, in PHALP.py, after pose_transformer_v2 runs, its output is deleted at line 260, and I can't find the model's output/values used anywhere else.

Where exactly does the code utilize pose_transformer_v2? Is it involved in the rendering process?

@geopavlakos
Collaborator

The pose transformer is used to predict future poses for each tracklet (and compare them with the detected poses when doing identity tracking). These future poses are not visualized. Currently, we only visualize the single-frame estimates from HMR2.0.
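
For intuition, here is a minimal sketch of that association step: each tracklet's predicted future pose is matched against the current detections through a cost matrix. The names (`pose_distance`, `associate`) and the plain L2 cost are hypothetical illustrations, not the actual PHALP/4D-Humans code, which combines appearance, location, and pose cues.

```python
# Hypothetical sketch: using predicted future poses for identity association.
# None of these names come from the 4D-Humans codebase.
import numpy as np
from scipy.optimize import linear_sum_assignment


def pose_distance(pred_pose: np.ndarray, det_pose: np.ndarray) -> float:
    # Plain L2 distance between pose embeddings; PHALP's real cost also
    # mixes in appearance and location terms.
    return float(np.linalg.norm(pred_pose - det_pose))


def associate(predicted_poses, detected_poses):
    # Build a tracklets-by-detections cost matrix from the poses the
    # transformer predicted for the current frame, then solve the
    # assignment with the Hungarian algorithm.
    cost = np.array([[pose_distance(p, d) for d in detected_poses]
                     for p in predicted_poses])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]


# Toy usage: detections are noisy copies of the predictions, so each
# tracklet should be matched back to its own detection.
rng = np.random.default_rng(0)
preds = [rng.normal(size=128) for _ in range(3)]
dets = [p + rng.normal(scale=0.1, size=128) for p in preds]
print(associate(preds, dets))  # [(0, 0), (1, 1), (2, 2)]
```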

@thribhuvanrapolu
Author

Thanks for clarifying!

@thribhuvanrapolu
Author

Has there been any work done to evaluate the performance of this pose_transformer_v2 (BERT-style transformer)?
I have looked into the LART paper, but the transformer model there looks different from the one in 4D-Humans.
