In the video tracking demo, the paper mentions that the BERT-style transformer model (pose_transformer_v2) enables future prediction and amodal completion of missing detections within the same framework.
However, in the PHALP.py script, the output of pose_transformer_v2 is deleted right after the model runs (around line 260 of PHALP.py), and I can't find its output used anywhere else.
Where exactly does the code use pose_transformer_v2? Is it involved in the rendering process?
The pose transformer is used to predict future poses for each tracklet (and compare them with the detected poses when doing identity tracking). These future poses are not visualized. Currently, we only visualize the single-frame estimates from HMR2.0.
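For intuition, the association step can be thought of as follows: each tracklet's predicted future pose is compared against the poses of new detections, and the resulting costs drive identity assignment. Below is a minimal, hedged sketch of that idea only; the function and variable names are illustrative placeholders, not the actual PHALP API, and the real tracker combines pose with appearance and location cues.

```python
# Illustrative sketch of pose-based association (NOT the actual PHALP implementation).
# `tracklet_predictions` and `detections` are hypothetical placeholders: each entry is a
# flattened pose vector (e.g. a predicted future pose for a tracklet, or a detected pose).
import numpy as np
from scipy.optimize import linear_sum_assignment


def pose_distance(predicted_pose: np.ndarray, detected_pose: np.ndarray) -> float:
    """L2 distance between a tracklet's predicted pose and a detection's pose."""
    return float(np.linalg.norm(predicted_pose - detected_pose))


def associate(tracklet_predictions, detections):
    """Match detections to tracklets by minimizing total pose distance (Hungarian assignment)."""
    cost = np.array([[pose_distance(p, d) for d in detections]
                     for p in tracklet_predictions])
    track_idx, det_idx = linear_sum_assignment(cost)
    return list(zip(track_idx.tolist(), det_idx.tolist()))


# Toy example: two tracklets, two detections, 10-D pose vectors.
preds = [np.zeros(10), np.ones(10)]
dets = [np.full(10, 0.9), np.full(10, 0.1)]
print(associate(preds, dets))  # -> [(0, 1), (1, 0)]
```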
Has there been any work evaluating the performance of this pose_transformer_v2 (BERT-style transformer)?
I have looked into the LART paper, but its transformer model appears to differ from the one in 4D-Humans.