Request for pretrained weights for depth-only model #5
Comments
Hi @DavidTu21, we have released the depth-only model; you can download it from the link. This BodyMAP model predicts both the SMPL model for 3D human pose and the 3D pressure map. For your task, it may be beneficial to train a BodyMAP model for 3D human pose only. To train a model that predicts only 3D human pose from the depth modality, you would need to modify the config file:
Hope this helps your work.
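The maintainer's config snippet did not survive the scrape. As a rough, hypothetical sketch of the kind of change involved (none of these keys are confirmed from the BodyMAP repo; check the released config files for the real option names), a depth-only, pose-only setup might look like:

```json
{
  "modality": ["depth"],
  "use_pressure": false,
  "predict_pmap": false,
  "pmap_loss_weight": 0.0
}
```

The idea is simply to disable the pressure input and the pressure-map head so that only the SMPL pose/shape branch is trained.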
Hi @Tandon-A,

Thank you so much for your detailed explanation and for uploading the depth-only model; I really appreciate it. Would I be able to run inference on my own depth image using this uploaded depth-only model? I saw in the save_inference.py code that

```
batch_mesh_pred, batch_pmap_pred, _, _ = model.infer(batch_depth_images, batch_pressure_images, batch_labels[:, 157:159])
```

Can I replace the batch_depth_images input to this model inference call with my depth image only? If so, are there any requirements on the input depth format? My current input is a depth map in .npy format.

And of course, I will also start training a BodyMAP model for 3D human pose only.

Kind regards,
Hi @Tandon-A,

Sorry for sending another comment; I think it is related to the previous request. I have finished training the depth-only model using your suggested config (thank you for that again!) and got the model weights up to 100 epochs. When I started inference with python save_inference.py --model_path xxx --opts_path xxx --save_path xxx, I ran into the issue below:
I suspect this is because I did not change anything in the inference script to take only depth as input. Do you have suggestions on how I could modify the inference code to use only depth as input? Thank you very much again for your valuable time and detailed suggestions!

Kind regards,
Hi @DavidTu21 ,
You can just set batch_pmap_pred to a zeros tensor before this line for your model:

```
batch_pmap_pred = torch.zeros(batch_mesh_pred['out_joint_pos'].shape[0], 6890).to(DEVICE)
```

This will set the pmap predictions to zeros.
You can certainly run inference on your depth image. Please convert your depth image to the format followed by SLP, and then process it similarly to what is done in the SLPDataset file.
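As an illustrative sketch of that preprocessing step (the function name, frame size, and depth range here are assumptions, not values taken from the SLP or BodyMAP code; follow SLPDataset for the exact normalization):

```python
import numpy as np

# Hypothetical preprocessing for a raw .npy depth map; the constants are
# assumptions (SLP depth frames come from a Kinect-style sensor), not
# values taken from the BodyMAP/SLP code.
def preprocess_depth(depth_mm, max_depth_mm=2264.0):
    """Clip a millimetre depth map and scale it to [0, 1] as float32."""
    depth = np.asarray(depth_mm, dtype=np.float32)
    depth = np.clip(depth, 0.0, max_depth_mm)
    return depth / max_depth_mm

# Usage: load your own .npy depth map and add batch/channel dimensions.
depth = np.random.uniform(0.0, 3000.0, size=(424, 512))  # stand-in frame
batch = preprocess_depth(depth)[None, None, :, :]         # (1, 1, H, W)
```

The batch/channel reshaping matches the usual PyTorch convention for single-channel image inputs; the model may additionally expect a specific resolution, so check the dataset loader before feeding your own frames.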
Dear author,
Thank you for your work on the BodyMAP project! We are currently working on a project that aims to infer an SMPL model from depth images of in-bed human poses. Unfortunately, we don't have access to 2D pressure map data, so we are focused on using depth information alone as the input.
We noticed that in your ablation study, the “depth-only” model performed well in predicting 3D body shape and pose. Would it be possible to access any pretrained weights specifically for the depth-only model? These would be highly valuable for our analysis and would allow us to benchmark our results more effectively.
If pretrained weights aren't available, any guidance on training the model from scratch would be greatly appreciated.
Thank you for your work and for considering our request!
David