Request for pretrained weights for depth-only model #5

Open · DavidTu21 opened this issue Oct 31, 2024 · 5 comments

@DavidTu21

Dear author,

Thank you for your work on the BodyMap project! We are currently working on a project that aims to infer an SMPL model from depth images of in-bed human poses. Unfortunately, we don’t have access to 2D pressure map data, so we're focused on using depth information alone as the input.

We noticed that in your ablation study, the “depth-only” model performed well in predicting 3D body shape and pose. Would it be possible to access any pretrained weights specifically for the depth-only model? These would be highly valuable for our analysis and would allow us to benchmark our results more effectively.

If pretrained weights aren't available, any guidance on training the model from scratch would be greatly appreciated.

Thank you for your work and for considering our request!

David

@Tandon-A
Collaborator

Tandon-A commented Nov 3, 2024

Hi @DavidTu21 ,

We have released the depth-only model. You can download it from the link.
These model weights are released for non-commercial purposes only; please check the license file for details.

The BodyMAP model predicts both the SMPL model for 3D human pose and the 3D pressure map. For your task, it may benefit you to train a BodyMAP model for 3D human pose only.

To train a model that predicts only 3D human pose from the depth modality, you would need to modify the config file (a code sketch of these edits follows the list):

  • Set "modality" to "depth" link (as you only have depth information)
  • Set "pmap_loss" to false link
  • Set "contact_loss_fn" to "ct0" link
  • Set "lambda_pmap_loss", and "lambda_contact_loss" to 0.0 link
  • Set "main_model_fn" to "PMM1" link
  • Set "model_fn" to "PME1" link

Hope this helps your work.

@DavidTu21
Author

DavidTu21 commented Nov 3, 2024

Hi @Tandon-A

Thank you so much for the detailed explanation and for uploading the depth-only model; I really appreciate it. I am wondering whether I can run inference on my own depth image using this uploaded depth-only model.

I saw in the save_inference.py code the line

    batch_mesh_pred, batch_pmap_pred, _, _ = model.infer(batch_depth_images, batch_pressure_images, batch_labels[:, 157:159])

Can I replace the batch_depth_images input with my own depth image only? If so, are there any requirements on the input depth format? My current input is a depth map in .npy format.

I will, of course, also start training a BodyMAP model for 3D human pose only.

Kind regards,
David

@DavidTu21
Author

Hi @Tandon-A ,

Sorry for sending another comment; I think it is related to the previous request.

I have finished training the depth-only model using your suggested config (thank you again for that!) and have model weights trained for 100 epochs. But when I ran inference with python save_inference.py --model_path xxx --opts_path xxx --save_path xxx, I hit the issue below:

Traceback (most recent call last):
  File "save_inference.py", line 70, in <module>
    batch_pmap_pred *= MAX_PMAP_REAL
TypeError: unsupported operand type(s) for *=: 'NoneType' and 'float'

I suspect this is because I did not change anything in the inference script to take only depth as the input. Do you have suggestions on how I could modify the inference code to use depth alone as the input? Thank you very much again for your valuable time and detailed suggestions!

Kind regards,
David

@Tandon-A
Collaborator

Hi @DavidTu21 ,

batch_pmap_pred *= MAX_PMAP_REAL

You can just set batch_pmap_pred to a zeros tensor before this line for your model:

batch_pmap_pred = torch.zeros(batch_mesh_pred['out_joint_pos'].shape[0], 6890).to(DEVICE)

This will set the pmap predictions to zeros.
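
For context, here is a sketch of how the patched section of save_inference.py might read, using the variable names from the traceback and the infer call quoted earlier; the None check is an assumption about how a pose-only model reports the missing pressure map:

    # Variable names taken from save_inference.py as quoted above.
    batch_mesh_pred, batch_pmap_pred, _, _ = model.infer(
        batch_depth_images, batch_pressure_images, batch_labels[:, 157:159])

    # Assumption: a pose-only model returns None for the pressure map, so
    # substitute one zero per SMPL vertex (6890) for each sample in the batch.
    if batch_pmap_pred is None:
        batch_pmap_pred = torch.zeros(
            batch_mesh_pred['out_joint_pos'].shape[0], 6890).to(DEVICE)

    batch_pmap_pred *= MAX_PMAP_REAL  # now a harmless no-op on the zeros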

@Tandon-A
Collaborator

I am wondering if I would be able to perform an inference on the depth image (my own depth image) using this uploaded depth-only model?

You can certainly run inference on your own depth image. Please convert it to the format used by SLP, and then process it in the same way as is done in the SLPDataset file.
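
As a rough illustration only (not the repo's actual preprocessing), loading an .npy depth map and feeding it to the depth-only model could look like the sketch below. The resize/normalize step must be replaced with whatever SLPDataset actually does, and the dummy pressure and label tensors are assumptions for a model that ignores those inputs:

    import numpy as np
    import torch

    # "my_depth.npy" is a placeholder path for your own (H, W) depth map.
    depth = np.load("my_depth.npy").astype(np.float32)
    depth = torch.from_numpy(depth)[None, None]  # shape (1, 1, H, W)
    # TODO: resize and normalize to match the SLP format and the
    # preprocessing done in the repo's SLPDataset file.

    # Placeholders for the unused pressure and label inputs; the label
    # slice batch_labels[:, 157:159] seen earlier is 2 values wide.
    dummy_pressure = torch.zeros_like(depth)
    dummy_labels = torch.zeros(1, 2)

    with torch.no_grad():
        batch_mesh_pred, _, _, _ = model.infer(
            depth.to(DEVICE), dummy_pressure.to(DEVICE), dummy_labels.to(DEVICE))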
