Hi, thanks for your great assets. I am trying to align the smplx parameters with the scene world, and I have a few questions after looking into eval_dataset.py and align_smpl.py. I am not an expert in 3D transformations, so I hope you don't mind my "silly" questions:
I feel some parts in eval_dataset.py and align_smpl.py are not consistent:
How do we get smpl.obj from smplx.pkl? Should we pass the smplx.pkl to VPoser to obtain the pose parameters, and then use smplx to obtain the shape vertices? I found some related code in utils/vis_utils.py, but I am not sure whether any further transformations are needed.
In align_smpl.py, smpl.obj is loaded, rescaled (made larger), and then transformed into scene coordinates using the pose2scene RT matrix. In eval_dataset.py, on the other hand, smpl.obj is left untouched and instead the smpl parameters are rescaled and transformed (specifically the global orientation and translation). Also, eval_dataset.py loads scene_downsampled.ply instead of the textured mesh. Some other confusing parts include:
Is scene_downsampled.ply simply downsampled from textured_output.obj, or is there additional rescaling and an RT transformation involved?
What is transform_norm.txt? I noticed it transforms the scenes into some canonical space, but I couldn't find an explanation of it in the paper or the repository. Why should we apply transform_norm to the smpl parameters in eval_dataset.py?
Why does align_smpl.py apply scale to smpl.obj, while eval_dataset.py applies 1/scale to the global orientation and translation?
In align_smpl.py, the rescale is applied to the whole smpl model, while in eval_dataset.py it is only applied to the global translation, so the scale of the smpl body itself remains unchanged.
I know these are A LOT of questions. I would greatly appreciate it if you could help clarify them. I believe this could also be helpful for other beginners using this dataset.
Hi, thanks for asking! For the first question: yes, the .obj files are obtained from the .pkl files, and no further transformations are needed.
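In case a concrete example helps, here is a minimal sketch of that decoding step. It is not the exact code in utils/vis_utils.py: the pkl key names ('latent', 'global_orient', 'transl', 'betas'), the model paths, and the VPoser v2 API from human_body_prior are assumptions, so adapt them to the actual files.

```python
# Hedged sketch: decode a smplx.pkl into vertices and export smpl.obj.
# Pkl keys and model paths below are placeholders, not the repo's actual names.
import pickle
import torch
import trimesh
import smplx
from human_body_prior.tools.model_loader import load_model
from human_body_prior.models.vposer_model import VPoser

with open('smplx.pkl', 'rb') as f:
    params = pickle.load(f)

# Decode the 32-d VPoser latent into the 63-d body pose (axis-angle).
vposer, _ = load_model('V02_05', model_code=VPoser,
                       remove_words_in_model_weights='vp_model.', disable_grad=True)
latent = torch.tensor(params['latent']).float().reshape(1, -1)
body_pose = vposer.decode(latent)['pose_body'].reshape(1, 63)

# SMPL-X forward pass in its own (mocap) coordinate system; no extra transform here.
model = smplx.create('models', model_type='smplx', gender='neutral', use_pca=False)
out = model(body_pose=body_pose,
            global_orient=torch.tensor(params['global_orient']).float().reshape(1, 3),
            transl=torch.tensor(params['transl']).float().reshape(1, 3),
            betas=torch.tensor(params['betas']).float().reshape(1, -1))

trimesh.Trimesh(out.vertices.detach().numpy()[0], model.faces,
                process=False).export('smpl.obj')
```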
For question 2, scene_downsampled.ply is downsampled from textured_output.obj; you can simply open them together in MeshLab to verify that. For the transformations: essentially, there are 3 coordinate systems, the smplx coordinate system (from the mocap device), the gaze coordinate system (from the HoloLens 2), and the scene coordinate system (from the 3D scanner). In align_smpl.py, what we are doing is transforming smplx into the scene coordinate space: $$X_{scene} = W_{p2s}\,sX_p$$ where $s$ is the scale factor and $W_{p2s}=[R|t]$ transforms the scaled smplx vertices $X_p$ into the 3D scene space. In eval_dataset.py we are basically doing the same thing, but there are several points to pay attention to:
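As an illustration, here is a minimal numpy/trimesh sketch of that first equation (not the actual align_smpl.py code); the file names, the 4x4 layout of the pose2scene matrix, and the scale value are assumptions.

```python
# Sketch of the align_smpl.py route: scale the smplx vertices by s,
# then apply W_p2s = [R|t]. File names and matrix layout are assumed.
import numpy as np
import trimesh

smpl = trimesh.load('smpl.obj', process=False)
W_p2s = np.loadtxt('pose2scene.txt').reshape(4, 4)    # [R|t] in homogeneous form (assumed)
s = 1.2                                               # placeholder scale factor

X_p = np.asarray(smpl.vertices)                       # (N, 3) smplx vertices
X_scene = (s * X_p) @ W_p2s[:3, :3].T + W_p2s[:3, 3]  # X_scene = R (s X_p) + t

trimesh.Trimesh(X_scene, smpl.faces, process=False).export('smpl_in_scene.obj')
```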
(1) The 3D scene points are transformed into the canonical space using transform_norm.txt so that the PointNet++ backbone can extract more informative features. Thus, aligning smplx to the transformed scene becomes: $$X_{scene}' = W_n X_{scene} = W_n W_{p2s}\,sX_p = [R_n|t_n][R|t]\,sX_p$$ Note that this is equivalent to: $$\frac{1}{s}X_{scene}' = [R_n|t_n/s][R|t/s]X_p$$

(2) In eval_dataset.py we don't use the smplx vertices $X_p$; instead we represent the smplx pose by the global translation, global orientation, and the latent vector. So we transform the global orientation $R_g$ and translation $t_g$ using the equation above, since $X_p=[R_g|t_g]X_{local}$.
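A hedged sketch of that second form (not the actual eval_dataset.py code): the file names, the 4x4 layout of both transform files, where exactly transform_norm is applied, and the placeholder smplx globals are assumptions.

```python
# Sketch of the eval_dataset.py route: divide the translations of transform_norm
# and pose2scene by s, compose them, and fold the result into the smplx global
# pose; the scene points end up scaled by 1/s. Names below are placeholders.
import numpy as np
import trimesh

def rt(R, t):
    """Build a 4x4 homogeneous matrix [R|t] from a 3x3 rotation and a 3-vector."""
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M

W_p2s = np.loadtxt('pose2scene.txt').reshape(4, 4)      # [R | t]
W_n   = np.loadtxt('transform_norm.txt').reshape(4, 4)  # [R_n | t_n]
s = 1.2                                                 # placeholder scale factor

# W = [R_n | t_n/s] [R | t/s]
W = rt(W_n[:3, :3], W_n[:3, 3] / s) @ rt(W_p2s[:3, :3], W_p2s[:3, 3] / s)

# Scene side: canonical-space scene points scaled by 1/s, i.e. (1/s) W_n X_scene.
scene = trimesh.load('scene_downsampled.ply', process=False)
scene_pts = (np.asarray(scene.vertices) @ W_n[:3, :3].T + W_n[:3, 3]) / s

# SMPL-X side: since X_p = [R_g|t_g] X_local, fold W into the global pose.
R_g = np.eye(3)     # global_orient converted to a rotation matrix (placeholder)
t_g = np.zeros(3)   # global translation (placeholder)
R_g_new = W[:3, :3] @ R_g
t_g_new = W[:3, :3] @ t_g + W[:3, 3]
```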
So you can see that in eval_dataset.py we don't scale the smplx itself; instead we rescale the 3D scene by $1/s$ (the smplx global translation and orientation are not scaled). That's where the 1/scale comes from. In align_smpl.py we simply rescale the smplx vertices, since that is easier and we only want to visualize. Both transformations aim to align smplx with the scene, so they are essentially doing the same thing.
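For anyone who wants to convince themselves of the equivalence, here is a small self-contained check with random transforms (no repo files needed); the random matrices simply stand in for the real $[R|t]$ and $[R_n|t_n]$.

```python
# Numeric check that the two routes agree: transforming scaled smplx vertices
# (align_smpl.py style) matches the eval_dataset.py form up to the global 1/s.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
R, R_n = Rotation.random(2, random_state=0).as_matrix()   # random rotations
t, t_n = rng.normal(size=3), rng.normal(size=3)           # random translations
s = 1.2
X_p = rng.normal(size=(100, 3))                           # fake smplx vertices

# Route A (align_smpl.py): X_scene' = [R_n|t_n] [R|t] (s X_p)
A = (s * X_p) @ R.T + t
A = A @ R_n.T + t_n

# Route B (eval_dataset.py): (1/s) X_scene' = [R_n|t_n/s] [R|t/s] X_p
B = X_p @ R.T + t / s
B = B @ R_n.T + t_n / s

assert np.allclose(A / s, B)   # both routes align smplx and the scene identically
```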