Compute action for humanoid env based on mocap data #3365
-
Just an update on this. I have been working off of
I then saved these values along with the states for the walking data in the main
I then saved the state and torque data and used them with a humanoid built with
The commands are "working" in that the humanoid moves, but it just slowly falls to the ground. I tried debugging by setting the action to a fixed value.
I cannot figure out how to get all of the values scaled within the correct range. I am also not sure whether I am choosing the joints correctly. The rotations listed in the wiki for the gym humanoid show the joint rotations in the order x-z-y. I am still unclear whether this is the x position of the rotation (assumed to be the first index) or rotation about x (which could be y or z depending on the coordinate system). I also see that the
Any help is appreciated! Thanks! @erwincoumans Are you able to provide some insight?
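One way to check the joint ordering and ranges directly, rather than relying on the wiki, is to enumerate them from the loaded model; a minimal sketch, assuming the `humanoid/humanoid.urdf` that ships with `pybullet_data`:

```python
import pybullet as p
import pybullet_data

# Enumerate the humanoid's joints to verify names, ordering, and limits.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
humanoid = p.loadURDF("humanoid/humanoid.urdf")

for j in range(p.getNumJoints(humanoid)):
    info = p.getJointInfo(humanoid, j)
    name = info[1].decode("utf-8")   # joint name from the URDF
    joint_type = info[2]             # e.g. p.JOINT_SPHERICAL, p.JOINT_REVOLUTE
    lower, upper = info[8], info[9]  # position limits (revolute/prismatic only)
    print(j, name, joint_type, lower, upper)
```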
-
This command should work if all dependencies are installed properly.
Once that works, you can check which files are used and see the action bounds etc.
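To see the action bounds programmatically once the env loads, a minimal sketch; the environment id here is an assumption, so substitute whichever env you are actually running:

```python
import gym
import pybullet_envs  # registers the Bullet gym environments on import

# "HumanoidDeepMimicWalkBulletEnv-v1" is an assumed id; substitute your env.
env = gym.make("HumanoidDeepMimicWalkBulletEnv-v1")
print(env.action_space)       # shape of the action vector
print(env.action_space.low)   # per-dimension lower bounds
print(env.action_space.high)  # per-dimension upper bounds
```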
-
Hi,
I am trying to implement behavioral cloning using the humanoid and mocap data. I know it is not as sophisticated as DeepMimic, but it's a simpler starting point. I have been playing with lots of the humanoid examples and I am struggling to figure out how I can compute values for the action space. I played with several examples under `bullet3/examples/pybullet/gym/`:

- `../pybullet_envs/deep_mimic/env/testHumanoid.py`
- `../pybullet_envs/deep_mimic/mocap/` (`inverse_kinematics.py`, `render_reference.py`, etc.)
- `../pybullet_examples/humanoidMotionCapture.py` (location of the mocap data I've been playing with)
I have also been using the humanoid from `gym_env.DeepMimicHumanoidGymEnv:DeepMimicHumanoidGymEnv`.
I know how to define networks with the correct dimensions for `action_space` and `observation_space` if I want to train from scratch or use the DeepMimic approach. For behavioral cloning, I need (state, action) pairs that I can use as expert data. My issue is that the action space is only `(17,)`, but everything being passed to the joints seems to be in the `observation_space` dimensions. I tried using the output of the various PD calculations, such as `taus = stablePD.computePD(...)`, but this is also based on a larger joint space.

Do I need to use something like `calculateInverseDynamics` to compute the joint torques? Or is the `action_space (17,)` just a subset of the joints, and is there a simple way to do it?

I am still quite new to pybullet, so I am still working through everything.
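For the `calculateInverseDynamics` route, a minimal sketch of recovering torques from a reference pose; the fixed-base KUKA arm and the zero target accelerations are assumptions to keep it self-contained (the floating-base humanoid would also need its base DoFs included):

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
# A fixed-base arm keeps the example simple; for the floating-base humanoid,
# base position/orientation DoFs also enter q, qdot, and qddot.
robot = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)

num_joints = p.getNumJoints(robot)
states = p.getJointStates(robot, list(range(num_joints)))
q     = [s[0] for s in states]   # joint positions (from mocap, in practice)
qdot  = [s[1] for s in states]   # joint velocities
qddot = [0.0] * num_joints       # target accelerations, e.g. finite-differenced
                                 # from consecutive mocap frames

# Torques that would realize (q, qdot, qddot), gravity compensation included.
taus = p.calculateInverseDynamics(robot, q, qdot, qddot)
print(taus)  # one torque per degree of freedom
```

If that works out, the per-frame (state, torque) pairs could serve as the expert data for behavioral cloning, assuming the torques can be mapped onto the 17 actuated dimensions and scaled to the env's action bounds.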