This repository provides the implementation for the paper:
One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion
Nico Bohlinger, Grzegorz Czechmanowski, Maciej Krupka, Piotr Kicki, Krzysztof Walas, Jan Peters and Davide Tateo
Conference on Robot Learning, 2024
Paper / Project page / Video
- Install RL-X
Default installation for a Linux system with an NVIDIA GPU. For other configurations, see the RL-X documentation.
```bash
conda create -n one_policy_to_run_them_all python=3.11.4
conda activate one_policy_to_run_them_all
git clone git@github.com:nico-bohlinger/RL-X.git
cd RL-X
pip install -e .[all] --config-settings editable_mode=compat
pip uninstall $(pip freeze | grep -i '\-cu12' | cut -d '=' -f 1) -y
pip install "torch==2.2.1" --index-url https://download.pytorch.org/whl/cu118 --upgrade
pip install -U "jax[cuda12_pip]==0.4.25" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```
- Install the project
```bash
git clone git@github.com:nico-bohlinger/one_policy_to_run_them_all.git
cd one_policy_to_run_them_all
pip install -e .
```
Note: If package versions conflict due to newer versions of RL-X, modify RL-X's dependencies to match the `requirements.txt` file in this repository, as it contains the exact package versions used for this project.
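One way to spot such mismatches is a small helper like the following. This is only a sketch; it assumes `requirements.txt` uses plain `name==version` pins:

```python
# Hypothetical helper: report installed packages whose versions differ
# from the pins in requirements.txt (assumes simple `name==version` lines).
from importlib.metadata import PackageNotFoundError, version

with open("requirements.txt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, pinned = line.split("==", 1)
        try:
            installed = version(name)
        except PackageNotFoundError:
            installed = "not installed"
        if installed != pinned:
            print(f"{name}: installed {installed}, pinned {pinned}")
```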
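Once everything is installed, a quick sanity check (a convenience, not part of the official instructions) is to confirm that both PyTorch and JAX see the GPU:

```python
# Convenience check after installation: both PyTorch (cu118 wheel) and
# JAX (CUDA 12) should report the GPU.
import jax
import torch

print("PyTorch CUDA available:", torch.cuda.is_available())
print("JAX devices:", jax.devices())  # expect a CUDA device, not just CPU
```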
- Run the following commands to start an experiment
```bash
cd one_policy_to_run_them_all/experiments
sbatch experiment.sh
```
- Move the trained model to the experiments folder or use the `pre_trained_model` file
- Run the following commands to test a trained model
```bash
cd one_policy_to_run_them_all/experiments
bash test.sh
```
Either modify the `commands.txt` file, where the values are the target x, y and yaw velocities, or connect an Xbox 360 controller and control the target x and y velocities with the left joystick and the yaw velocity with the right joystick.
To switch the robot, either change the robot id in the `multi_render.txt` file or press the LB and RB buttons on the controller.
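For illustration only (the exact layout of `commands.txt` is an assumption here; check the file shipped in the experiments folder), a command of 0.5 m/s forward, no lateral motion and 0.2 rad/s yaw could look like:

```
0.5 0.0 0.2
```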
To add a new robot:
- Copy an existing robot folder in the `one_policy_to_run_them_all/environments` directory
- Change the name of the folder and all imports in the files
- Update the XML and meshes in the `data` folder
- In `environment.py`, adjust the following variables: `LONG_NAME`, `SHORT_NAME`, `nominal_joint_positions`, `max_joint_velocities`, `initial_drop_height`, `collision_groups`, `foot_names`, `joint_names`, `joint_nr_direct_child_joints` and the `joints_masks` if present / needed (see the Cassie robot environment; a hypothetical sketch of these variables follows after this list)
- Adjust the reward coefficients and `curriculum_steps` in `reward_functions/rudin_own_var.py` (illustrated by the curriculum sketch after this list)
- Adjust the controller gains and `scaling_factor` in `control_functions/rudin2022.py` (see the PD control sketch after this list)
- Adjust the foot geom indices in `domain_randomization/mujoco_model_functions/default.py` and `domain_randomization/seen_robot_functions/default.py`
- Adjust the domain randomization ranges in the `domain_randomization` folder (see the sampling sketch after this list)
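To make the `environment.py` step concrete, here is a hypothetical excerpt for an imaginary 12-DoF quadruped. Every name follows the list above, but all values and joint names are placeholders, not taken from this repository; the remaining variables (`collision_groups`, `joint_nr_direct_child_joints`, `joints_masks`) are robot-specific and omitted here:

```python
import numpy as np

# All values below are illustrative placeholders for an imaginary quadruped.
LONG_NAME = "example_quadruped"
SHORT_NAME = "exq"

joint_names = [
    f"{leg}_{joint}"
    for leg in ("FL", "FR", "RL", "RR")
    for joint in ("hip", "thigh", "calf")
]
foot_names = ["FL_foot", "FR_foot", "RL_foot", "RR_foot"]

nominal_joint_positions = np.array([0.0, 0.8, -1.6] * 4)  # standing pose (rad)
max_joint_velocities = np.full(12, 21.0)                  # rad/s
initial_drop_height = 0.4                                 # m
```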
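For the reward step, a hypothetical illustration of how `curriculum_steps` might phase in a penalty coefficient (the actual schedule lives in `reward_functions/rudin_own_var.py`):

```python
def curriculum_coefficient(final_coeff: float, global_step: int, curriculum_steps: int) -> float:
    """Linearly ramp a reward coefficient from 0 to its final value."""
    return final_coeff * min(1.0, global_step / curriculum_steps)

# e.g. an action-rate penalty reaching full strength after 1M steps
penalty = curriculum_coefficient(-0.01, global_step=250_000, curriculum_steps=1_000_000)
```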
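For the controller step, the file name points to the PD control scheme popularized by Rudin et al. (2022); a minimal sketch of that scheme, with placeholder gains:

```python
import numpy as np

def pd_torques(action, joint_pos, joint_vel, nominal_joint_positions,
               p_gain=20.0, d_gain=0.5, scaling_factor=0.25):
    """PD law in the style of Rudin et al. (2022): the policy action is
    scaled, added to the nominal pose, and tracked with PD gains."""
    target = nominal_joint_positions + scaling_factor * action
    return p_gain * (target - joint_pos) - d_gain * joint_vel
```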
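Finally, domain randomization ranges typically define per-parameter intervals that are resampled at every episode reset; the intervals below are illustrative only, not the repository's values:

```python
import numpy as np

rng = np.random.default_rng()

# Redrawn at every reset; intervals are placeholders.
friction = rng.uniform(0.4, 1.0)
added_trunk_mass = rng.uniform(-1.0, 3.0)      # kg
motor_strength_scale = rng.uniform(0.9, 1.1)   # multiplier on torque limits
```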
If you use or refer to this repository in your research, please cite:
```bibtex
@article{bohlinger2024onepolicy,
    title={One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion},
    author={Bohlinger, Nico and Czechmanowski, Grzegorz and Krupka, Maciej and Kicki, Piotr and Walas, Krzysztof and Peters, Jan and Tateo, Davide},
    journal={Conference on Robot Learning},
    year={2024}
}
```
The robot assets and XML files do not belong to the authors of this repository. They are used for research purposes only and are not covered by the MIT license. The MIT license only covers the code in this repository.