This repository provides a sim-to-real RL training and testing environment for robotic assembly, together with a modification of APEX-DDPG that adds the option of recording and using human demonstrations. We include two examples in simulation (PyBullet): a Franka Panda robot performing the peg-in-hole task, and a robot-less end-effector performing the lap-joint task. We also include a template for connecting your real robot when you roll out a successfully learned policy.
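Both simulated tasks are driven through the standard Gym interface. The sketch below is a minimal stand-in, not the repository's actual code; the class name, observation layout, and action dimensions are illustrative assumptions:

```python
import gym
import numpy as np

class PegInHoleEnv(gym.Env):
    """Hypothetical stand-in for an assembly environment in this repo."""

    def __init__(self):
        # Assumed shapes: 6-DoF end-effector motion as actions,
        # pose plus force/torque readings as observations.
        self.action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(6,), dtype=np.float32)
        self.observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(12,), dtype=np.float32)

    def reset(self):
        # Return the initial observation.
        return np.zeros(self.observation_space.shape, dtype=np.float32)

    def step(self, action):
        # Placeholder dynamics: a real env would advance the simulation here.
        obs = np.zeros(self.observation_space.shape, dtype=np.float32)
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

# The loop a training or rollout script would drive:
env = PegInHoleEnv()
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
```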
This repository has been tested against Ray 0.7.5, so the instructions below assume that version. Feel free to try later versions of Ray and modify the code accordingly.
- Create a conda environment with Python 3.6 and install Ray 0.7.5 (https://pypi.org/project/ray/):
  $ pip install ray==0.7.5
- Install the following dependencies:
  $ pip install pybullet==2.2.6 tensorflow==1.10.0 gym opencv-python getch pygame transforms3d
- Download the Ray source code from https://github.com/ray-project/ray/releases/tag/ray-0.7.5 and keep the rllib folder in your local working directory.
- In the ray folder, find python/ray/setup-dev.py and run
  $ python setup-dev.py
  to link Ray to the local rllib folder.
- Clone this repository inside the rllib folder.
- Run
  $ python copy-to-rllib.py
  to install the patch. The import check sketched below can verify the whole setup.
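If the steps above succeeded, the pinned packages and the locally linked rllib should all be importable. A quick sanity check, using plain package imports only and nothing repository-specific:

```python
import ray
import ray.rllib
import tensorflow as tf
import pybullet  # import check only
import gym       # import check only

# The instructions pin these versions; a mismatch usually means the
# wrong conda environment is active.
assert ray.__version__ == "0.7.5", ray.__version__
assert tf.__version__ == "1.10.0", tf.__version__

# After setup-dev.py, rllib should resolve to your local working copy.
print("rllib loaded from:", ray.rllib.__file__)
```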
Configure the parameters:
- Environment parameters in envs_launcher.py
- Training hyper-parameters in the hyper_parameters/*.yaml files (an example of the expected YAML shape follows this list)
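The exact keys live in the repository's YAML files, but an RLlib 0.7.5 training script conventionally consumes tune-style experiment definitions. The sketch below shows that general shape; the experiment name, env id, and values are illustrative assumptions, and PyYAML ships as a Ray dependency:

```python
import yaml

# Hypothetical hyper-parameter entry in the tune experiment format;
# the repository's actual keys and values may differ.
example = """
peg_in_hole_apex_ddpg:
    run: APEX_DDPG
    env: PegInHoleEnv-v0
    stop:
        training_iteration: 1000
    config:
        num_workers: 4
        learning_starts: 1000
        train_batch_size: 512
"""

experiments = yaml.safe_load(example)
print(experiments["peg_in_hole_apex_ddpg"]["config"])
```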
Test and visualize the simulation environment with the default input device:
$ python run.py
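If the viewer does not open, it can help to rule out PyBullet itself first. This minimal GUI check uses only stock pybullet_data assets, nothing from this repository:

```python
import time
import pybullet as p
import pybullet_data

# Open the GUI, load a ground plane, and step the simulation briefly.
p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")
for _ in range(240):
    p.stepSimulation()
    time.sleep(1.0 / 240.0)
p.disconnect()
```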
Record a human demonstration with an Xbox controller:
$ python run.py --input-type xbc --save-demo-data=True --demo-data-path=human_demo_data/<example>
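Controller input of this kind is typically read through pygame's joystick API (pygame is among the dependencies installed above). A minimal standalone reading loop, separate from whatever mapping the repository's input handler applies:

```python
import pygame

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)  # first connected controller
stick.init()

# Poll events and print the left-stick axes; an input handler would map
# such values to end-effector motion commands.
for _ in range(100):
    pygame.event.pump()
    x, y = stick.get_axis(0), stick.get_axis(1)
    print("left stick:", round(x, 2), round(y, 2))
    pygame.time.wait(50)
```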
Train a model:
$ python train.py -f hyper_parameters/*.yaml
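train.py presumably follows the pattern of RLlib's own training script: parse the YAML into experiment definitions and hand them to tune. A hedged programmatic equivalent, with an illustrative file name:

```python
import yaml
import ray
from ray import tune

ray.init()

# Load the experiment definition(s) from a hyper-parameter file and run
# them; this mirrors what an RLlib 0.7.5 train script does with -f.
with open("hyper_parameters/example.yaml") as f:  # illustrative path
    experiments = yaml.safe_load(f)

tune.run_experiments(experiments)
```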
Roll out a trained model:
$ python rollout.py <path_to_trained_model>/checkpoint-<iteration>/checkpoint-<iteration>
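Programmatically, one common way to restore a checkpoint in Ray 0.7.5 is through the agent registry; rollout.py presumably does something similar. In the sketch below, the env id and checkpoint path are illustrative assumptions:

```python
import ray
from ray.rllib.agents.registry import get_agent_class

ray.init()

# Rebuild the trainer type that produced the checkpoint, then restore it.
cls = get_agent_class("APEX_DDPG")
agent = cls(env="PegInHoleEnv-v0", config={"num_workers": 1})  # keep evaluation lightweight
agent.restore("path_to_trained_model/checkpoint-100/checkpoint-100")  # illustrative path

# Query the restored policy: in Ray 0.7.5 the local evaluation env is
# reachable through the trainer's worker set.
env = agent.workers.local_worker().env
obs, done = env.reset(), False
while not done:
    obs, reward, done, info = env.step(agent.compute_action(obs))
```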
- All code is written in Python 3.6.
- All code was tested on macOS, Windows, and Ubuntu.
- For licensing information, see LICENSE.