Motivation
The calculation of inverse kinematics can be computationally expensive, since analytical solutions are often not available and numerical methods must be used instead. These numerical algorithms can be sped up by providing an initial estimate that is close to the correct solution. The goal of this work is to obtain such initial estimates using neural networks. We compare two network architectures for this problem: an invertible neural network (INN) trained on a forward kinematics dataset, and a generative adversarial network (GAN) trained on an inverse kinematics dataset. Our approach can be seen as an extension of the work by [Ardizzone et al.](https://arxiv.org/abs/1808.04730): we use more complex robot configurations and extend the setting to 3D.

Setup

Use the following `setup.sh` script to clone the repo, build a Docker image, and start a container.
#!/bin/bash
git clone https://github.com/a-doering/learning-inverse-kinematics.git
cd learning-inverse-kinematics
docker build -f Dockerfile -t learnik .
# This will also activate the conda environment
docker run -ti learnik /bin/bash
The data is generated using rejection sampling. The following describes the 2D case of a 7-DOF robot arm with one prismatic and six rotational joints; the 3D case follows the same concept. We create a dataset with n TCP (Tool Center Point) positions (`pos`), each with m joint configurations (`thetas`). The procedure is summarized in the table below, followed by a small sketch of the idea.
Forward | One Inverse | m Inverses
---|---|---
Sample n positions | Sample configurations within an epsilon ball of each position | Repeat until you have m configurations per position
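The actual sampling code lives in the generation scripts below; the following is only a minimal sketch of the idea, assuming a hypothetical planar `forward_kinematics` (one prismatic joint that slides the base, followed by six unit-length rotational links) and made-up values for n, m, and epsilon.

```python
import numpy as np

# Illustrative sizes and tolerance, not the values used by the repository's scripts.
N_POSITIONS, M_CONFIGS, EPSILON = 5, 10, 0.25

def forward_kinematics(thetas):
    """Hypothetical planar FK: thetas[0] slides the base along the y-axis
    (prismatic joint), thetas[1:] are six rotational joints with unit-length links."""
    x, y, phi = 0.0, thetas[0], 0.0
    for angle in thetas[1:]:
        phi += angle
        x += np.cos(phi)
        y += np.sin(phi)
    return np.array([x, y])

def sample_thetas(rng):
    """Draw a random configuration within assumed joint limits."""
    prismatic = rng.uniform(-1.0, 1.0, size=1)
    rotational = rng.uniform(-np.pi, np.pi, size=6)
    return np.concatenate([prismatic, rotational])

def build_inverse_dataset(rng):
    """n positions, each with m configurations accepted by rejection sampling."""
    positions, thetas_per_pos = [], []
    for _ in range(N_POSITIONS):
        # Step 1: sample a reachable TCP position from a random configuration.
        pos = forward_kinematics(sample_thetas(rng))
        # Steps 2-3: keep sampling until m configurations land inside the epsilon ball.
        accepted = []
        while len(accepted) < M_CONFIGS:
            candidate = sample_thetas(rng)
            if np.linalg.norm(forward_kinematics(candidate) - pos) < EPSILON:
                accepted.append(candidate)
        positions.append(pos)
        thetas_per_pos.append(np.stack(accepted))
    return np.stack(positions), np.stack(thetas_per_pos)

pos, thetas = build_inverse_dataset(np.random.default_rng(0))
print(pos.shape, thetas.shape)  # (5, 2) (5, 10, 7)
```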
Before we can train the models, we need to create training data. When choosing parameters, keep in mind that the INN needs only the forward kinematics data.
# Generate 2D training data
python src/kinematics/robot_arm_2d.py
# Generate 3D training data
python src/kinematics/robot_arm_3d.py
At the beginning of training we can observe mode collapse (all generated configurations are more or less the same), but later the configurations fan out.
Training animations: Training 2D | Training 3D
# Train the 2D GAN
python src/kinematics/gan/train.py
# Train the 3D GAN
python src/kinematics/gan_3d/train.py
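The generator and discriminator are defined in the training scripts above. The following is only a rough sketch of a position-conditioned GAN in PyTorch, with made-up layer sizes, assuming the 2D arm (7 joint values, 2D TCP position); it is meant to show how multiple inverse kinematics candidates are produced by varying the latent vector for a fixed target position.

```python
import torch
import torch.nn as nn

LATENT_DIM, POS_DIM, THETA_DIM = 8, 2, 7  # illustrative sizes for the 2D arm

class Generator(nn.Module):
    """Maps a latent vector and a target TCP position to a joint configuration."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + POS_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, THETA_DIM),
        )

    def forward(self, z, pos):
        return self.net(torch.cat([z, pos], dim=1))

class Discriminator(nn.Module):
    """Scores whether a (configuration, position) pair looks like the dataset."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(THETA_DIM + POS_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, thetas, pos):
        return self.net(torch.cat([thetas, pos], dim=1))

# Drawing m latent vectors for one target position yields m candidate IK solutions.
gen = Generator()
pos = torch.tensor([[1.5, 0.5]]).repeat(16, 1)  # one target position, repeated
z = torch.randn(16, LATENT_DIM)                 # 16 latent samples
candidate_thetas = gen(z, pos)                  # shape (16, 7)
```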
# Train the INN. You can log in to wandb (Weights & Biases) to log your training.
python src/inn/train.py
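The INN is based on the invertible architecture of Ardizzone et al. As a simplified illustration (not the repository's actual implementation), a single affine coupling block can be written as follows; the same weights define both the forward mapping and its exact inverse, which is what allows training on forward kinematics while sampling inverse solutions.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling block (simplified): the first half of the input
    parametrizes an invertible scale-and-shift of the second half."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.scale_shift = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.scale_shift(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(torch.tanh(s)) + t  # tanh keeps the scaling well conditioned
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.scale_shift(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-torch.tanh(s))
        return torch.cat([y1, x2], dim=1)

# The inverse pass exactly recovers the input of the forward pass.
block = AffineCoupling(dim=8)
x = torch.randn(4, 8)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)
```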
# Plot the training losses
python src/evaluate/plot_losses.py
2D examples for a 7-DOF robot:
Generated configurations for different target positions
Generated configurations for different latent variables
# Evaluate the trained GAN
python src/evaluate/evaluate_gan.py
2D examples for a 7-DOF robot:

Ground Truth Distributions | Predicted Distributions
---|---
Positions | Positions
Thetas | Thetas
3D examples for a 7-DOF robot:

Ground Truth Distributions | Predicted Distributions
---|---
Positions | Positions
Thetas | Thetas
# Evaluate the generated configurations with maximum mean discrepancy (MMD)
python src/evaluate/evaluate_with_mmd.py
# Evaluate the null space (multiple configurations per position) with MMD
python src/evaluate/evaluate_null_space_with_mmd.py
# Plot the ground truth and predicted distributions
python src/evaluate/plot_distributions.py
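Both MMD scripts compare the distribution of generated joint configurations against the ground-truth distribution from the dataset. As a reference, a minimal maximum mean discrepancy estimate with a mixture of Gaussian kernels looks roughly like the sketch below; the bandwidths and the exact estimator used by the repository may differ.

```python
import torch

def mmd(x, y, bandwidths=(0.1, 1.0, 10.0)):
    """Biased MMD^2 estimate between sample sets x (n, d) and y (m, d),
    using a mixture of Gaussian kernels with the given bandwidths."""
    def kernel(a, b):
        sq_dists = torch.cdist(a, b) ** 2
        return sum(torch.exp(-sq_dists / (2 * bw ** 2)) for bw in bandwidths)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Example with placeholder data: generated vs. ground-truth configurations
# for one TCP position (7 joint values each).
generated = torch.randn(50, 7)
ground_truth = torch.randn(50, 7)
print(mmd(generated, ground_truth).item())
```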
Andreas Doering 💻📖🤔 GAN, Kinematics | Arman Mielke 💻 🤔 INN | Johannes Tenhumberg 🔌🤔🔬 Idea, Mentoring, rokin |
This work was conducted as a research project for the Advanced Deep Learning for Robotics course taught by Professor Berthold Bäuml at the Technical University of Munich, under the supervision of Johannes Tenhumberg. The project was supported by a Google Educational Grant.