# Learning Inverse Kinematics with GANs and INNs (Invertible Neural Networks)

## Motivation

The calculation of inverse kinematics can be computationally expensive, since analytical solutions are often not available and numerical methods must be used instead. These numerical algorithms can be sped up by providing an initial estimate that is close to the correct solution. The goal of this work is to obtain such initial estimates using neural networks. We compare two network architectures for this problem: an invertible neural network (INN) trained on a forward kinematics dataset, and a generative adversarial network (GAN) trained on an inverse kinematics dataset. Our approach can be seen as an extension of the work by [Ardizzone et al.](https://arxiv.org/abs/1808.04730), using more complex robot configurations and extending it to a 3D setting.

## 1. Installation using Docker

Use the following `setup.sh` script to clone the repository, build a Docker image, and start a container.

```bash
#!/bin/bash
git clone https://github.com/a-doering/learning-inverse-kinematics.git
cd learning-inverse-kinematics
docker build -f Dockerfile -t learnik .
# This will also activate the conda environment
docker run -ti learnik /bin/bash
```

## 2. Generate Training Data

The data is generated using rejection sampling. Shown below is a 2D example for a 7-DOF robot arm with one prismatic and six rotational joints; the 3D case follows the same concept. We create a dataset with n TCP (Tool Center Point) positions (`pos`), each with m joint configurations (`thetas`).

*Animations: forward kinematics, one inverse, m inverses, all inverses. Sample n positions, sample configurations within an epsilon ball of each position, and repeat until there are m configurations per position.*
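
As a rough illustration of this sampling scheme (not the repository's implementation; the link lengths, joint limits, epsilon, and array layout are assumptions), the 2D case could look like this:

```python
import numpy as np

# Hypothetical 7-DOF planar arm: one prismatic joint followed by six revolute joints.
LINK_LENGTHS = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])  # assumed link lengths

def forward_kinematics_2d(thetas):
    """thetas[0] is the prismatic extension, thetas[1:] are joint angles in radians."""
    x, y, angle = thetas[0], 0.0, 0.0   # prismatic joint extends along the x-axis
    for length, theta in zip(LINK_LENGTHS, thetas[1:]):
        angle += theta
        x += length * np.cos(angle)
        y += length * np.sin(angle)
    return np.array([x, y])

def sample_dataset(n_pos=100, m_configs=50, epsilon=0.05, seed=0):
    """Rejection sampling: n TCP positions, each with m joint configurations."""
    rng = np.random.default_rng(seed)
    positions, thetas_per_pos = [], []
    for _ in range(n_pos):
        # Sample a reference configuration and compute its TCP position.
        ref = np.concatenate([rng.uniform(0, 1, 1), rng.uniform(-np.pi, np.pi, 6)])
        pos = forward_kinematics_2d(ref)
        configs = [ref]
        # Keep sampling until m configurations land inside the epsilon ball around pos.
        # (A real implementation would sample candidates in large batches for speed.)
        while len(configs) < m_configs:
            cand = np.concatenate([rng.uniform(0, 1, 1), rng.uniform(-np.pi, np.pi, 6)])
            if np.linalg.norm(forward_kinematics_2d(cand) - pos) < epsilon:
                configs.append(cand)
        positions.append(pos)
        thetas_per_pos.append(np.stack(configs))
    return np.stack(positions), np.stack(thetas_per_pos)
```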

Before we can train the models, we need to generate training data. When choosing parameters, keep in mind that the INN only needs the forward kinematics data.

```bash
# Generate 2D training data
python src/kinematics/robot_arm_2d.py
# Generate 3D training data
python src/kinematics/robot_arm_3d.py
```

## 3.1 Train a GAN

At the beginning of training we can see mode collapse (all generated configurations are more or less the same); as training progresses, the configurations fan out.

*Animations: training in 2D and in 3D.*

```bash
# Train the 2D GAN
python src/kinematics/gan/train.py
# Train the 3D GAN
python src/kinematics/gan_3d/train.py
# You can log in to wandb (Weights & Biases) to log your training.
```
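
For orientation, a conditional GAN for this task might look roughly like the sketch below: the generator maps latent noise plus a target TCP position to a joint configuration, and the discriminator judges (configuration, position) pairs. The layer sizes, dimensions, and loss shown here are assumptions, not the repository's actual architecture; see the train scripts for the real one.

```python
import torch
import torch.nn as nn

N_JOINTS, POS_DIM, LATENT_DIM = 7, 2, 8  # assumed dimensions for the 2D case

class Generator(nn.Module):
    """Maps (latent noise, target TCP position) to a joint configuration."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + POS_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_JOINTS),
        )

    def forward(self, z, pos):
        return self.net(torch.cat([z, pos], dim=-1))

class Discriminator(nn.Module):
    """Scores whether a joint configuration is plausible for a given TCP position."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_JOINTS + POS_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, thetas, pos):
        return self.net(torch.cat([thetas, pos], dim=-1))

# A single illustrative generator update with the non-saturating GAN loss:
generator, discriminator = Generator(), Discriminator()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

pos = torch.rand(64, POS_DIM)      # placeholder batch of target TCP positions
z = torch.randn(64, LATENT_DIM)    # latent noise
fake_thetas = generator(z, pos)
loss_g = bce(discriminator(fake_thetas, pos), torch.ones(64, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```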

## 3.2 Train an INN

```bash
# Train the INN
python src/inn/train.py
# Plot the training losses
python src/evaluate/plot_losses.py
```
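
An INN preserves dimensionality, so for the 2D 7-DOF arm the forward pass maps the 7 joint values to the 2D TCP position plus a 5D latent vector, and the inverse pass recovers joint configurations from a target position and a sampled latent. Below is a minimal sketch using the FrEIA library that Ardizzone et al. build on; whether this repository uses FrEIA, as well as the block count and subnet sizes, are assumptions.

```python
import torch
import torch.nn as nn
import FrEIA.framework as Ff
import FrEIA.modules as Fm

N_JOINTS, POS_DIM = 7, 2            # assumed 2D case
LATENT_DIM = N_JOINTS - POS_DIM     # INNs preserve dimensionality

def subnet_fc(dims_in, dims_out):
    """Small fully connected subnetwork used inside each coupling block."""
    return nn.Sequential(nn.Linear(dims_in, 128), nn.ReLU(), nn.Linear(128, dims_out))

inn = Ff.SequenceINN(N_JOINTS)
for _ in range(8):  # number of coupling blocks is an assumption
    inn.append(Fm.AllInOneBlock, subnet_constructor=subnet_fc, permute_soft=True)

thetas = torch.randn(64, N_JOINTS)            # a batch of joint configurations
out, log_jac_det = inn(thetas)                # forward: thetas -> [position, latent]
pos_pred, z = out[:, :POS_DIM], out[:, POS_DIM:]

# Inverse: sample a latent, concatenate it with a target position, recover joint angles.
z_sample = torch.randn(64, LATENT_DIM)
thetas_pred, _ = inn(torch.cat([pos_pred, z_sample], dim=-1), rev=True)
```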

## 4.1 Evaluation/Visualization GAN

2D examples for a 7-DOF robot arm.

*Generated configurations for different target positions*

*Generated configurations for different latent variables*

```bash
python src/evaluate/evaluate_gan.py
```
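
These plots overlay many generated arm configurations for a single target. A rough, self-contained sketch of how such an overlay can be produced with matplotlib (the link lengths and prismatic-joint convention are the same assumptions as in the sampling sketch above; this is not the repository's plotting code):

```python
import numpy as np
import matplotlib.pyplot as plt

LINK_LENGTHS = np.array([1.0] * 6)  # assumed link lengths

def arm_points_2d(thetas):
    """Joint positions of the planar arm (prismatic joint first), for plotting."""
    pts = [np.array([thetas[0], 0.0])]
    angle = 0.0
    for length, theta in zip(LINK_LENGTHS, thetas[1:]):
        angle += theta
        pts.append(pts[-1] + length * np.array([np.cos(angle), np.sin(angle)]))
    return np.stack(pts)

def plot_configurations(thetas_batch, target_pos):
    """Overlay many generated configurations that should all reach target_pos."""
    for thetas in thetas_batch:
        pts = arm_points_2d(thetas)
        plt.plot(pts[:, 0], pts[:, 1], alpha=0.3)
    plt.scatter(*target_pos, marker="x", color="red", zorder=3)
    plt.axis("equal")
    plt.show()
```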

## 4.2 Evaluation/Visualization INN

2D examples for a 7-DOF robot arm.

*Ground truth vs. predicted distributions of positions and thetas.*

3D examples for a 7-DOF robot arm.

*Ground truth vs. predicted distributions of positions and thetas.*
```bash
# Evaluate with maximum mean discrepancy (MMD)
python src/evaluate/evaluate_with_mmd.py
python src/evaluate/evaluate_null_space_with_mmd.py
# Plot ground-truth vs. predicted distributions
python src/evaluate/plot_distributions.py
```
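
The evaluation scripts compare predicted and ground-truth distributions using the maximum mean discrepancy (MMD). A minimal MMD estimator with a mixture of Gaussian kernels is sketched below; the kernel choice and bandwidths are assumptions, and the repository's scripts may use a different kernel (e.g. the inverse multiquadric kernel used by Ardizzone et al.).

```python
import torch

def mmd_rbf(x, y, bandwidths=(0.1, 0.5, 1.0, 2.0)):
    """Biased MMD estimate between samples x of shape (n, d) and y of shape (m, d).

    Typical use: x = predicted joint configurations for one TCP position,
    y = ground-truth configurations for the same position.
    """
    xx = torch.cdist(x, x) ** 2   # pairwise squared distances within x
    yy = torch.cdist(y, y) ** 2
    xy = torch.cdist(x, y) ** 2
    mmd = torch.zeros(())
    for bw in bandwidths:
        mmd = mmd + (
            torch.exp(-xx / (2 * bw ** 2)).mean()
            + torch.exp(-yy / (2 * bw ** 2)).mean()
            - 2 * torch.exp(-xy / (2 * bw ** 2)).mean()
        )
    return mmd

# Example: compare two sets of 7-DOF configurations
pred = torch.randn(200, 7)
truth = torch.randn(200, 7)
print(mmd_rbf(pred, truth).item())
```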

## Contributors


- Andreas Doering: 💻 📖 🤔 (GAN, Kinematics)
- Arman Mielke: 💻 🤔 (INN)
- Johannes Tenhumberg: 🔌 🤔 🔬 (Idea, Mentoring, rokin)

## Acknowledgements

This work was conducted as a research project in the Advanced Deep Learning for Robotics course taught by Professor Berthold Bäumel at the Technical University of Munich, under the supervision of Johannes Tenhumberg. This project has been supported by a Google Educational Grant.
