Robot Pain Anticipation


This repository implements a convolutional recurrent neural network that learns the functional mapping between video data and the probability of future collisions. The labels used to train this deep learning algorithm are generated by a self-supervised collision-detection deep learning method.

A custom-built stateful deep convolutional LSTM (ConvLSTM), implemented in PyTorch, is used to predict future collisions with an object placed in projectile motion. The projectile is set in motion with random initial velocities in the x, y, and z directions. Hit and miss labels are determined by a self-supervised deep-dynamics collision-detection mechanism.

Each network is trained on 3000 hit-and-miss simulations. The input to the neural network is a video clip of shape (70, 64, 64, 3), i.e., 70 RGB frames at 64x64 resolution. Models are validated and tested on more than 600 randomly generated simulations.
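As a minimal sketch, this is how such a (70, 64, 64, 3) clip could be converted to the per-frame (channel, height, width) layout that PyTorch convolutions expect; the array name, normalization, and the commented-out cell call are illustrative assumptions, not the repo's exact preprocessing:

  import numpy as np
  import torch

  # Hypothetical clip: 70 RGB frames at 64x64 resolution, pixel values in [0, 255].
  clip = np.random.randint(0, 256, size=(70, 64, 64, 3), dtype=np.uint8)

  # Convert to float, scale to [0, 1], and move channels before the spatial dims:
  # (T, H, W, C) -> (T, C, H, W), the layout torch.nn.Conv2d expects per frame.
  frames = torch.from_numpy(clip).float().div(255.0).permute(0, 3, 1, 2)

  # A stateful ConvLSTM consumes the clip one frame at a time; each frame gets
  # a leading batch dimension of 1, giving shape (1, 3, 64, 64).
  for t in range(frames.size(0)):
      frame = frames[t].unsqueeze(0)
      # hidden, cell = conv_lstm_cell(frame, (hidden, cell))  # per-frame update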

The best model achieves 91.2% per-frame prediction accuracy and runs inference at 60 Hz.

Demo


The video below demos a simple policy that moves the robot left or right (chosen at random) whenever the network's predicted collision probability exceeds a chosen threshold, i.e., whenever a future collision is predicted.
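A minimal sketch of that policy, assuming a hypothetical per-frame collision_probability value produced by the network; the threshold and action names are illustrative, not the exact values in stateful_demo.py:

  import random

  THRESHOLD = 0.5  # illustrative value; the actual demo threshold may differ

  def choose_action(collision_probability):
      """Dodge left or right (picked at random) once a future collision
      is predicted, i.e., when the network output crosses the threshold."""
      if collision_probability > THRESHOLD:
          return random.choice(["move_left", "move_right"])
      return "stay"

  print(choose_action(0.87))  # 'move_left' or 'move_right'
  print(choose_action(0.12))  # 'stay'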



The video below depicts the input to the neural network at inference time.



The videos below depict the cell states and hidden outputs of ConvLSTM layer 0 of the anticipation network.
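As a minimal sketch of how such channel activations could be rendered with matplotlib, assuming a hypothetical cell-state tensor of shape (channels, height, width); the repo's actual plotting logic lives in visualizer.py:

  import numpy as np
  import matplotlib.pyplot as plt

  # Hypothetical ConvLSTM layer-0 cell state: 8 channels over a 64x64 grid.
  cell_state = np.random.randn(8, 64, 64)

  # Render each channel as a heatmap in a 2x4 grid.
  fig, axes = plt.subplots(2, 4, figsize=(12, 6))
  for channel, ax in enumerate(axes.flat):
      ax.imshow(cell_state[channel], cmap="viridis")
      ax.set_title(f"channel {channel}")
      ax.axis("off")
  plt.tight_layout()
  plt.show()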

Specific contributions

  • Custom-built ConvLSTM cell class (a generic sketch of the idea follows this list)
see conv_lstm_cell.py, anticipation_model.py
  • "Dodgeball" robotic simulation in V-REP
see demo.ttt
  • Visualization class for viewing the activations or cell states (what has been learned) of the ConvLSTM
see visualizer.py
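The repo's cell class is in conv_lstm_cell.py; below is a minimal, generic ConvLSTM cell sketch in PyTorch that illustrates the idea (standard LSTM gate equations, with convolutions replacing the matrix multiplies). The channel counts, kernel size, and padding are assumptions, not the repo's exact hyperparameters:

  import torch
  import torch.nn as nn

  class ConvLSTMCell(nn.Module):
      """One ConvLSTM step: standard LSTM gates, but every affine map is a
      convolution over the (channels, height, width) feature maps."""

      def __init__(self, in_channels, hidden_channels, kernel_size=3):
          super().__init__()
          padding = kernel_size // 2  # keep spatial size constant
          # One convolution computes all four gates (input, forget, cell, output).
          self.gates = nn.Conv2d(in_channels + hidden_channels,
                                 4 * hidden_channels, kernel_size,
                                 padding=padding)

      def forward(self, x, state):
          h, c = state
          i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
          c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
          h = torch.sigmoid(o) * torch.tanh(c)
          return h, c

  # Stateful usage over a 70-frame clip of 64x64 RGB images (batch size 1).
  cell = ConvLSTMCell(in_channels=3, hidden_channels=8)
  h = torch.zeros(1, 8, 64, 64)
  c = torch.zeros(1, 8, 64, 64)
  for frame in torch.randn(70, 1, 3, 64, 64):
      h, c = cell(frame, (h, c))

Keeping the (h, c) pair alive across frames is what makes the cell "stateful": the network accumulates evidence about the projectile's trajectory one frame at a time rather than seeing the whole clip at once.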

Scripts to run

If everything is properly installed and demo.ttt is loaded in V-REP, one can dodge balls with the script

  python3 stateful_demo.py

One can visualize activations by running

  python3 train_anticipation.py --exp_type=activations

One can train new models by running

  python3 train_anticipation.py

One can collect data by running the following script with current_scene.ttt loaded in V-REP

  python3 run_vrep_simulation.py

Installing

Change base_dir in config.ini to the absolute path of the current directory (a sketch of reading this value follows the package list).
Packages needed to run the code include the following:

  • numpy
  • scipy
  • python3
  • pytorch
  • matplotlib
  • vrep
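A minimal sketch of reading base_dir with Python's standard configparser; the section name [DEFAULT] is an assumption about the layout of the repo's config.ini, which may differ:

  import configparser

  config = configparser.ConfigParser()
  config.read("config.ini")

  # Assumed layout: a section (guessed here as [DEFAULT]) holding base_dir;
  # the real section name in the repo's config.ini may differ.
  base_dir = config["DEFAULT"].get("base_dir")
  print(base_dir)  # should print the absolute path set during installation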

In the vrep_scenes directory, both demo.ttt and current_scene.ttt contain custom Lua code for the sphere object, and both include filepaths that must be changed in order to run. These filepaths need to point at the vrep_scripts folder.
