Support library for 2020 Telluride Neuromorphic Engineering Workshop Challenge "Insights into the Early Motion Pathway (VISION)"
A description of some of the illusions of interest is available here.
This library implements helper functions for generating data representative of optical illusions humans experience.
The master branch contains a Python package called `motion_illusions`. The package contains helper functions for simulating the illusory flow and bias estimation illusions.
The `steppingfeet_illusion_matlab` branch contains Matlab code for generating events from the stepping feet illusion.
To check out the stepping feet branch after cloning this repo, run:
git fetch origin steppingfeet_illusion_matlab
git checkout steppingfeet_illusion_matlab
The library implements image warping, visualization utilities, and a wrapper around the UnFlow optical flow estimator.
The image warping is meant to simulate the small image perturbations due to saccades.
UnFlow is an unsupervised deep optical flow estimator. Currently the library supports using it to evaluate illusory patterns with pre-generated data. The intent is to extend the network to accept multiple frames, as would be necessary for enforcing a causality requirement.
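For illustration, a saccade-like perturbation can be produced by warping an image with a small random rotation and translation. The sketch below uses OpenCV directly; the function name and parameters are hypothetical and do not reflect the package's actual API (see rotation_translation_image_warp.py below).

```python
import cv2
import numpy as np

def perturb_image(image, max_rotation_deg=1.0, max_translation_px=2.0, rng=None):
    """Warp an image by a small random rotation and translation.

    Illustrative sketch only; the package's own warping utilities
    may expose a different interface.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]

    # Rotate about the image center by a small random angle.
    angle = rng.uniform(-max_rotation_deg, max_rotation_deg)
    warp = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)

    # Add a small random translation on top of the rotation.
    warp[:, 2] += rng.uniform(-max_translation_px, max_translation_px, size=2)

    return cv2.warpAffine(image, warp, (w, h))
```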
./examples

- `rotation_warp_image.py`: Demonstrates continuously warping an image by a rotation, along with the plotting utilities
- `translation_warp_image.py`: Demonstrates continuously warping an image by a translation, along with the plotting utilities
- `test_unflow.py`: Demonstrates the wrapper around UnFlow, making it possible to evaluate on custom datasets

./motion_illusions

- `evaluate_unflow.py`: Run UnFlow on a batch of images and return the results
- `generic_unflow_input.py`: A data loader object for interfacing with the UnFlow library
- `rotation_translation_image_warp.py`: Utilities for warping an image based on a rotation and translation

./motion_illusions/utils

- `flow_plot.py`: Convert optical flow into an image (see the sketch after this list)
- `image_tile.py`: Easily compose multiple images into a single image for display, useful for debugging
- `rate_limit.py`: Rate limit a process by wall clock time (useful for visualization)
- `signal_plot.py`: Produce an image plotting multiple real-time signals
- `time_iterator.py`: Iterate over time using a wall clock or a fixed timestep
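Converting a flow field to a color image is commonly done by mapping flow direction to hue and flow magnitude to brightness. The sketch below shows that standard HSV encoding; it mirrors the idea behind `flow_plot.py` but is not necessarily its exact implementation.

```python
import cv2
import numpy as np

def flow_to_image(flow):
    """Map a dense optical flow field of shape (H, W, 2) to a BGR image.

    Hue encodes flow direction, value encodes flow magnitude.
    """
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = (angle * 180 / np.pi / 2).astype(np.uint8)  # hue: direction
    hsv[..., 1] = 255                                         # full saturation
    hsv[..., 2] = cv2.normalize(magnitude, None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)

    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```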
- Python >= 3.5
- CUDA enabled GPU
- See requirements.txt for Python dependencies
Since training neural networks is the goal, it is assumed a GPU is available.
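As a quick sanity check that TensorFlow can see the GPU, the following minimal sketch should print True (assuming the TensorFlow 1.x build that UnFlow requires):

```python
import tensorflow as tf

# Should print True on a correctly configured machine; if it prints
# False, check the CUDA installation before training.
print(tf.test.is_gpu_available())
```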
Clone the repository and initialize its submodules:
git clone https://github.com/prgumd/motion_illusions.git
cd motion_illusions
git submodule update --init
Download the pretrained model weights from the UnFlow authors and place the zip in the root of the project:
mkdir -p unflow_logs/ex
unzip -d unflow_logs/ex unflow_models.zip
A Dockerfile is provided to build an image with all packages installed. This is probably the easiest setup method since UnFlow needs old versions of CUDA and TensorFlow.
Some of the code assumes it can display GUIs using an X server. A `docker run` command is provided for Ubuntu host systems that configures the container for X forwarding.
Make sure docker is installed with access to the GPU.
Run the following to build the docker image.
cd motion_illusions
docker build --tag motion_illusions:1.0 .
To launch the container with GPUs available (no X forwarding):

docker run --gpus all --rm -it -v $(pwd):/workspace motion_illusions:1.0 bash
To launch the container with X forwarding, GPUs available, and the latest version of this package:
cd motion_illusions
docker run -u $(id -u):$(id -g) -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --ipc host --gpus all --rm -it -v $(pwd):/workdir motion_illusions:1.0 bash
The container will complain about having no user due to overriding the uid and gid for X forwarding. This is OK for our purposes; it is the simplest way to handle X forwarding without compromising security. More details are available here.
Make sure the following is installed on the host.
- CUDA 9.0
- Python >= 3.5
- virtualenv (`pip3 install virtualenv`)
Run the following steps to create a virtual environment, activate it, and install packages.
cd motion_illusions
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
pip install -e .
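After installation, a minimal smoke test is to import the package from inside the activated virtual environment:

```python
# Quick smoke test: the package should import cleanly once
# `pip install -e .` has run inside the activated virtualenv.
import motion_illusions
print(motion_illusions.__file__)
```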
Developed by Cornelia Fermuller, Chethan Parameshwara, and Levi Burner with the Perception and Robotics Group at the University of Maryland.