ArturoDeza/NeuroFovea

Code to generate visual metamers via foveated feed-forward style transfer (ICLR 2019)

This code/repo has moved to PyTorch:

https://github.com/ArturoDeza/NeuroFovea_PyTorch

Towards Metamerism via Foveated Style Transfer

This repository contains the code to reproduce the metamers used in the paper (Deza, Jonnalagadda & Eckstein, ICLR 2019). Link to the paper and OpenReview discussion: https://openreview.net/forum?id=BJzbG20cFQ

This code has been tested successfully on CUDA version 8.0 (Ubuntu 14.04 and 16.04) and CUDA version 10.0 (Ubuntu 18.04).

The code that implements our model builds mainly on existing feed-forward style transfer code.

What is a Metamer?

Metamers are a set of stimuli that are physically different but perceptually indistinguishable from each other. See below for an example.

[Animated demo: Input vs. Metamer]

When maintaining center fixation on the orange dot, the two images that are flipped back and forth should be perceptually indistinguishable from each other even though they are physically different (the differences are strong in the periphery but minimal in the fovea).

Rendering Metamers by varying receptive field size

[Figure: Reference vs. Synthesis metamers (V1) | Synthesis vs. Synthesis metamers (V2)]
Left: a metamer that is metameric to its reference image. The rate of growth of the receptive fields of the rendered metamer resembles the receptive field sizes of neurons in V1. Right: two images that are heavily distorted in the visual periphery; they are not metameric to the reference image, but are metameric to each other (each perturbed with a different noise sample). The rate of growth of these receptive fields corresponds to the receptive field sizes of V2 neurons, where it is hypothesized that the ventral stream is sensitive to texture.

As in our previous demo, the metameric effect will only work properly if one fixates on the orange dot at the center of the image. In the paper we provide more details on how we psychophysically tested this phenomenon using an eye-tracker to control for center fixation, viewing distance, display time, and the visual angle of the stimuli. We tested our model on grayscale images, and have extended the model in this code release to color images.
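To make the two rates of growth concrete, here is a minimal illustrative sketch in Lua (not code from this repo) that converts the scale parameter s into an approximate pooling-region size, assuming the paper's definition of scale (receptive field size over retinal eccentricity; see the example section below) and the stimulus geometry of 26 x 26 degrees rendered at 512 x 512 pixels:

-- Illustrative only: approximate pooling-region (receptive field) size
-- as a function of eccentricity, assuming s = RF size / eccentricity.
local px_per_deg = 512 / 26  -- stimulus: 26 x 26 deg at 512 x 512 px

local function rf_size_px(ecc_deg, s)
  -- RF diameter in pixels at a given eccentricity (degrees)
  return s * ecc_deg * px_per_deg
end

for _, s in ipairs({0.25, 0.5}) do  -- V1-like vs V2-like rates of growth
  print(string.format("s = %.2f", s))
  for ecc = 2, 12, 2 do
    print(string.format("  %2d deg eccentricity -> RF ~ %5.1f px", ecc, rf_size_px(ecc, s)))
  end
end

As the sketch shows, doubling s from 0.25 (V1-like) to 0.5 (V2-like) doubles the pooling-region size at every eccentricity, which is why the V2 renders are much more heavily distorted in the periphery.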

Installation and prerequisites:

The code was developed with CUDA 8.0 on Ubuntu 16.04 and has been tested with both CUDA 8.0 and CUDA 10.1 on Ubuntu 18.04 (though behavior may differ slightly between CUDA 8.0 and 10.1). You will need to install:

CUDA 10.1

CUDNN 7.5.1

Torch (Lua)


Updated Installation Instructions as of November 1st, 2020:

All commands should be run in the same terminal:

Install OpenBLAS

git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS
make NO_AFFINITY=1 USE_OPENMP=1
sudo make install

Export a CMAKE_LIBRARY_PATH that includes OpenBLAS (the export is needed so the Torch build below can see it):

export CMAKE_LIBRARY_PATH=/opt/OpenBLAS/include:/opt/OpenBLAS/lib:$CMAKE_LIBRARY_PATH

Install Torch (old school with Lua):

git clone https://github.com/nagadomi/distro.git ~/torch --recursive
cd ~/torch
./install-deps
./clean.sh
./update.sh
. ~/torch/install/bin/torch-activate

This last command activates the Torch installation. You may need to re-run it whenever you open a new terminal, or you can simply append it to your shell startup file.
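For example (assuming bash and the default install path from above):

echo '. ~/torch/install/bin/torch-activate' >> ~/.bashrc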

Install unsup package:

luarocks install unsup

The full dataset, in both grayscale and color, is also available here for future work on metamers; it can be found in the Dataset/ folder.

To complete the installation, please run:

$ bash download_models_and_stimuli.sh

Example code:

Generate a V1 metamer for the 512x512 image 1_color.png with a center fixation, specified by the rate of growth of the receptive field, s = 0.25. Note: the approximate rendering time for a metamer is around one second.

$ th NeuroFovea.lua -image Dataset/1_color.png -scale 0.25 -refinement 1 -color 1

To create a V2 metamer, change the scale from 0.25 to 0.5. Scale is the ratio of receptive field size to the retinal eccentricity of that receptive field, so the values are only meaningful given the size of the stimuli (26 x 26 degrees of visual angle rendered at 512 x 512 pixels). To compute the reference image, set the reference flag to 1.
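Concretely, the V2 version of the earlier example changes only the scale argument (the exact name of the reference flag can be checked in the argument parser of NeuroFovea.lua):

$ th NeuroFovea.lua -image Dataset/1_color.png -scale 0.5 -refinement 1 -color 1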

Please read our paper to learn more about visual metamerism: https://openreview.net/forum?id=BJzbG20cFQ

We hope this code and our paper can help researchers, scientists and engineers improve the use and design of metamer models that have potentially exciting applications in both computer vision and visual neuroscience.

This code is free to use for research purposes; if you use or modify it in any way, please consider citing:

@inproceedings{deza2018towards,
  title={Towards Metamerism via Foveated Style Transfer},
  author={Arturo Deza and Aditya Jonnalagadda and Miguel P. Eckstein},
  booktitle={International Conference on Learning Representations},
  year={2019},
  url={https://openreview.net/forum?id=BJzbG20cFQ},
}

Other inquiries: deza@mit.edu
