initial commit. added vision and video content
Bernd Illing committed Oct 22, 2021
0 parents commit d3622f0
Showing 54 changed files with 6,446 additions and 0 deletions.
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2021 Bernd Illing

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
106 changes: 106 additions & 0 deletions README.md
@@ -0,0 +1,106 @@

<!-- REPLACE WITH REAL DOI [![DOI](https://zenodo.org/badge/188856619.svg)](https://zenodo.org/badge/latestdoi/188856619) -->

This is the code for the publication:

B. Illing, J. Ventura, G. Bellec & W. Gerstner
[*Local plasticity rules can learn deep representations using self-supervised contrastive predictions*](https://arxiv.org/abs/2010.08262), accepted at NeurIPS 2021

Contact:
[bernd.illing@epfl.ch](mailto:bernd.illing@epfl.ch)

# Structure of the code

The code is divided into three independent sections, corresponding to the three applications of CLAPP:

* vision
* video
* audio

Our implementation requires the following general dependencies:

* python 3
* conda

Each section comes with its own dependencies handled by conda environments, as explained in the respective sections below.

# Vision

The implementation of the CLAPP vision experiments is based on Sindy Löwe's code of the [Greedy InfoMax model](https://github.com/loeweX/Greedy_InfoMax).

## Setup

To set up the conda environment, simply run

```bash
bash ./vision/setup_dependencies.sh
```

To activate and deactivate the created conda environment, run

```bash
conda activate infomax
conda deactivate
```

respectively. The environment name `infomax`, as well as the name of our Python module `GreedyInfoMax`, are legacy names inherited from the GIM code.

## Usage

We included three sample scripts to run CLAPP, CLAPP-s (synchronous positive and negative updates; version with weight symmetry in $W^{pred}$) and Hinge Loss CPC (end-to-end version of CLAPP). To run, e.g., the Hinge Loss CPC simulations (model training + evaluation), run:

```bash
bash ./vision/scripts/vision_traineval_HingeLossCPC.sh
```

The code includes many (experimental) versions of CLAPP as command-line options that are not used or mentioned in the paper. To view all command-line options for model training, run:

```bash
cd vision
python -m GreedyInfoMax.vision.main_vision --help
```

Training generally uses auto-differentiation provided by `pytorch`. We checked that the resulting updates are equivalent to evaluating the CLAPP learning rules for $W$ and $W^{pred}$, Equations (6)-(8). The code used for this sanity check can be found in `./vision/GreedyInfoMax/vision/compare_updates.py`.
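
If you want to set up a similar check in your own code, the snippet below is a minimal sketch of the general pattern only, not of `compare_updates.py` itself or of the exact CLAPP equations: the autograd gradient of a hinge loss on a bilinear prediction score is compared against a manually derived gradient. The tensors `z_t`, `z_future` and `W_pred` are illustrative placeholders.

```python
# Sketch: compare an autograd gradient with a manually derived update
# (illustrative hinge loss on a bilinear score, not the actual CLAPP rules).
import torch

torch.manual_seed(0)
batch, dim = 8, 16
z_t = torch.randn(batch, dim)        # placeholder "current" representations
z_future = torch.randn(batch, dim)   # placeholder positive (future) samples
W_pred = torch.randn(dim, dim, requires_grad=True)

# hinge loss on the bilinear prediction score z_future^T W_pred z_t
score = (z_future * (z_t @ W_pred.t())).sum(dim=1)
loss = torch.clamp(1.0 - score, min=0.0).mean()
loss.backward()

# manually derived gradient of the same loss:
# -(1/batch) * sum over samples inside the hinge of outer(z_future, z_t)
active = (1.0 - score.detach() > 0).float().unsqueeze(1)
manual_grad = -(active * z_future).t() @ z_t / batch

print(torch.allclose(W_pred.grad, manual_grad, atol=1e-5))  # expect True
```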


# Video

The implementation of the CLAPP video experiments was inspired by Tengda Han's code for [Dense Predictive Coding](https://github.com/TengdaHan/DPC).

## Setup

The setup of the conda environment is described in `./video/env_setup.txt`. To activate and deactivate the created conda environment `pdm`, run

```bash
conda activate pdm
conda deactivate
```

respectively.

## Usage

The basic simulations described in the paper can be replicated using the commands listed in `./video/commands.txt`.


# Audio

The implementation of the CLAPP audio experiments is based on Sindy Löwe's code of the [Greedy InfoMax model](https://github.com/loeweX/Greedy_InfoMax).
<!-- GUILLAUME: Your instructions go here -->

## Setup

## Usage

# Cite

Please cite our paper if you use this code in your own work:

```
@article{illing2020local,
  title={Local plasticity rules can learn deep representations using self-supervised contrastive predictions},
  author={Illing, Bernd and Ventura, Jean and Bellec, Guillaume and Gerstner, Wulfram},
  journal={arXiv preprint arXiv:2010.08262},
  year={2020}
}
```
6 changes: 6 additions & 0 deletions video/.gitignore
@@ -0,0 +1,6 @@
__pycache__/
*.tar
*.rar
UCF101/videos/
UCF101/frame/
.DS_Store
13 changes: 13 additions & 0 deletions video/HowTo_tensorboard_files.txt
@@ -0,0 +1,13 @@
$ ipython

>> from tensorboard.backend.event_processing import event_accumulator
>> ea = event_accumulator.EventAccumulator('path+/events.out.tfevents.xx.xx')
(e.g. ea = event_accumulator.EventAccumulator('./temp_VGG_CLAPP_test/classification_all_layers/val/events.out.tfevents.1614370774.illing-clapp-video') )
>> ea.Reload()
>> ea.Tags()

-> ready to access

e.g.
>> ea.Scalars('global/accuracy_4_top_1')[-10:]
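
The same steps can also be run as a small standalone script instead of an interactive session. Below is a minimal sketch following the commands above; the event-file path and scalar tag are taken from the example and should be adapted to your own run:

```python
# Standalone version of the ipython session above: load a tensorboard event
# file and print the last few values of one scalar tag.
from tensorboard.backend.event_processing import event_accumulator

# placeholders from the example above; adapt to your own run
EVENT_FILE = './temp_VGG_CLAPP_test/classification_all_layers/val/events.out.tfevents.1614370774.illing-clapp-video'
TAG = 'global/accuracy_4_top_1'

ea = event_accumulator.EventAccumulator(EVENT_FILE)
ea.Reload()          # read the event file from disk
print(ea.Tags())     # available scalar / image / histogram tags

# each entry is a ScalarEvent with wall_time, step and value fields
for event in ea.Scalars(TAG)[-10:]:
    print(event.step, event.value)
```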

25 changes: 25 additions & 0 deletions video/UCF101/splits_classification/convert_paths.py
@@ -0,0 +1,25 @@
# -*- coding: utf-8 -*-
"""
Created on Thu Dec 10 12:33:53 2020
@author: Jean
"""

import os

if __name__ == '__main__':

    # list the split files in the current directory
    # ([1:] skips the first directory entry, presumably this script itself)
    files = os.listdir('./')[1:]
    print(files)

    for file_name in files:
        with open(file_name, 'r') as stream:
            paths = stream.readlines()
        print(paths[0])
        # str.replace returns a new string, so collect the converted paths
        paths = [path.replace('\\', '/') for path in paths]
        print(paths[0])
        with open(file_name, 'w') as stream:
            stream.writelines(paths)

