
Towards perspective-free object counting with deep learning

By Daniel Oñoro-Rubio and Roberto J. López-Sastre.

GRAM, University of Alcalá, Alcalá de Henares, Spain.

This is the official code repository of the work described in our ECCV 2016 paper.

This repository provides the implementation of the CCNN and Hydra models for object counting.

Cite us

If our code was useful for you, please cite us:

@inproceedings{onoro2016,
    Author = {O\~noro-Rubio, D. and L\'opez-Sastre, R.~J.},
    Title = {Towards perspective-free object counting with deep learning},
    Booktitle = {ECCV},
    Year = {2016}
}

License

The license information for this project is described in the file LICENSE.txt.

Contents

  1. Requirements: software
  2. Requirements: hardware
  3. Basic installation
  4. Demo
  5. How to reproduce the results of the paper
  6. Remarks
  7. Acknowledgements

Requirements: software

  1. Use a Linux distribution. We have developed and tested the code on Ubuntu.

  2. Python 2.7.

  3. Requirements for Caffe and pycaffe. Follow the Caffe installation instructions.

Note: Caffe must be built with support for Python layers!

# In your Makefile.config, make sure to have this line uncommented
WITH_PYTHON_LAYER := 1
  4. Python packages you need: cython, python-opencv, python-h5py, easydict, pillow (version >= 3.4.2). A possible installation command is sketched below.
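
For reference, this is one way to install those packages on Ubuntu; it is a sketch, not the only option, and the pip package names (h5py for python-h5py, pillow for the Pillow version pin) are our mapping of the names above:

    # Sketch: install the pure-Python dependencies with pip (Python 2.7)
    pip install cython h5py easydict "pillow>=3.4.2"
    # python-opencv is usually installed system-wide on Ubuntu
    sudo apt-get install python-opencv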

Requirements: hardware

The code runs on both CPU and GPU, but we strongly recommend using a GPU.

  1. For training, we recommend using a GPU with at least 3GB of memory.

  2. For testing, a GPU with 2GB of memory is enough.

Basic installation (sufficient for the demo)

  1. Be sure you have added the tools directory of your Caffe installation to your PATH:

    export PATH=<your_caffe_root_path>/build/tools:$PATH
  2. Be sure you have added your pycaffe build to your PYTHONPATH (a quick sanity check is sketched after this step):

    export PYTHONPATH=<your_caffe_root_path>/python:$PYTHONPATH
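
If both variables are set correctly, the following commands should succeed (a suggested check, not part of the repository):

    # The caffe binary should resolve inside your Caffe build,
    # and pycaffe should import without errors
    which caffe
    python -c "import caffe; print(caffe.__file__)"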

Demo

Here, we provide a demo for predicting the number of vehicles in the test images of the TRANCOS dataset, which was used in our ECCV paper.

This demo uses the CCNN model described in the paper and reproduces the results reported there.

To run the demo, these are the steps to follow (a combined sketch appears after the list):

  1. Download the TRANCOS dataset and extract it in the path data/TRANCOS.

  2. Download our TRANCOS CCNN pretrained model, following the instructions detailed here.

  3. Finally, to run the demo, simply execute the following command:

    ./tools/demo.sh
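
Putting the steps together, a possible end-to-end sequence is shown below; the archive name TRANCOS_v3.tar.gz is an assumption, so adapt it to the file you actually downloaded:

    # Extract the dataset so that it ends up under data/TRANCOS
    mkdir -p data
    tar -xzf TRANCOS_v3.tar.gz -C data/
    # ...place the pretrained CCNN model as instructed above, then:
    ./tools/demo.sh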

How to reproduce the results of the paper

We provide the scripts needed to train and test our models (CCNN and Hydra) on the datasets used in our ECCV paper. These are the steps to follow:

Download a dataset

To download and set up a dataset, we recommend following these instructions:

  • TRANCOS dataset: Download it using this direct link, and extract the file in the path data/TRANCOS.

  • UCSD dataset: go to the $PROJECT directory and run the following script:

     ./tools/get_ucsd.sh
  • UCF dataset: go to the $PROJECT directory and run the following script:

     ./tools/get_ucf.sh

Note: Make sure the folder "data/" does not already contain the dataset.

Download pre-trained models

All our pre-trained models can be downloaded following these instructions:

  1. TRANCOS Models
  2. UCSD Models
  3. UCF Models

Test the pretrained models

  1. Edit the corresponding script $PROJECT/experiments/scripts/DATASET_CHOSEN_test_pretrained.sh.

  2. Run the corresponding script (see the example below):

    ./experiments/scripts/DATASET_CHOSEN_test_pretrained.sh

Note that the pre-trained models will let you reproduce the results in our paper.
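
For example, for the TRANCOS dataset the call might look like this; the exact script name is an assumption, so check experiments/scripts/ for the real file names:

    ./experiments/scripts/trancos_test_pretrained.sh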

Train/test the chosen model

  1. Edit the launching script (e.g.: $PROJECT/experiments/scripts/DATASET_CHOSEN_train_test.sh).

  2. Place yourself in the $PROJECT folder and run the launching script (see the example below):

    ./experiments/scripts/DATASET_CHOSEN_train_test.sh
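
Again taking TRANCOS as an example (the script name is an assumption):

    cd $PROJECT
    ./experiments/scripts/trancos_train_test.sh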

Remarks

To make the code easier to distribute, this repository unifies some of the original modules and reimplements them in Python. Due to these changes in the underlying libraries, the results produced by this software may differ slightly from those reported in the paper.

Acknowledgements

This work was supported by the DGT projects SPIP2014-1468 and SPIP2015-01809, and by the MINECO project TEC2013-45183-R.
