In this release, we provide an open source implementation of the FlyNet supervised learning experiments in "A Hybrid Compact Neural Architecture for Visual Place Recognition", published in IEEE Robotics and Automation Letters (RA-L) (DOI 10.1109/LRA.2020.2967324). A preprint version is available at https://arxiv.org/abs/1910.06840.
Project page: https://mchancan.github.io/projects/FlyNet
- (Mar 1, 2021) New related research on Sequential Place Learning, which addresses the main limitations of CANNs, is now available!
- (Nov 22, 2020) A demo of the CANN component has been released here.
The dataset used to run this code, a small subset of the Nordland dataset, can be downloaded from here. The code can also easily be adapted to run on other, much larger datasets.
This code was tested on PyTorch v1.0 and Python 3.6.
We provide a demo of FlyNet on the Nordland dataset. After downloading the dataset, extract it into the `dataset/` directory and run:

`python main.py`
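For context, the FlyNet Algorithm (FNA) layer described in the paper is inspired by the fly brain's olfactory circuit: a sparse, binary random projection of the input followed by a winner-take-all step that keeps only the most active units. The sketch below is illustrative only, not the repository's implementation; the function name, dimensions, densities, and winner-take-all ratio are all assumptions chosen for demonstration:

```python
import numpy as np

def flynet_layer(x, num_out=64, proj_density=0.1, wta_ratio=0.5, seed=0):
    """Sketch of a FlyNet-style layer: sparse random projection + winner-take-all.

    All parameter values here are illustrative assumptions, not the
    paper's exact settings. `x` is a flattened 1-D input (e.g. an image).
    """
    rng = np.random.default_rng(seed)
    # Sparse binary projection: each output unit connects to a random
    # subset (proj_density) of the input pixels.
    W = (rng.random((num_out, x.size)) < proj_density).astype(float)
    y = W @ x
    # Winner-take-all: keep the top wta_ratio fraction of units active
    # and zero the rest, yielding a compact binary code.
    k = int(wta_ratio * num_out)
    winners = np.argsort(y)[-k:]
    out = np.zeros(num_out)
    out[winners] = 1.0
    return out

# Example: encode a random 32x64 grayscale image into a 64-unit binary code.
img = np.random.default_rng(1).random((32, 64)).ravel()
code = flynet_layer(img)
print(code.shape, int(code.sum()))  # 64-dim code with exactly k active units
```

In the full architecture, a compact code like this would then be fed to the CANN component for sequence-based place matching.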
FlyNet itself is released under the MIT License for academic purposes (refer to the LICENSE file for details). For commercial usage, please contact us via mchancanl@uni.pe
If you find this project useful for your research, please cite it using the following BibTeX entry:
@article{chancan2020hybrid,
author = {M. {Chanc\'an} and L. {Hernandez-Nunez} and A. {Narendra} and A. B. {Barron} and M. {Milford}},
journal = {IEEE Robotics and Automation Letters},
title = {A Hybrid Compact Neural Architecture for Visual Place Recognition},
year = {2020},
volume = {5},
number = {2},
pages = {993--1000},
keywords = {Biomimetics;localization;visual-based navigation},
doi = {10.1109/LRA.2020.2967324},
ISSN = {2377-3774},
month = {April}
}