Created by Panos Achlioptas, Judy Fan, Robert X.D. Hawkins, Noah D. Goodman, Leonidas J. Guibas.
This work is based on our ICCV 2019 paper. There, we proposed speaker and listener neural models that reason about and differentiate objects according to their shape via language (hence the term shape-glot). These models can operate on 2D images and/or 3D point clouds, and they learn natural properties of shapes, including the part-based compositionality of 3D objects, from language alone. This makes them remarkably robust and enables a plethora of zero-shot transfer-learning applications. You can check our project's webpage for a quick introduction and example results.
If you are interested in ShapeGlot, it is also worth looking at the newer, related ShapeTalk dataset, along with the following technical papers on referential language and 3D shapes, which also build on and discuss ShapeGlot:
- ShapeTalk, CVPR23: Using discriminative neural listeners to edit 3D shapes via language.
- PartGlot, CVPR22: Discovering the part structure of 3D shapes automatically via ShapeGlot's referential language.
- LADIS, EMNLP22: Disentangling 3D shape edits when using ShapeGlot and ShapeTalk.
Main Requirements:
- Python 3.x (with numpy, pandas, matplotlib, nltk)
- PyTorch (version 1.0+)
Our code has been tested with Python 3.6.9, PyTorch 1.3.1, and CUDA 10.0 on Ubuntu 14.04.
Clone the source code of this repository and pip install it inside your (virtual) environment.
git clone https://github.com/optas/shapeglot
cd shapeglot
pip install -e .
We provide 78,782 utterances, each referring to a ShapeNet chair that was contrasted against two distractor chairs via the reference game described in our accompanying paper (we term this dataset ChairsInContext). We further provide the data used in the zero-shot experiments: 300 images of real-world chairs, 1,200 referential utterances for ShapeNet lamps, tables, and sofas, and 400 utterances describing ModelNet beds. Lastly, we include image-based (VGG-16) and point-cloud-based (PC-AE) pretrained features for all ShapeNet chairs to facilitate the training of the neural speakers and listeners.
To download the data (~232 MB), please run the following commands. Note that you first need to accept the Terms of Use here. Upon review, we will email you the necessary link, which you need to place in the designated location of the download_data.sh file.
cd shapeglot/
./download_data.sh
The downloaded data will be stored in shapeglot/data.
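For a quick first look at the data, here is a minimal sketch of loading and tokenizing the ChairsInContext utterances with pandas and NLTK. The file name and column names below are illustrative assumptions, not the repository's actual layout; adapt them to the files that download_data.sh places under shapeglot/data (the notebook referenced below shows the canonical pipeline).

```python
import nltk
import pandas as pd

nltk.download('punkt', quiet=True)  # one-time download of NLTK's tokenizer models

# NOTE: hypothetical file and column names, for illustration only;
# adjust them to the actual files placed under shapeglot/data.
df = pd.read_csv('shapeglot/data/chairs_in_context.csv')

# Each row pairs one referring utterance with a target chair and two distractors.
print(df.head())

# Lower-case and tokenize every utterance.
df['tokens'] = df['utterance'].str.lower().apply(nltk.word_tokenize)

# Build a simple vocabulary (a real pipeline would also add <pad>/<unk> symbols).
vocab = {w for tokens in df['tokens'] for w in tokens}
print(f'{len(df)} utterances, vocabulary size: {len(vocab)}')
```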
To showcase the main functionalities of our paper, we prepared some simple, instructional notebooks.
- To tokenize, prepare, and visualize the ChairsInContext dataset, please check/run:
shapeglot/notebooks/prepare_chairs_in_context_data.ipynb
- To train a neural listener (only ~10 minutes on a single modern GPU; a minimal sketch of such a listener follows below):
shapeglot/notebooks/train_listener.ipynb
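If you just want the gist of the listener without opening the notebook, below is a minimal, self-contained PyTorch sketch loosely following the paper's setup: an LSTM encodes the utterance, each of the three candidate objects (represented by precomputed features such as the provided VGG-16 or PC-AE embeddings) is fused with the language code, and a softmax over the three resulting scores selects the referred target. All layer names and sizes here are illustrative assumptions, not the repository's exact architecture.

```python
import torch
import torch.nn as nn

class MinimalListener(nn.Module):
    """Scores three candidate objects against a referring utterance.

    Object features are assumed to be precomputed (e.g., the provided
    VGG-16 or PC-AE embeddings); all dimensions here are illustrative.
    """
    def __init__(self, vocab_size, word_dim=100, hidden_dim=100, feat_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.lstm = nn.LSTM(word_dim, hidden_dim, batch_first=True)
        self.object_proj = nn.Linear(feat_dim, hidden_dim)
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, tokens, object_feats):
        # tokens: (B, T) int64; object_feats: (B, 3, feat_dim)
        _, (h, _) = self.lstm(self.embedding(tokens))
        lang = h[-1]                            # (B, hidden_dim) utterance code
        objs = self.object_proj(object_feats)   # (B, 3, hidden_dim)
        joint = objs * lang.unsqueeze(1)        # fuse language with each object
        return self.scorer(joint).squeeze(-1)   # (B, 3) logits over candidates

# Toy forward/backward pass with random data, in lieu of the real dataset.
model = MinimalListener(vocab_size=1000)
tokens = torch.randint(1, 1000, (8, 12))        # a batch of 8 tokenized utterances
feats = torch.randn(8, 3, 128)                  # 3 candidate objects per trial
target = torch.randint(0, 3, (8,))              # index of the true referent
loss = nn.functional.cross_entropy(model(tokens, feats), target)
loss.backward()
print(f'cross-entropy loss: {loss.item():.3f}')
```

Training then just repeats this step over mini-batches of (utterance, triplet, target) examples with an optimizer such as Adam.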
Note: This repo contains limited functionality compared to what was presented in the paper, because our original (much heavier) implementation is in low-level TensorFlow and Python 2.7. If you need more functionality (e.g., pragmatic speakers) and you are OK with TensorFlow, please email panos@cs.stanford.edu.
If you find our work useful in your research, please consider citing:
@article{shapeglot,
title={ShapeGlot: Learning Language for Shape Differentiation},
author={Achlioptas, Panos and Fan, Judy and Hawkins, Robert X. D. and Goodman, Noah D. and Guibas, Leonidas J.},
journal={CoRR},
volume={abs/1905.02925},
year={2019}
}
The provided code is licensed under the terms of the MIT license (see LICENSE for details).