This is the common repo for PyTorch deep learning models by the Data Systems Group at the University of Waterloo.
For sentiment analysis, topic classification, etc.:
- Kim CNN: Baseline convolutional neural network for sentence classification (Kim, EMNLP 2014)
- Conv-RNN: Convolutional RNN (Wang et al., KDD 2017)
- HAN: Hierarchical Attention Networks (Yang et al., NAACL 2016)
- LSTM-Reg: Standard LSTM with Regularization (Merity et al.)
- XML-CNN: CNNs for Extreme Multi-label Text Classification (Liu et al., SIGIR 2017)
- Char-CNN: Character-level Convolutional Network (Zhang et al., NIPS 2015)
For paraphrase detection, question answering, etc.:
- SM-CNN: Siamese CNN for ranking texts (Severyn and Moschitti, SIGIR 2015)
- MP-CNN: Multi-Perspective CNN (He et al., EMNLP 2015)
- NCE: Noise-Contrastive Estimation for answer selection, applied to SM-CNN and MP-CNN (Rao et al., CIKM 2016)
- VDPWI: Very-Deep Pairwise Word Interaction NNs for modeling textual similarity (He and Lin, NAACL 2016)
- IDF Baseline: IDF overlap between question and candidate answers
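To illustrate the IDF baseline above, here is a minimal sketch of IDF-weighted term overlap between a question and candidate answers; the helper names and the toy tokenization are illustrative, not taken from this repo.

```python
import math
from collections import Counter

def idf_weights(docs):
    """Compute IDF for each term over a list of tokenized documents."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each term once per doc
    n = len(docs)
    return {term: math.log(n / count) for term, count in df.items()}

def idf_overlap(question, answer, idf):
    """Score a candidate answer by summing the IDF of terms it shares with the question."""
    shared = set(question) & set(answer)
    return sum(idf.get(term, 0.0) for term in shared)
```

Candidates that share rarer (higher-IDF) terms with the question score higher, while terms that appear in every document contribute nothing.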
Each model directory has a README.md with further details.
If you are an internal Castor contributor using GPU machines in the lab, follow the instructions here.
Castor is designed for Python 3.6 and PyTorch 0.4. PyTorch recommends Anaconda for managing your environment. We'd recommend creating a custom environment as follows:
$ conda create --name castor python=3.6
$ source activate castor
And installing the packages as follows:
$ conda install pytorch torchvision -c pytorch
Other Python packages we use can be installed via pip:
$ pip install -r requirements.txt
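To verify that the environment has everything it needs before running the models, one option is a small import check. `missing_packages` is a hypothetical helper, not part of Castor, and the package list below is illustrative; consult requirements.txt for the real dependencies.

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported in this environment."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# Assumed requirements for illustration only.
print(missing_packages(["torch", "torchvision", "nltk"]))
```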
The code depends on data from NLTK (e.g., stopwords), so you'll need to download it. Run the Python interpreter and type:

>>> import nltk
>>> nltk.download()

This opens an interactive downloader; alternatively, fetch a specific package directly, e.g., nltk.download('stopwords').
Finally, run the following inside the utils directory to build the trec_eval tool for evaluating certain datasets.
$ ./get_trec_eval.sh
To fully take advantage of the code here, clone these two companion repos:

- Castor-data: embeddings, datasets, etc.
- Castor-models: pre-trained models
Organize your directory structure as follows:
.
├── Castor
├── Castor-data
└── Castor-models
For example (using HTTPS):
$ git clone https://github.com/castorini/Castor.git
$ git clone https://git.uwaterloo.ca/jimmylin/Castor-data.git
$ git clone https://git.uwaterloo.ca/jimmylin/Castor-models.git
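As a quick sanity check that the three repos sit side by side as shown above, something like the following could be run from the parent directory. `check_layout` is a hypothetical helper, not part of Castor.

```python
from pathlib import Path

def check_layout(root="."):
    """Return the expected sibling repos that are missing under root."""
    expected = ["Castor", "Castor-data", "Castor-models"]
    return [name for name in expected if not (Path(root) / name).is_dir()]
```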
After cloning the Castor-data repo, you need to unzip the embeddings and run the data pre-processing scripts. You can either follow the instructions under each dataset and embedding directory separately, or just run the following script in Castor-data to do all of the steps for you:
$ ./setup.sh