This project frames Word-in-Context (WiC) disambiguation as a binary classification task: given a target word that appears in two different contexts, the model uses static word embeddings (Word2Vec and GloVe) to decide whether the word carries the same meaning in both.
We propose a Bi-LSTM architecture built on pre-trained word embeddings and compare it against a simpler feed-forward neural network baseline; a minimal sketch of the model is shown below. For further insights, read the dedicated report or the presentation slides (pages 2-6).
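
The snippet below is a minimal sketch of such a Bi-LSTM classifier in PyTorch, not the exact implementation in this repository: names like `BiLSTMWiCClassifier`, `hidden_dim`, the mean-pooling step, and the choice of concatenating the two contexts into one sequence are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMWiCClassifier(nn.Module):
    """Sketch: Bi-LSTM over frozen static embeddings, binary output."""

    def __init__(self, embedding_matrix: torch.Tensor, hidden_dim: int = 128):
        super().__init__()
        # Frozen static embeddings (Word2Vec or GloVe), loaded from a
        # pre-computed matrix of shape (vocab_size, embedding_dim).
        self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        self.lstm = nn.LSTM(
            input_size=embedding_matrix.size(1),
            hidden_size=hidden_dim,
            batch_first=True,
            bidirectional=True,
        )
        # Binary head: same meaning vs. different meaning.
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -- here the two contexts are
        # assumed concatenated into a single token sequence.
        embedded = self.embedding(token_ids)         # (batch, seq_len, emb_dim)
        outputs, _ = self.lstm(embedded)             # (batch, seq_len, 2 * hidden_dim)
        pooled = outputs.mean(dim=1)                 # mean-pool over time steps
        return self.classifier(pooled).squeeze(-1)  # raw logit per sentence pair

# Hypothetical usage with random data, just to show the shapes:
vocab_size, emb_dim = 10_000, 300
matrix = torch.randn(vocab_size, emb_dim)
model = BiLSTMWiCClassifier(matrix)
logits = model(torch.randint(0, vocab_size, (4, 32)))  # 4 pairs, 32 tokens each
probs = torch.sigmoid(logits)                          # train with BCEWithLogitsLoss
```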
You may download the original dataset from here.
For ready-to-go usage, simply run the notebook on Colab. If you would like to test it on your local machine, please follow the installation guide. You can find the pre-trained models in `model/pretrained`.
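
As a hedged example, a checkpoint from `model/pretrained` could be restored as below, assuming the files are PyTorch state dicts; the actual file names and serialization format in this repository may differ.

```python
import torch

# Rebuild the same architecture as the sketch above, then load weights.
model = BiLSTMWiCClassifier(matrix)
# "bilstm_wic.pt" is a hypothetical file name, not necessarily present.
state = torch.load("model/pretrained/bilstm_wic.pt", map_location="cpu")
model.load_state_dict(state)
model.eval()  # switch to inference mode before evaluating
```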