This is a slightly simplified implementation of Kim's "Convolutional Neural Networks for Sentence Classification" paper in MXNet.
Recently, I have been learning MXNet for Natural Language Processing (NLP). I followed the nice blog post "Implementing a CNN for Text Classification in TensorFlow" and reimplemented it with the MXNet framework. The data preprocessing code and corpus are borrowed directly from the original author's cnn-text-classification-tf.
I use the same pretrained word2vec, GoogleNews-vectors-negative300.bin, as in Kim's paper. However, I do not implement the L2-norm constraint on the weights of the penultimate layer; instead, I provide L2-normalization of the gradients. I got a best dev accuracy of 80.1%, close to the 81% reported in the original paper.
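For reference, the gradient L2-normalization mentioned above can be sketched roughly as follows; the function and variable names (`rescale_grads`, `max_grad_norm`) are illustrative assumptions, not the exact code in this repository:

```python
import math
import mxnet as mx

max_grad_norm = 5.0  # assumed clipping threshold, not the repo's exact value

def rescale_grads(grad_arrays, max_norm):
    """Rescale gradients in place so their global L2 norm does not exceed max_norm."""
    total = 0.0
    for grad in grad_arrays:
        l2 = mx.nd.norm(grad).asscalar()  # L2 norm of this gradient array
        total += l2 * l2
    global_norm = math.sqrt(total)
    if global_norm > max_norm:
        ratio = max_norm / global_norm
        for grad in grad_arrays:
            grad[:] = grad * ratio  # rescale in place before the optimizer update
    return global_norm
```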
Please download the corpus from the cnn-text-classification-tf repository. :)
'data/rt.vec' was trained on this corpus with the word2vec tool. Since the corpus is small (about 10K sentences), I recommend using the GoogleNews word2vec instead, which gives better performance. When using the GoogleNews word2vec, this code loads it with gensim.
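A minimal sketch of loading the GoogleNews vectors with gensim; the local file path is an assumption, and recent gensim versions expose the loader through KeyedVectors:

```python
# Assumes gensim is installed and the GoogleNews binary has been downloaded
# to the current directory (the path below is an assumption).
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)
print(w2v['sentence'].shape)  # (300,) -- one 300-dimensional vector per word
```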
If I made any mistakes in the CNN implementation with MXNet, please correct me.