Model architectures for learning document representations: autoencoder, sequence-to-sequence LSTMs with attention, BERT, and SciBERT.
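As a rough illustration of the transformer-based variants, the sketch below pulls a document vector from SciBERT with the Hugging Face `transformers` library; the `allenai/scibert_scivocab_uncased` checkpoint and the masked mean pooling are assumptions for the example, not necessarily the setup used in this repository.

```python
# Minimal sketch: document embeddings from SciBERT (pooling choice is an assumption).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Return one vector per document via masked mean pooling of the last hidden state."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state        # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)         # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (1, 768)

vector = embed("Transformer encoders learn contextual token representations.")
print(vector.shape)  # torch.Size([1, 768])
```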
Under experimentation/ongoing:
- Hierarchical Attention Networks (https://arxiv.org/pdf/1602.06023.pdf); a sketch of the attention layers follows this list
- Bilateral multi-perspective matching of natural language sentences (https://arxiv.org/pdf/1702.03814.pdf)
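The following is a minimal sketch of the word-level and sentence-level attention used in Hierarchical Attention Networks; the GRU encoders, layer sizes, and pooling layer names are assumptions for illustration, not settings from this repository.

```python
# Minimal HAN-style encoder sketch in PyTorch (sizes and GRU choice are assumptions).
import torch
import torch.nn as nn


class AttentionPool(nn.Module):
    """Additive attention that collapses a sequence of hidden states into one vector."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden_dim)
        scores = self.context(torch.tanh(self.proj(h)))   # (batch, seq_len, 1)
        weights = torch.softmax(scores, dim=1)             # attention over positions
        return (weights * h).sum(dim=1)                     # (batch, hidden_dim)


class HierarchicalAttentionEncoder(nn.Module):
    """Word-level then sentence-level encoding, each followed by attention pooling."""

    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden_dim: int = 50):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.word_gru = nn.GRU(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.word_attn = AttentionPool(2 * hidden_dim)
        self.sent_gru = nn.GRU(2 * hidden_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.sent_attn = AttentionPool(2 * hidden_dim)

    def forward(self, docs: torch.Tensor) -> torch.Tensor:
        # docs: (batch, n_sents, n_words) of token ids
        batch, n_sents, n_words = docs.shape
        words = self.embedding(docs.view(batch * n_sents, n_words))
        word_h, _ = self.word_gru(words)
        sent_vecs = self.word_attn(word_h).view(batch, n_sents, -1)
        sent_h, _ = self.sent_gru(sent_vecs)
        return self.sent_attn(sent_h)                       # document vector
```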