The official implementation of the Molecule Attention Transformer (ArXiv).
- `EXAMPLE.ipynb` - Jupyter notebook with an example of loading pretrained weights into MAT.
- `transformer.py` - file with the MAT class implementation.
- `utils.py` - file with utility functions.
More functionality will be available soon!
Pretrained weights are available here.
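A minimal sketch of restoring pretrained weights into a PyTorch model with `state_dict` / `load_state_dict` (the `nn.Linear` stand-in and the checkpoint filename are placeholders, not the actual MAT model or weights file; see `EXAMPLE.ipynb` for the real loading code):

```python
import torch
import torch.nn as nn

# Placeholder model standing in for MAT; the real class lives in transformer.py.
model = nn.Linear(16, 4)

# Save the checkpoint's state dict, then restore it into a fresh instance.
torch.save(model.state_dict(), "pretrained_weights.pt")

restored = nn.Linear(16, 4)
state_dict = torch.load("pretrained_weights.pt", map_location="cpu")
restored.load_state_dict(state_dict)
restored.eval()  # switch to inference mode before using pretrained weights
```

The same pattern applies to any `nn.Module`: as long as the fresh instance has the same architecture (and hence the same state-dict keys and tensor shapes) as the one that produced the checkpoint, `load_state_dict` copies the parameters over.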
In this section we present the average rank across the 7 datasets from our benchmark.
Results for a hyperparameter search budget of 500 combinations.
Results for a hyperparameter search budget of 150 combinations.
Requires PyTorch 1.4.
The Transformer implementation is inspired by The Annotated Transformer.