
MAT

The official implementation of the Molecule Attention Transformer. The paper is available on arXiv.

Figure: MAT architecture.
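
MAT's central modification to the Transformer, as described in the paper, is molecule self-attention: the usual softmax attention matrix is mixed with an inter-atomic distance term and the molecular adjacency matrix. Below is a minimal single-head NumPy sketch of that idea; the λ mixing weights, the row-softmax of negative distances as g(D), and all names are illustrative choices, not the repository's API (transformer.py is the reference implementation).

```python
import numpy as np

def row_softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def molecule_attention(Q, K, V, dist, adj, lambdas=(0.34, 0.33, 0.33)):
    """Single-head molecule self-attention sketch:
    weights = la * softmax(Q K^T / sqrt(d)) + ld * g(D) + lg * A,
    where D is the inter-atomic distance matrix and A the adjacency matrix.
    """
    la, ld, lg = lambdas
    attn = row_softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    g = row_softmax(-dist)  # one possible choice of g(D)
    weights = la * attn + ld * g + lg * adj
    return weights @ V

# Toy "molecule": 3 atoms with 8-dimensional features.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 8)) for _ in range(3))
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])   # a 3-atom chain
dist = np.array([[0., 1., 2.],
                 [1., 0., 1.],
                 [2., 1., 0.]])  # illustrative pairwise distances
out = molecule_attention(Q, K, V, dist, adj)  # shape (3, 8)
```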

Code

  • EXAMPLE.ipynb – a Jupyter notebook with an example of loading pretrained weights into MAT,
  • transformer.py – the MAT class implementation,
  • utils.py – utility functions.

More functionality will be available soon!

Pretrained weights

Pretrained weights are available here.
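
EXAMPLE.ipynb shows the actual workflow for loading the pretrained weights into MAT. As a self-contained sketch of the same PyTorch pattern (save a state dict, reload it, switch to eval mode), here is a stand-in model; the class, dimensions, and in-memory buffer below are illustrative, not the repository's API:

```python
import io
import torch
import torch.nn as nn

# Hypothetical stand-in for the MAT model; in the repo the real model
# comes from transformer.py (see EXAMPLE.ipynb for the actual workflow).
class TinyModel(nn.Module):
    def __init__(self, d_model=16):
        super().__init__()
        self.proj = nn.Linear(d_model, 1)

    def forward(self, x):
        return self.proj(x)

model = TinyModel()

# Round-trip a state dict through a buffer, as one would with a
# downloaded MAT checkpoint file (torch.load + load_state_dict).
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)
state = torch.load(buf, map_location="cpu")
model.load_state_dict(state)
model.eval()

out = model(torch.zeros(2, 16))  # shape (2, 1)
```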

Results

In this section we present the average rank across the 7 datasets from our benchmark.

  • Results for a hyperparameter search budget of 500 combinations.
  • Results for a hyperparameter search budget of 150 combinations.
  • Results for the pretrained model.

Requirements

  • PyTorch 1.4

Acknowledgments

The Transformer implementation is inspired by The Annotated Transformer.