This repository contains the official PyTorch implementation of "EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations". We provide code for training the base model configuration on the OC20 S2EF-2M and S2EF-All+MD datasets.
Additionally, EquiformerV2 has been incorporated into the OCP repository and is used in the Open Catalyst demo.
See here for setting up the environment.
The OC20 S2EF dataset can be downloaded by following the instructions in the OCP GitHub repository.
For example, we can download the OC20 S2EF-2M dataset by running:
```bash
cd ocp
python scripts/download_data.py --task s2ef --split "2M" --num-workers 8 --ref-energy
```
We also need to download the "val_id" data split, which is used for validation during training.
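The validation split can be fetched with the same download script; a minimal sketch, assuming `val_id` is accepted as an `--split` value (please check the OCP download script for the exact split names):

```bash
cd ocp
# Download the "val_id" validation split; the split name and flags below are
# assumed to follow the same download_data.py interface used for the 2M split.
python scripts/download_data.py --task s2ef --split "val_id" --num-workers 8 --ref-energy
```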
After downloading, link the datasets under `datasets/oc20/` with `ln -s`:

```bash
cd datasets
mkdir oc20
cd oc20
ln -s ~/ocp/data/s2ef s2ef
```
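As a quick sanity check, the symlink should now resolve to the downloaded data; a minimal sketch, assuming the default layout produced by the OCP download script (the exact subdirectories depend on which splits were downloaded):

```bash
# List the linked dataset directory; with the 2M training split and the val_id
# split downloaded, subfolders such as 2M/ and all/ are expected to appear
# (the exact layout may differ depending on the download script version).
ls -l datasets/oc20/s2ef/
```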
To train on other splits such as All and All+MD, we can download the corresponding datasets by following the same instructions above. Please refer to here for details.
- We train EquiformerV2 on the OC20 S2EF-2M dataset by running:

  ```bash
  sh scripts/train/oc20/s2ef/equiformer_v2/equiformer_v2_N@12_L@6_M@2_splits@2M_g@multi-nodes.sh
  ```

  The above script uses 2 nodes with 8 GPUs on each node.

  If there is an import error, it is possible that `ocp/ocpmodels/common/utils.py` has not been modified. Please follow here for details.

  We can also run training on 8 GPUs on 1 node:

  ```bash
  sh scripts/train/oc20/s2ef/equiformer_v2/equiformer_v2_N@12_L@6_M@2_splits@2M_g@8.sh
  ```
- We train EquiformerV2 (153M) on OC20 S2EF-All+MD by running:

  ```bash
  sh scripts/train/oc20/s2ef/equiformer_v2/equiformer_v2_N@20_L@6_M@3_splits@all+md_g@multi-nodes.sh
  ```

  The above script uses 16 nodes with 8 GPUs on each node.
- We train EquiformerV2 (31M) on OC20 S2EF-All+MD by running:

  ```bash
  sh scripts/train/oc20/s2ef/equiformer_v2/equiformer_v2_N@8_L@4_M@2_splits@all+md_g@multi-nodes.sh
  ```

  The above script uses 8 nodes with 8 GPUs on each node.
- `nets` includes code of different network architectures for OC20.
- `scripts` includes scripts for training models on OC20.
- `main_oc20.py` is the code for training, evaluating and running relaxation (a hypothetical invocation is sketched after this list).
- `oc20/trainer` contains code for the force trainer as well as some utility functions.
- `oc20/configs` contains config files for S2EF.
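For reference, the provided shell scripts ultimately launch training through `main_oc20.py` with a config from `oc20/configs`; a hypothetical single-node invocation is sketched below, assuming OCP-style `--mode` and `--config-yml` arguments and an illustrative config filename (please copy the exact flags and paths from the scripts under `scripts/train`):

```bash
# Hypothetical invocation: the flags and the config path below are assumptions
# for illustration; use the exact command from the provided training scripts.
python main_oc20.py \
    --mode train \
    --config-yml 'oc20/configs/s2ef/2M/equiformer_v2_N@12_L@6_M@2.yml'
```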
We provide checkpoints of EquiformerV2 trained on the S2EF-2M dataset for 30 epochs, EquiformerV2 (31M) trained on S2EF-All+MD, and EquiformerV2 (153M) trained on S2EF-All+MD.
| Model | Split | Download | val force MAE (meV / Å) | val energy MAE (meV) |
|---|---|---|---|---|
| EquiformerV2 | 2M | checkpoint \| config | 19.4 | 278 |
| EquiformerV2 (31M) | All+MD | checkpoint \| config | 16.3 | 232 |
| EquiformerV2 (153M) | All+MD | checkpoint \| config | 15.0 | 227 |
Please consider citing the works below if this repository is helpful:
- EquiformerV2:

  ```
  @article{equiformer_v2,
      title={EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations},
      author={Yi-Lun Liao and Brandon Wood and Abhishek Das* and Tess Smidt*},
      year={2023},
      journal={arXiv preprint arXiv:2306.12059}
  }
  ```
- eSCN:

  ```
  @inproceedings{escn,
      title={{Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs}},
      author={Passaro, Saro and Zitnick, C Lawrence},
      booktitle={International Conference on Machine Learning (ICML)},
      year={2023}
  }
  ```
- Equiformer:

  ```
  @inproceedings{equiformer,
      title={{Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs}},
      author={Yi-Lun Liao and Tess Smidt},
      booktitle={International Conference on Learning Representations (ICLR)},
      year={2023},
      url={https://openreview.net/forum?id=KwmPfARgOTD}
  }
  ```
- OC20 dataset:

  ```
  @article{oc20,
      author = {Chanussot*, Lowik and Das*, Abhishek and Goyal*, Siddharth and Lavril*, Thibaut and Shuaibi*, Muhammed and Riviere, Morgane and Tran, Kevin and Heras-Domingo, Javier and Ho, Caleb and Hu, Weihua and Palizhati, Aini and Sriram, Anuroop and Wood, Brandon and Yoon, Junwoong and Parikh, Devi and Zitnick, C. Lawrence and Ulissi, Zachary},
      title = {{Open Catalyst 2020 (OC20) Dataset and Community Challenges}},
      journal = {ACS Catalysis},
      year = {2021},
      doi = {10.1021/acscatal.0c04525}
  }
  ```
Please direct questions to Yi-Lun Liao (ylliao@mit.edu).
Our implementation is based on PyTorch, PyG, e3nn, timm, ocp, and Equiformer.