
Training Graph Transformers via Curriculum-Enhanced Attention Distillation

@inproceedings{lgtggt_iclr24,
  title     = {Training Graph Transformers via Curriculum-Enhanced Attention Distillation},
  author    = {Yisong Huang and Jin Li and Xinlong Chen and Yang-Geng Fu},
  booktitle = {The Twelfth International Conference on Learning Representations (ICLR)},
  year      = {2024}
}

Links