# Code release for team DeeperBiggerBetter for MAG-240M in the OGB KDD Cup

## Installation requirements

* ogb>=1.3.0
* torch>=1.7.0
* pytorch-lightning>=1.2.0
* torch-geometric==master (`pip install git+https://github.com/rusty1s/pytorch_geometric.git`)
* jupyterlab (for post-processing)

## Dataset

The MAG240M-LSC dataset will be automatically downloaded to the path specified in root.py. Edit that file if you want to download the dataset to a different drive or folder. For each experiment, the test submission is automatically saved in ./results/ after training finishes.
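
For orientation, loading the dataset through the ogb package follows the pattern below. This is a generic sketch of the ogb API, not the repo's exact code; the `ROOT` variable stands in for the path that root.py provides.

```python
# Generic sketch of MAG240M loading via the ogb package; ROOT stands in for
# the path configured in root.py (illustrative, not this repo's exact code).
from ogb.lsc import MAG240MDataset

ROOT = '/path/to/mag240m'  # set via root.py in this repo
dataset = MAG240MDataset(root=ROOT)  # downloads MAG240M-LSC on first use
print(dataset.num_papers, dataset.num_classes)  # 121751666, 153
```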

Due to the file size of the MAG240M-LSC node feature matrix, training requires at least 256GB RAM.
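
As a rough back-of-the-envelope check (using the published MAG240M statistics, not numbers measured from this code), the paper feature matrix alone accounts for most of that footprint:

```python
# Back-of-the-envelope size of the MAG240M paper feature matrix, from the
# published dataset statistics (not measured from this repository).
num_papers = 121_751_666  # papers in MAG240M
feat_dim = 768            # RoBERTa feature dimension
bytes_per_val = 2         # features are stored as float16
print(num_papers * feat_dim * bytes_per_val / 1024**3)  # ~174 GiB (~187 GB)
```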

## Training

To train the 2-layer model on 4 GPUs, run:

```bash
python rgnn.py --exp_name rgat_2layers --device=4 --accelerator='ddp' --model=rgat --hidden_channels=2048 --precision=16 --scheduler=cosine --optimizer=radam --extra_mlp --train_set=train --author_labels
```

To evaluate the 2-layer model from its best validation checkpoint with a neighborhood of `5*(sizes)` and save the prediction logits, run:

```bash
python rgnn.py --exp_name rgat_2layers --device=4 --accelerator='ddp' --evaluate --eval_size=5 --eval_size_dynamic --save_eval_probs
```

To train the 3-layer model on 4 GPUs, run:

```bash
python rgnn.py --exp_name rgat_3layers --device=4 --accelerator='ddp' --model=rgat --hidden_channels=1800 --precision=16 --scheduler=cosine --optimizer=radam --extra_mlp --train_set=train --author_labels --num_layers=3 --sizes='25-20-15' --batch_size=512
```

To evaluate the 3-layer model from its best validation checkpoint with a neighborhood of `5*(sizes)` and save the prediction logits, run:

```bash
python rgnn.py --exp_name rgat_3layers --device=4 --accelerator='ddp' --evaluate --eval_size=5 --eval_size_dynamic --save_eval_probs
```
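
To make `--eval_size_dynamic` concrete: with `--eval_size=5`, each per-layer fanout from `--sizes` is multiplied by 5 at evaluation time. A hypothetical illustration of the arithmetic (not the repo's actual argument handling):

```python
# Scale the training fanouts by --eval_size for dynamic evaluation;
# hypothetical illustration, not this repo's actual parsing code.
eval_size = 5
sizes = [int(s) for s in '25-20-15'.split('-')]  # 3-layer training fanouts
eval_sizes = [eval_size * s for s in sizes]
print(eval_sizes)  # [125, 100, 75]
```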

To train the models on the training + validation sets (only for the model used on the hidden test set), replace `--train_set=train` with `--train_set=train_val`.

## Performance

| Model | Valid Accuracy (%) | Test Accuracy (%)\* | #Parameters | Hardware |
|---|---|---|---|---|
| R-GAT [2] | 70.48 | 69.49 | 12.3M | GeForce RTX 2080 Ti (11GB GPU) |
| Ours (2-layer) | 71.08 | - | 81.2M | NVIDIA V100 (32GB GPU) |
| Ours (3-layer) | 71.87 | - | 99.5M | NVIDIA RTX 6000 (48GB GPU) |
| Ours (Ensemble) | 72.72 | 73.53 | 180.7M | NVIDIA V100 (32GB GPU), NVIDIA RTX 6000 (48GB GPU) |

The ensemble model is generated by aggregating 10 inference runs of the 2-layer model and 10 inference runs of the 3-layer model (the latter using dynamic evaluation with sizes 1-10). By default the pytorch_sparse neighborhood sampler is deterministic, so obtaining different inference results across repeated runs requires a change to its sampling code: a random seed must be added to the sample_adj_cpu function in sample_cpu.cpp. The 3-layer runs avoid this by simply evaluating with different neighborhood sizes; the seed change was used to generate the inference results of the 2-layer model.
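
A minimal sketch of the logit aggregation step, assuming each inference run saved its prediction logits as an .npy file under ./results/ (the file naming and layout below are hypothetical, not this repo's exact output format):

```python
# Average prediction logits saved by multiple inference runs and take the
# argmax; the glob pattern and file layout are assumptions.
import glob
import numpy as np

logit_files = sorted(glob.glob('./results/*_probs*.npy'))  # hypothetical names
logits = np.mean([np.load(f) for f in logit_files], axis=0)
y_pred = logits.argmax(axis=-1)  # ensembled class predictions
np.save('./results/ensemble_y_pred.npy', y_pred)
```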

\* Test Accuracy is evaluated on the hidden test set.

## References

This code is heavily based on [1].

[1] Hu et al.: Open Graph Benchmark: Datasets for Machine Learning on Graphs

[2] Schlichtkrull et al.: Modeling Relational Data with Graph Convolutional Networks
