Hi, I found that the test performance I get for the heuristic method differs significantly from the numbers reported on the Open Graph Benchmark leaderboard. For example:
Running the following command gives me Hits@50 Valid 63.49, Test 53.00.
python seal_link_pred.py --use_heuristic AA --dataset ogbl-collab
Running the following command gives me Hits@50 Valid 60.36, Test 50.06.
These results are very different from the ones the Open Graph Benchmark reports.
I went through the code on GitHub, but couldn't figure out why.
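For context, `AA` here is the Adamic-Adar heuristic: for a candidate link (u, v), it sums 1/log(deg(w)) over the common neighbors w of u and v. Below is a minimal pure-Python sketch of that score on a toy graph; it is only an illustration of the heuristic's definition, not the actual `seal_link_pred.py` implementation, which may differ in how it builds the graph (e.g. which edge split it uses) and could explain the discrepancy.

```python
import math
from collections import defaultdict

def adamic_adar(edges, u, v):
    """Adamic-Adar score for candidate link (u, v):
    sum of 1 / log(deg(w)) over common neighbors w of u and v.
    A higher score suggests a link is more likely."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    common = adj[u] & adj[v]
    # Note: degree-1 common neighbors would give log(1) = 0; real
    # implementations must guard against that division by zero.
    return sum(1.0 / math.log(len(adj[w])) for w in common)

# Toy graph: nodes 0 and 3 share two common neighbors (1 and 2),
# each of degree 3, so the score is 2 / log(3).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
print(adamic_adar(edges, 0, 3))  # → 2 / log(3) ≈ 1.8205
```

One thing worth checking for the mismatch: heuristic scores like this depend on exactly which edges go into the adjacency structure (train only vs. train + valid for ogbl-collab), so a different edge set at evaluation time would shift Hits@50.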