Hi, thanks for the inspiring work on GCC.

I wonder if there is a way to run GCC pretraining on CPU only. As far as I can tell, there is no easy way (such as specifying an option in the arguments) to do this.
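In case it is useful, here is a minimal sketch of what such an option could look like in a PyTorch training script. The `--cpu` flag and the placeholder model are my own assumptions, not anything that exists in GCC:

```python
import argparse
import torch
import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument("--cpu", action="store_true", help="force CPU-only training")
args = parser.parse_args()

# Fall back to CPU when requested, or when no GPU is visible at all.
device = torch.device("cpu" if args.cpu or not torch.cuda.is_available() else "cuda")

model = nn.Linear(64, 32).to(device)  # placeholder for the GCC encoder
x = torch.randn(8, 64).to(device)     # inputs must be moved the same way
out = model(x)                        # the rest of the loop is device-agnostic
```

I assume the main change in the repo would be replacing the hard-coded `.cuda()` calls with `.to(device)` along these lines.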
P.S. Installing RDKit is really painful on servers inside mainland China. After checking the code, I found that there is no need to install RDKit: all one has to do is copy the GAT and GCN layer code from DGL into models/gat.py and models/gcn.py, respectively. Please correct me if I'm wrong.
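For reference, here is a rough sketch of the workaround I mean, importing DGL's built-in `GraphConv` and `GATConv` directly (neither of which pulls in RDKit); the toy graph and layer sizes are just for illustration:

```python
import dgl
import torch
from dgl.nn.pytorch import GATConv, GraphConv

g = dgl.rand_graph(10, 30)  # toy graph: 10 nodes, 30 random edges
feat = torch.randn(10, 16)  # 16-dimensional node features

# Stand-ins for models/gcn.py and models/gat.py; allow_zero_in_degree avoids
# errors on randomly generated graphs that may contain isolated nodes.
gcn = GraphConv(16, 32, allow_zero_in_degree=True)
gat = GATConv(16, 32, num_heads=4, allow_zero_in_degree=True)

h_gcn = gcn(g, feat)  # shape: (10, 32)
h_gat = gat(g, feat)  # shape: (10, 4, 32), one slice per attention head
```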
Any help is highly appreciated : )
AndyJZhao changed the title from "Running experiments completely on GPU" to "Running experiments completely on CPU" on Nov 28, 2020.
Thanks for your comment. You're right: CPU-only training and testing are not implemented in this repo, though they are possible to add. However, we recommend using GPUs, because the matrix factorization in the data loading already consumes CPU cycles and can bottleneck training if the neural networks also run on the CPU.
Thanks for your comment on RDKit as well. I've added your suggestion to the README.