Graph heads ci tests #208
Conversation
@pzhanggit
Force-pushed from 4947f79 to 310089b
I changed line 225 of the file to `model = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)`. I also added the following lines to the train() function.
However, no unused parameters are tracked.
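The check from the linked PyTorch forum thread can be sketched without DDP: after `loss.backward()`, any parameter whose `.grad` attribute is still `None` never participated in the backward pass. A minimal stdlib-only stand-in (the name-to-gradient dict here is a hypothetical snapshot, not real model state):

```python
def find_unused_parameters(named_grads):
    """Return names of parameters whose gradient is still None
    after a backward pass, i.e. parameters the loss never touched."""
    return [name for name, grad in named_grads.items() if grad is None]

# Hypothetical gradients after loss.backward(): the decoder weight
# was never used, so its grad is still None.
grads = {"encoder.weight": [0.1, -0.2], "decoder.weight": None}
print(find_unused_parameters(grads))  # → ['decoder.weight']
```

With real PyTorch modules the same idea is `[n for n, p in model.named_parameters() if p.grad is None]` after the first backward pass.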
https://discuss.pytorch.org/t/how-to-find-the-unused-parameters-in-network/63948/5
@pzhanggit thanks for the link. If you do not use DDP, then you need to call ... Am I missing something?
Force-pushed from e6ce4aa to e580a88
This PR fixes some bugs introduced by successive code developments and re-establishes the capability to create a graph autoencoder in HydraGNN that relies solely on message-passing layers for node-to-node mapping predictions. The performance of the graph autoencoder using only message-passing layers is disappointing on the unit tests; however, this is in line with previous runs observed by Pei a long time ago on the FePt dataset. Would you mind helping me make sure that my changes do not mess up the SchNet layer? Thanks.
Thank you for your patience. These updates have not introduced any errors into the implementation of EGCL or SchNet. They add batch normalization to the convolutional "head". The performance of the batch normalization can be assessed by overriding the ... This isn't necessary for this PR; however, going forward, I would be happy to assist in assessing the batch normalization performance.
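As a reference point for that assessment, the normalization BatchNorm applies per channel is a mean/variance rescaling over the batch. A stdlib-only sketch of that core computation (the learnable scale/shift and running statistics of a real `torch.nn.BatchNorm1d` are omitted):

```python
import math

def batch_norm_1d(xs, eps=1e-5):
    # Normalize one feature channel over the batch dimension:
    # subtract the batch mean, divide by the batch std deviation.
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

normed = batch_norm_1d([1.0, 2.0, 3.0, 4.0])
print(sum(normed))  # zero mean after normalization
```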
The tests failed because the changes require torch-geometric>=2.4.0, which no longer supports Python 3.7 (see pyg-team/pytorch_geometric#7939). We will come back to this PR later.
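A hedged sketch of that version constraint as a runtime guard (the helper names and the way the installed version string is obtained are illustrative, not actual HydraGNN code):

```python
MIN_PYG = "2.4.0"  # minimum torch-geometric release required by the changes

def parse_version(v):
    # Compare release versions numerically, so e.g. "2.10.0" > "2.4.0".
    return tuple(int(part) for part in v.split("."))

def pyg_is_new_enough(installed_version):
    return parse_version(installed_version) >= parse_version(MIN_PYG)

print(pyg_is_new_enough("2.5.2"))  # → True
print(pyg_is_new_enough("2.3.1"))  # → False
```

In practice the installed version would come from `torch_geometric.__version__` or `importlib.metadata.version("torch_geometric")`.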
@pzhanggit
Yes, I will rebase now |
Force-pushed from a215810 to d2a551d
PyG released a new version (https://github.com/pyg-team/pytorch_geometric/releases/tag/2.5.2), which includes the fix for the problem. Can we check whether pyg 2.5.2 works?
Threshold increased for SchNet in unit test for convolutional heads
Force-pushed from d2a551d to afe91b1
Force-pushed from 6177412 to d8cd181
* upgrade pyg 2.5.2
* JSON file for convolutional heads added and test_graph updated
* thresholds increased for EGNN and SchNet
* Update test_graphs.py: threshold increased for SchNet in unit test for convolutional heads
* update DimeNet weights initialization by Justin
* hyperparameter adjustment for conv_head tests
* format
* relax error tolerance in conv_head for GIN

Co-authored-by: Choi <choij@ornl.gov>
Co-authored-by: Zhang, Pei <zhangp1@ornl.gov>
The capability to use a full stack of convolutional layers when only nodal predictions are needed was never tested in the code.
This PR:
* adds `type="conv"` as a choice for nodal heads of HydraGNN
* ensures `get_conv` is called with the proper number of arguments: all models except SCFStack take two input arguments, while SCFStack also takes `last_layer`, which is a Boolean variable.
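The argument dispatch described above can be sketched as follows. This is a minimal stand-in, not the real HydraGNN implementation: the stack classes, the returned tuples, and `build_conv` are all placeholders illustrating the two-argument vs. three-argument calling conventions.

```python
class GINStack:
    # Like most model stacks: get_conv takes two input arguments.
    def get_conv(self, input_dim, output_dim):
        return ("gin_conv", input_dim, output_dim)

class SCFStack:
    # SCFStack additionally takes the Boolean last_layer flag.
    def get_conv(self, input_dim, output_dim, last_layer):
        return ("scf_conv", input_dim, output_dim, last_layer)

def build_conv(stack, input_dim, output_dim, last_layer=False):
    # Call get_conv with the arity the stack expects.
    if isinstance(stack, SCFStack):
        return stack.get_conv(input_dim, output_dim, last_layer)
    return stack.get_conv(input_dim, output_dim)

print(build_conv(GINStack(), 16, 16))        # → ('gin_conv', 16, 16)
print(build_conv(SCFStack(), 16, 16, True))  # → ('scf_conv', 16, 16, True)
```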