
Batching for GCNConv Layer #9637

Open
kleingeo opened this issue Sep 3, 2024 · 3 comments

Comments

@kleingeo

kleingeo commented Sep 3, 2024

📚 Describe the documentation issue

It is unclear how to handle batching when using a custom feature map and adjacency matrix. The docs cover batching from the dataloader perspective, but not a custom approach. I am dealing with a case where I am trying to use GCNConv inside an existing model on latent features (i.e., not coming from the dataloader), and it is unclear what to do with the batch dimension of the adjacency matrix and/or feature map in this case. This needs to be clarified.

Suggest a potential alternative/fix

Show an example of using GCNConv or other graph convolution layers outside of the pre-defined data loading.
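
For what it's worth, here is a rough sketch of the kind of example I have in mind (the module and all sizes below are made up): latent features computed inside the model are fed straight into GCNConv together with a hand-built edge_index for a single graph.

```python
import torch
from torch_geometric.nn import GCNConv

class LatentGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encoder = torch.nn.Linear(in_dim, hidden_dim)  # produces latent node features
        self.conv = GCNConv(hidden_dim, hidden_dim)

    def forward(self, x, edge_index):
        # x: [num_nodes, in_dim], edge_index: [2, num_edges]; no DataLoader involved
        h = self.encoder(x)
        return self.conv(h, edge_index)

model = LatentGCN(in_dim=16, hidden_dim=32)
x = torch.randn(10, 16)                     # 10 nodes with 16 raw features each
edge_index = torch.randint(0, 10, (2, 30))  # hand-built adjacency in COO form
out = model(x, edge_index)                  # out: [10, 32]
```

The open question is how to extend this to a batch of such graphs.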

@Percibus

I'm not sure about this, but I think in the case of GCNConv you can batch your data, and it accepts any input shape as long as the node_dim attribute is set correctly (-2 by default). So you can pass, for example, a tensor of shape [BxL, N, F], and it will operate over the N nodes for each of the BxL elements.
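
For example (a rough, untested sketch of that idea; all shapes below are hypothetical), a single edge_index shared by every batch element, with the batch dimension in front of the node dimension:

```python
import torch
from torch_geometric.nn import GCNConv

B, N, F_in, F_out = 4, 10, 16, 32
x = torch.randn(B, N, F_in)                # batched features over the same N nodes
edge_index = torch.randint(0, N, (2, 40))  # one adjacency shared by all B elements

conv = GCNConv(F_in, F_out)                # node_dim defaults to -2
out = conv(x, edge_index)                  # expected shape: [B, N, F_out]
```

Note that this only covers the case where all batch elements share the same graph structure.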

@kleingeo
Author

The issue doesn't seem to be the tensor itself, but the adjacency matrix. MessagePassing.propagate specifies that, for a torch.Tensor, edge_index must have shape [2, num_messages]. I think for this to work, edge_index would need to accept either a list, where each edge_index in the list corresponds to a batch sample, or a shape of [3, num_messages].
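
In the meantime, one workaround that fits the [2, num_messages] requirement (a sketch, not an official API) is to merge the per-sample graphs into a single disjoint graph by offsetting the node indices of each edge_index in the list:

```python
import torch
from torch_geometric.nn import GCNConv

def batch_graphs(x_list, edge_index_list):
    # Concatenate node features and shift each sample's edge_index by the
    # number of nodes seen so far, yielding one block-diagonal graph.
    xs, eis, offset = [], [], 0
    for x, ei in zip(x_list, edge_index_list):
        xs.append(x)
        eis.append(ei + offset)
        offset += x.size(0)
    return torch.cat(xs, dim=0), torch.cat(eis, dim=1)

# hypothetical batch of 3 latent feature maps, each with its own adjacency
x_list = [torch.randn(10, 16) for _ in range(3)]
edge_index_list = [torch.randint(0, 10, (2, 30)) for _ in range(3)]

x, edge_index = batch_graphs(x_list, edge_index_list)  # x: [30, 16], edge_index: [2, 90]
out = GCNConv(16, 32)(x, edge_index)                   # out: [30, 32]
```

This is essentially what torch_geometric.data.Batch.from_data_list does under the hood, just without wrapping the tensors in Data objects.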

@Rain920

Rain920 commented Nov 14, 2024

I have the same question. When using GCNConv (or other models based on MessagePassing), how do you handle the batch dimension? Specifically, where does the batch dimension appear in the shape of x passed to the propagate method?
