Hello.
To my understanding, the self-attention step uses a learnable weight vector and a neural network. However, the code in this repo seems to use a very simple, constant self-attention step.
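For reference, the GAT paper defines the (unnormalized) attention coefficients as

$$e_{ij} = \mathrm{LeakyReLU}\!\left(\vec{\mathbf{a}}^{\top}\left[\mathbf{W}\vec{h}_i \,\Vert\, \mathbf{W}\vec{h}_j\right]\right),$$

where both the weight matrix $\mathbf{W}$ and the attention vector $\vec{\mathbf{a}}$ are supposed to be learned.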
See GAT/utils/layers.py, line 13 at commit 77d0d41.
Is my understanding incorrect? Is there a place in the code where the neural network used for the self-attention step is trained?
Thank you.
Thank you for your issue! The attention mechanism is learnable: the neural network consists of the two conv1d layers applied right before the attention computation, and these layers contain the weight vector. Backpropagation will hence train these layers along with every other part of the GAT.
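To make this concrete, here is a minimal NumPy sketch (not the repo's actual code; the shapes and names are made up, and the conv layers' bias terms are ignored). A conv1d with kernel size 1 is just a per-node linear map, so two scalar-output conv1d layers applied to the transformed features together implement the attention vector $\vec{\mathbf{a}}$ from the paper:

```python
import numpy as np

# Hypothetical sizes: N nodes, F input features, F' hidden features.
N, F, F_out = 4, 8, 6
rng = np.random.default_rng(0)

h = rng.normal(size=(N, F))        # node features
W = rng.normal(size=(F, F_out))    # shared linear transform (conv1d, kernel size 1)
a1 = rng.normal(size=F_out)        # first scalar-output conv1d: first half of a
a2 = rng.normal(size=F_out)        # second scalar-output conv1d: second half of a

Wh = h @ W                         # transformed node features, (N, F')
f1 = Wh @ a1                       # per-node "source" scores, (N,)
f2 = Wh @ a2                       # per-node "target" scores, (N,)
logits = f1[:, None] + f2[None, :] # logits[i, j] = a1·Wh_i + a2·Wh_j

# The same quantity written as in the paper: e_ij = a^T [Wh_i || Wh_j]
a = np.concatenate([a1, a2])
e = np.array([[a @ np.concatenate([Wh[i], Wh[j]]) for j in range(N)]
              for i in range(N)])
assert np.allclose(logits, e)      # the two formulations agree
```

Because `W`, `a1`, and `a2` correspond to ordinary trainable weights of the conv1d layers, backpropagation updates them like any other parameter; no separate training step for the attention network is needed.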