Hi,
I was just wondering why your model has to go through the SpecialSpmmFunctionFinal layer, and what the intent of this layer is.
The layer's forward is:
```python
class SpecialSpmmFunctionFinal(torch.autograd.Function):
    """Special function for sparse-region-only backpropagation."""

    @staticmethod
    def forward(ctx, edge, edge_w, N, E, out_features):
        # assert indices.requires_grad == False
        a = torch.sparse_coo_tensor(
            edge, edge_w, torch.Size([N, N, out_features]))
        b = torch.sparse.sum(a, dim=1)
        ctx.N = b.shape[0]
        ctx.outfeat = b.shape[1]
        ctx.E = E
        ctx.indices = a._indices()[0, :]
        return b.to_dense()
```
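For reference, a matching backward would have to route the gradient of the dense output back to the per-edge values. A minimal sketch, assuming `ctx.indices` holds each edge's row index (my reconstruction from the forward above, not quoted from the repo):

```python
@staticmethod
def backward(ctx, grad_output):
    # grad_output: (N, out_features). Each edge value contributed to
    # exactly one output row, so its gradient is that row's gradient.
    grad_values = None
    if ctx.needs_input_grad[1]:
        edge_sources = ctx.indices  # row index of every edge
        grad_values = grad_output[edge_sources]
    return None, grad_values, None, None, None
```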
I was debugging with your default parameters on the WN18k dataset. In the first epoch (first batch of the dataset), I see these shapes:
Input:
N : 40943 : the number of entities in the dataset
E : 294211 : the number of edges after concatenating the (head, tail) pairs with the (2hop_head, 2hop_tail) pairs
edge : (2, 294211) : holds the <head_id, tail_id> and <2hop_head_id, 2hop_tail_id> index pairs
edge_w : (294211, 1) : holds the per-edge weights trained in the GAT layer

Output:
e_rowsum : (40943, 1) : represents .... ??? .....
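Just to confirm the shapes, a tiny self-contained check with random stand-in data (N, E, and out_features as above) reproduces the (40943, 1) output:

```python
import torch

N, E, out_features = 40943, 294211, 1
edge = torch.randint(0, N, (2, E))       # stand-in for the real edge list
edge_w = torch.rand(E, out_features)     # stand-in for the edge weights
a = torch.sparse_coo_tensor(edge, edge_w, torch.Size([N, N, out_features]))
b = torch.sparse.sum(a, dim=1)           # collapse the neighbor dimension
print(b.to_dense().shape)                # torch.Size([40943, 1])
```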
As far as I can tell, it just sums each training entity's incoming edge weights into a single vector, but I don't understand why your model has to go through this layer. Can you explain the intent of the SpecialSpmmFunctionFinal layer?
Thanks @deepakn97
I think SpecialSpmmFunctionFinal's forward is intended to compute the row sum of a sparse matrix, and its backward to return the gradient for the sparse matrix's values. But I find that torch.sparse can already handle the backward of the row-sum operation on its own, for example:
```python
import torch

i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])  # row, col indices
v = torch.FloatTensor([3, 4, 5])              # values
v.requires_grad = True
m = torch.sparse_coo_tensor(i, v, torch.Size([2, 3]))
m.retain_grad()
s = torch.sparse.sum(m, dim=1).to_dense()  # row sums: tensor([3., 9.])
s.sum().backward()
print(v.grad)  # tensor([1., 1., 1.]) -- the gradient reaches the values
```
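If I'm reading the surrounding GAT layer right, that row sum is exactly what e_rowsum is for: it is the denominator of the softmax over each node's neighbors. A rough sketch of how such a row sum is typically used in a sparse GAT layer (names and sizes here are illustrative, not from the repo):

```python
import torch

N, F = 4, 2
edge = torch.tensor([[0, 1, 1, 3],   # target node of each edge
                     [2, 0, 2, 1]])  # source node of each edge
edge_e = torch.rand(edge.size(1))    # exp(attention score) per edge
h = torch.rand(N, F)                 # node features

a = torch.sparse_coo_tensor(edge, edge_e, (N, N))
numerator = torch.sparse.mm(a, h)                              # (N, F)
e_rowsum = torch.sparse.sum(a, dim=1).to_dense().unsqueeze(1)  # (N, 1)
h_prime = numerator / (e_rowsum + 1e-16)  # attention-normalized output
```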