When I load the embedding matrix with emb_matrix = np.load(emb_file), I see its shape is (39, 300). When I go to train, I get the error RuntimeError: Sizes of tensors must match except in dimension 1. Got 9 and 10 (The offending index is 0). Before the error, I see the batched tensor's shape is torch.Size([50, 10, 300]). I suspect this is somehow related to 10 not dividing 39, but being new to this code I don't quite follow it. What would I need to do to get it to train?
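A quick sanity check that can catch this kind of mismatch before training: the number of rows in the embedding matrix must equal the vocabulary size, since each token id indexes one row. A minimal sketch (the vocab list and matrix here are stand-ins for whatever the repo's preprocessing produces; emb_file and the real vocab come from its own scripts):

```python
import numpy as np

# Stand-in vocab list and embedding matrix; in the real repo these would
# come from the preprocessing step and np.load(emb_file).
vocab = ["<PAD>", "<UNK>", "the", "cat"]
emb_matrix = np.random.rand(len(vocab), 300)

# Each token id must index a valid row, so row count must equal vocab size.
# A mismatch (e.g. 39 rows against a differently sized vocab) surfaces later
# as a confusing tensor-size error during batching.
assert emb_matrix.shape[0] == len(vocab), (
    f"emb_matrix has {emb_matrix.shape[0]} rows but vocab has {len(vocab)} entries"
)
print("embedding rows match vocab size:", emb_matrix.shape)
```

If this assertion fails on your real files, the embedding matrix and the vocab were built from different data, and that is worth fixing before chasing the downstream RuntimeError.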
I think I got confused as follows. In data/loader.py, the subject start/end and object start/end values are used to determine the size of the tokens structure. I thought that within the Stanford ecosystem these values were 1-based, but this file appears to use a 0-based scheme. If I subtract one from each of the four values I feed in as input data, that appears to correct it, but I am still confused because this seems to disagree with what was done in Stanford CoreNLP (and then possibly Stanza?).
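The subtract-one fix described above can be sketched as a small preprocessing step. This is only an illustration, assuming the input examples are dicts with TACRED-style span fields (subj_start, subj_end, obj_start, obj_end) holding 1-based inclusive indices, as CoreNLP-style output would give:

```python
def to_zero_based(example):
    """Convert 1-based (CoreNLP-style) span indices to the 0-based
    indexing that data/loader.py appears to expect."""
    out = dict(example)  # shallow copy; leave the original untouched
    for key in ("subj_start", "subj_end", "obj_start", "obj_end"):
        out[key] = example[key] - 1
    return out

ex = {"subj_start": 3, "subj_end": 4, "obj_start": 8, "obj_end": 8}
print(to_zero_based(ex))
# {'subj_start': 2, 'subj_end': 3, 'obj_start': 7, 'obj_end': 7}
```

Running a pass like this over the input JSON before training keeps the loader's 0-based assumption satisfied without editing the loader itself.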