How do I obtain the GloVe vectors in the right format? I downloaded the pretrained vectors from https://nlp.stanford.edu/projects/glove/, but it's not clear how to convert them to the format the code expects.
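For reference, here's the kind of conversion I imagined — a minimal sketch assuming the script loads a torch-saved dict mapping each word to a FloatTensor. The expected format and the file names are my guesses, not something I found in the repo:

```python
# Hypothetical conversion sketch: GloVe .txt -> torch-saved {word: FloatTensor}
# dict. The output format is an assumption about what train.py expects.
import io
import torch

def convert_glove(txt_path, out_path):
    word2vec = {}
    with io.open(txt_path, 'r', encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            # Naive parsing: assumes the token itself contains no spaces,
            # which holds for the smaller GloVe files.
            word = parts[0]
            word2vec[word] = torch.FloatTensor([float(x) for x in parts[1:]])
    torch.save(word2vec, out_path)

convert_glove('glove.6B.300d.txt', 'glove.6B.300d.pt')
```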
I tried running the code without word vectors (is this supposed to work?), but I get an exception:
```
/home/andreas/src/Structured-Self-Attentive-Sentence-Embedding/train.py in train(epoch_number)
     79         total_pure_loss += loss.data
     80
---> 81         if attention:  # add penalization term
     82             attentionT = torch.transpose(attention, 1, 2).contiguous()
     83             extra_loss = Frobenius(torch.bmm(attention, attentionT) - I[:attention.size(0)])

/home/andreas/.local/lib/python2.7/site-packages/torch/autograd/variable.pyc in __bool__(self)
    121             return False
    122         raise RuntimeError("bool value of Variable objects containing non-empty " +
--> 123                            torch.typename(self.data) + " is ambiguous")
    124
    125     __nonzero__ = __bool__

RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous
```
(If I replace this condition with False, the code runs, but that presumably skips the penalization term entirely; see the sketch below for what I assume the intended fix is.)
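The fix I'd guess was intended is an explicit None check at train.py line 81, since in this PyTorch version the truth value of a Variable holding a non-empty tensor is ambiguous:

```python
# Sketch of the assumed fix for train.py line 81: test for None explicitly
# instead of relying on the Variable's truth value, so the penalization
# term is kept when attention is present.
if attention is not None:  # add penalization term
    attentionT = torch.transpose(attention, 1, 2).contiguous()
    extra_loss = Frobenius(torch.bmm(attention, attentionT) - I[:attention.size(0)])
```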
Lastly, how can I produce the kind of attention visualizations shown in the paper?
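To make the question concrete, something like this matplotlib sketch is what I'd hope to reproduce — all names here are hypothetical, and `weights` would be the hops × seq_len annotation matrix (as a NumPy array) from the model's forward pass:

```python
# Hypothetical visualization sketch: render one sentence's attention weights
# as a heatmap over its tokens, roughly in the style of the paper's figures.
import matplotlib.pyplot as plt
import numpy as np

def plot_attention(weights, tokens):
    # weights: (hops, seq_len) NumPy array; tokens: list of seq_len strings
    fig, ax = plt.subplots(figsize=(len(tokens) * 0.5, 2))
    ax.imshow(weights, cmap='Reds', aspect='auto')
    ax.set_xticks(np.arange(len(tokens)))
    ax.set_xticklabels(tokens, rotation=45, ha='right')
    ax.set_yticks(np.arange(weights.shape[0]))
    ax.set_ylabel('attention hop')
    plt.tight_layout()
    plt.show()
```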