Hi,
Is there a specific reason why only the first vector out of the nn.Embedding is ever used?
`tgt8` ... `tgt64` are always zeros at this stage, so you end up picking the 0-th vector for each spatial position; in other words, `qe8` ... `qe64` will always be filled with identical repeating values. The values in the Embedding will obviously change over time with training, but they will always be repeated throughout the `qe`-s.

translating-images-into-maps/src/model/network.py, line 1150 (commit 92b9627)
translating-images-into-maps/src/model/network.py, line 1161 (commit 92b9627)
After permuting `qe8` to [batch, spatial, spatial, len_embedding]:
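For what it's worth, here is a minimal sketch of the behaviour being described (the batch size, grid resolution, and embedding width below are made-up values, not the ones used in network.py): indexing an `nn.Embedding` with an all-zeros `tgt` tensor returns the 0-th embedding row at every spatial position.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only; the real model's embedding
# width and grid resolution will differ.
batch, grid, len_embedding = 2, 8, 4

query_embed = nn.Embedding(grid * grid, len_embedding)

# tgt8 is all zeros at this point, so every lookup hits embedding index 0.
tgt8 = torch.zeros(grid * grid, batch, dtype=torch.long)
qe8 = query_embed(tgt8)                 # [grid*grid, batch, len_embedding]

# Rearrange to [batch, grid, grid, len_embedding] for inspection.
qe8 = qe8.permute(1, 0, 2).reshape(batch, grid, grid, len_embedding)

# Every spatial position holds the same vector: row 0 of the embedding, repeated.
print(torch.allclose(qe8, qe8[:, :1, :1, :].expand_as(qe8)))  # True
```

The check stays `True` no matter how the embedding weights evolve during training, for as long as every index in `tgt8` is 0; the rows only start to differ once the indices themselves differ.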