
When using multihead_attention, why are the queries normalized while the keys and values are not? #33

Open
YTZ01 opened this issue Feb 25, 2023 · 1 comment

Comments

@YTZ01

YTZ01 commented Feb 25, 2023

for i in range(len(self.attention_layers)):
    seqs = torch.transpose(seqs, 0, 1)
    Q = self.attention_layernorms[i](seqs)   # layernorm applied to the query input only
    mha_outputs, _ = self.attention_layers[i](Q, seqs, seqs,
                                              attn_mask=attention_mask)
                                              # key_padding_mask=timeline_mask
                                              # need_weights=False) this arg do not work?
    seqs = Q + mha_outputs                   # residual connection
    seqs = torch.transpose(seqs, 0, 1)
In the SASRec paper (Section III. Methodology, part B, Self-Attention Block), the formula uses the same embedding as queries, keys, and values, and converts it through linear projections. Why are the queries normalized in the code while the keys and values are not?
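
For reference, the formula I mean (as I read the paper) feeds the same embedding \hat{E} into all three roles before the linear projections:

S = \mathrm{SA}(\hat{E}) = \mathrm{Attention}(\hat{E} W^Q, \hat{E} W^K, \hat{E} W^V),
\quad
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d}}\right) V

where W^Q, W^K, W^V are the projection matrices and d is the latent dimensionality.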

@pmixer
Owner

pmixer commented Feb 25, 2023

The code is not well formatted; I guess you mean this line:

https://github.com/pmixer/SASRec.pytorch/blob/master/model.py#L81

Personally, I believe you can try Q, K, and V with or without layernorm in experiments; it is not required.

Well, since the query comes from the last layer's projected result, it is better to make it numerically stable for easier training.

K and V could also be layernormed (see the sketch below), but since they are only used in the dot product that produces the attention weights for the query, I guess layernorm may not greatly affect those weights.
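
For example, here is a minimal standalone sketch of that experiment (hypothetical names and sizes, not the repo's actual config), comparing "layernorm on Q only" with "layernorm on Q, K and V" in a single attention block:

import torch

# Illustrative sizes only; the repo's hyperparameters may differ.
hidden_units, maxlen, batch_size, num_heads = 50, 20, 4, 1

layernorm = torch.nn.LayerNorm(hidden_units, eps=1e-8)
attention = torch.nn.MultiheadAttention(hidden_units, num_heads)

seqs = torch.randn(maxlen, batch_size, hidden_units)  # (time, batch, dim), as in model.py
attention_mask = ~torch.tril(torch.ones((maxlen, maxlen), dtype=torch.bool))  # causal mask

# Variant 1: only the query is normalized (what the current implementation does).
Q = layernorm(seqs)
outputs_q_only, _ = attention(Q, seqs, seqs, attn_mask=attention_mask)
outputs_q_only = Q + outputs_q_only  # residual connection on the normalized query

# Variant 2: keys and values are normalized as well (the experiment discussed above).
outputs_qkv, _ = attention(Q, layernorm(seqs), layernorm(seqs), attn_mask=attention_mask)
outputs_qkv = Q + outputs_qkv

Training both variants with everything else fixed would show whether the extra layernorm actually matters for the final metrics.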

In summary, please try some modifications in your own experiments and draw conclusions from the experimental results, which is the most reliable and fruitful approach. One is not forced to follow all the settings in the current implementation; some of them are empirical ("it works well, so I keep using it this way").
