
Data Leakage on cross attention #14

Open
DanielWarfield1 opened this issue Jan 28, 2024 · 0 comments

Comments


DanielWarfield1 commented Jan 28, 2024

Hello! Thanks for making this.

I was looking through MaskedCrossAttention, and I noticed that you generate the keys and values from the media via a dense layer:

k, v = self.to_kv(media).chunk(2, dim = -1)

After this point you calculate the attention matrix, build the masks, etc.

My question is: isn't the point of Flamingo that a given text token attends only to the immediately preceding image? If all media is passed through a dense network to produce the keys and values, doesn't that imply that information from any image could be present anywhere in the resulting keys and values? If so, masking would seem moot, since you would be masking by location an embedding that has already mixed all media inputs together. Am I missing something?
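To make the question concrete, here is a minimal sketch (not the repository's actual module) of the projection in question, assuming `media` has shape `(batch, num_media, num_latents, dim)` and `to_kv` is a plain `nn.Linear`. Note that `nn.Linear` acts only on the last dimension, so each media token would be projected independently of the others:

```python
import torch
import torch.nn as nn

dim, inner_dim = 64, 128  # hypothetical sizes for illustration

# Assumed form of the projection: a linear map over the feature dim only.
to_kv = nn.Linear(dim, inner_dim * 2, bias=False)

media = torch.randn(2, 3, 4, dim)  # batch=2, 3 images, 4 latents per image
k, v = to_kv(media).chunk(2, dim=-1)

# The per-image layout is preserved: (batch, num_media, num_latents, inner_dim)
assert k.shape == (2, 3, 4, inner_dim)
assert v.shape == (2, 3, 4, inner_dim)

# Empirical check: zeroing out image 0's tokens leaves the keys computed
# for images 1 and 2 unchanged, i.e. no cross-image mixing in this layer.
media2 = media.clone()
media2[:, 0] = 0
k2, _ = to_kv(media2).chunk(2, dim=-1)
assert torch.allclose(k[:, 1:], k2[:, 1:])
```

Under that assumption the projection is token-wise, so positional masking after it would still be meaningful; the question is whether `to_kv` in this repository is in fact such a per-token map or something that mixes across media positions.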
