
Question about Cross-view Hybrid attention #29

Open
jianingwangind opened this issue Mar 22, 2023 · 3 comments

Comments

@jianingwangind

Thanks for sharing the great work.

Regarding the Cross-view Hybrid attention, is it only applied to the HW (top) plane?

The query attends to itself: key and value are both None, and later in cross-view hybrid attention the value is set to the concatenation of queries:

value = torch.cat([query, query], 0)
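
For reference, here is a minimal, hypothetical sketch (plain PyTorch multi-head attention, not the actual TPVFormer deformable-attention module; all names and shapes here are illustrative assumptions) of the pattern the snippet above suggests: when key and value are omitted, the value is rebuilt from the query itself, so the attention only mixes features within the same plane.

```python
import torch
import torch.nn as nn

class HybridAttentionSketch(nn.Module):
    """Hypothetical, simplified stand-in for cross-view hybrid attention."""

    def __init__(self, embed_dims=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dims, num_heads, batch_first=True)

    def forward(self, query, key=None, value=None):
        if value is None:
            # Mirrors `value = torch.cat([query, query], 0)` in spirit:
            # the value is derived from the query itself, not from the
            # queries of the other two planes.
            value = query
        if key is None:
            key = value
        out, _ = self.attn(query, key, value)
        return out

# Toy HW-plane queries: (batch, H*W, embed_dims)
hw_query = torch.randn(1, 30 * 30, 256)
out = HybridAttentionSketch()(hw_query)  # degenerates to self-attention over the HW plane
print(out.shape)  # torch.Size([1, 900, 256])
```
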

@yuhanglu2000

I have the same question; it looks like there is no interaction among the features of the three planes.

@huang-yh
Collaborator

Thanks for your interest in our work.
Your understanding of the code is correct: in TPVFormer04, cross-view hybrid attention is enabled only for the HW plane, so it degrades to self-attention.

@jianingwangind
Author

@huang-yh Thanks for your reply. May I further ask about the idea behind this? Did you observe similar performance when disabling the attention in the other two planes?
