
layer1_input #3

Open
LymanBin opened this issue Jul 24, 2019 · 1 comment

@LymanBin

If the batch size is 128, then q_embed holds the features of the 128 questions after word embedding, and d_embed holds the features of the 128 responses. After the concatenate call, the result does not pair each question with its corresponding response, so how does the first layer perform a one-dimensional convolution on this input? I don't understand this part well. Can you explain it?
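
For reference, here is a minimal sketch of the pattern I am asking about, assuming Keras; the sequence lengths, embedding size, and kernel size below are made up for illustration and are not taken from this repo:

```python
import tensorflow as tf
from tensorflow.keras import layers

batch, q_len, d_len, emb_dim = 128, 20, 40, 50

q_embed = tf.random.normal((batch, q_len, emb_dim))   # 128 embedded questions
d_embed = tf.random.normal((batch, d_len, emb_dim))   # 128 embedded responses

# Concatenating along axis=1 (the sequence axis) joins question i and
# response i end-to-end, so the batch correspondence is preserved.
layer1_input = tf.concat([q_embed, d_embed], axis=1)  # shape (128, 60, 50)

# Conv1D slides its window along the sequence axis; question and response
# features only mix inside the windows that straddle the q/d boundary.
conv = layers.Conv1D(filters=64, kernel_size=3, activation="relu")
layer1_output = conv(layer1_input)                    # shape (128, 58, 64)
print(layer1_output.shape)
```

If the concatenation is instead along the batch axis (axis=0), the 128 questions and 128 responses are stacked into 256 separate sequences and are never joined pairwise, which is what I suspect is happening here.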

@yaoguangzi

The paper says that q_embed and d_embed will interact with each other deeply through the one-dimensional convolution. However, it seems that the code does not realize this idea. Could the author explain it?
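
To make the concern concrete, here is a back-of-the-envelope count under the same made-up shapes as the sketch above, showing how few convolution windows actually cover tokens from both sentences:

```python
# Count how many Conv1D windows see tokens from both the question and
# the response after a sequence-axis concatenation (illustrative shapes).
q_len, d_len, kernel_size = 20, 40, 3

total_windows = q_len + d_len - kernel_size + 1   # 58 windows in total
mixed_windows = kernel_size - 1                   # only windows straddling the boundary
print(f"{mixed_windows}/{total_windows} windows mix q and d")  # 2/58, about 3%
```

With a small kernel, only those few boundary windows combine question and response features, which does not look like the deep interaction the paper describes.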
