Att_v = tf.contrib.layers.conv2d(G, num_outputs=opt.num_class, kernel_size=[opt.ngram], padding='SAME', activation_fn=tf.nn.relu)  # b * s * c
The implementation code above is a conv2d operation on the match matrix G,
while the formulation in the paper below seems to use only one filter of size (2r+1) to produce a further match matrix (K × L). They look a little different to me. Is that true?
$$u = \mathrm{ReLU}(GW + b)$$
where $W \in \mathbb{R}^{2r+1}$, $b \in \mathbb{R}^{K}$, and $u_l \in \mathbb{R}^{K}$.
@tc-yue Hello, I have the same confusion as you. I think the two operations are different. In the paper, the single filter of size (2r+1) only captures contextual features, while in the implementation the K filters capture both contextual features and relationships among the categories. Have you found a reasonable explanation?
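To make the difference concrete, here is a minimal NumPy sketch (not the repository's code) contrasting the two readings: the paper's single shared filter of width 2r+1 applied per class channel, versus a full convolution with K output filters that each span all K class channels, as `tf.contrib.layers.conv2d` with `num_outputs=opt.num_class` does. All variable names here (`G`, `W_paper`, `W_impl`, etc.) are illustrative assumptions, with toy shapes L=7, K=4, r=1:

```python
import numpy as np

rng = np.random.default_rng(0)
L_seq, K, r = 7, 4, 1                    # sequence length, num classes, half window
G = rng.standard_normal((L_seq, K))      # match matrix G (L x K)

def relu(x):
    return np.maximum(x, 0.0)

# Pad the sequence axis by r on each side, i.e. 'SAME' padding.
G_pad = np.pad(G, ((r, r), (0, 0)))

# Paper's reading: ONE shared filter W in R^{2r+1}, applied per class channel,
# so it mixes neighbouring positions but never mixes the K classes.
W_paper = rng.standard_normal(2 * r + 1)
b_paper = rng.standard_normal(K)
u_paper = np.empty_like(G)
for l in range(L_seq):
    window = G_pad[l : l + 2 * r + 1]              # (2r+1, K)
    u_paper[l] = relu(W_paper @ window + b_paper)  # per-channel weighted sum

# Implementation's reading: K filters, each spanning all K input channels,
# so every output class mixes information from all classes in the window.
W_impl = rng.standard_normal((2 * r + 1, K, K))    # (width, in_ch, out_ch)
b_impl = rng.standard_normal(K)
u_impl = np.empty_like(G)
for l in range(L_seq):
    window = G_pad[l : l + 2 * r + 1]              # (2r+1, K)
    u_impl[l] = relu(np.einsum("wk,wkc->c", window, W_impl) + b_impl)

# The parameter counts already show the gap: (2r+1)+K  vs  (2r+1)*K*K + K.
n_paper = (2 * r + 1) + K
n_impl = (2 * r + 1) * K * K + K
print(n_paper, n_impl)  # 7 52
```

Both variants produce an L × K output, so the shapes alone do not reveal the difference; it is the cross-channel mixing (and the much larger parameter count) that makes the conv2d version strictly more expressive than the single filter described in the paper.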