Hey, I think the implementation differs from the original paper. According to the paper, the output attention is computed from the output of the LSTM at the current time step together with the attributes. But your implementation here computes it from the output of the LSTM at the previous time step and the attributes.
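To make the difference concrete, here is a minimal sketch of the two variants. All names (`attend`, `lstm_step`, the weight matrices) are hypothetical stand-ins, not the repo's actual code: the point is only that the paper scores attributes against the current hidden state h_t, while the issue says the code scores them against the previous state h_{t-1}.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 8, 5, 8  # hidden size, number of attributes, attribute dim

# Hypothetical parameters (assumptions for illustration, not the repo's values).
W_h = rng.standard_normal((d, d)) * 0.1
W_a = rng.standard_normal((k, d)) * 0.1
attrs = rng.standard_normal((n, k))  # attribute vectors

def attend(h):
    """Attribute attention: softmax over bilinear scores, then a weighted sum."""
    scores = attrs @ (W_a @ (W_h @ h))   # one score per attribute, shape (n,)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ attrs                     # context vector, shape (k,)

def lstm_step(h, x):
    """Toy recurrence standing in for the real LSTM cell."""
    return np.tanh(W_h @ h + x)

h_prev = np.zeros(d)
x_t = rng.standard_normal(d)
h_t = lstm_step(h_prev, x_t)

# Paper: output attention uses the CURRENT hidden state h_t.
ctx_paper = attend(h_t)

# Implementation described in the issue: attention uses the PREVIOUS state h_{t-1}.
ctx_impl = attend(h_prev)
```

Because `attend` is fed a different hidden state in each case, the two context vectors (and hence the predicted output distributions) generally differ, which is why the one-step offset matters.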