Recently, I have been running some experiments with BERT and the Transformer on text_classification. I find that the position-wise feed-forward layer usually consists of two linear transformations with a ReLU activation in between, but you use a convolution instead. Was there any particular reasoning behind this change?
text_classification/a07_Transformer/a2_poistion_wise_feed_forward.py, lines 35 to 58 at commit 3e7911b
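For context: the two formulations are mathematically equivalent. "Attention Is All You Need" defines the position-wise feed-forward network as two linear transformations with a ReLU in between, and then notes that "another way of describing this is as two convolutions with kernel size 1." A convolution with kernel size 1 applies the same weight matrix independently at every sequence position, which is exactly what a per-position linear layer does. The sketch below (a minimal PyTorch illustration, not the repository's TensorFlow code; all names here are made up for the demo) checks this numerically:

```python
import torch
import torch.nn as nn

d_model, d_ff, seq_len, batch = 8, 32, 5, 2
x = torch.randn(batch, seq_len, d_model)

# Formulation 1: two linear layers with a ReLU in between
linear1 = nn.Linear(d_model, d_ff)
linear2 = nn.Linear(d_ff, d_model)
out_linear = linear2(torch.relu(linear1(x)))

# Formulation 2: two convolutions with kernel size 1 over the sequence axis
conv1 = nn.Conv1d(d_model, d_ff, kernel_size=1)
conv2 = nn.Conv1d(d_ff, d_model, kernel_size=1)

# Copy the linear weights into the convs so both paths compute the same function:
# Linear.weight has shape (out, in); Conv1d.weight has shape (out, in, kernel_size)
with torch.no_grad():
    conv1.weight.copy_(linear1.weight.unsqueeze(-1))
    conv1.bias.copy_(linear1.bias)
    conv2.weight.copy_(linear2.weight.unsqueeze(-1))
    conv2.bias.copy_(linear2.bias)

# Conv1d expects (batch, channels, seq_len), so transpose in and out
out_conv = conv2(torch.relu(conv1(x.transpose(1, 2)))).transpose(1, 2)

print(torch.allclose(out_linear, out_conv, atol=1e-6))  # True
```

So the choice is mostly an implementation detail: with kernel size 1 the conv and the linear version compute identical outputs, and a conv formulation only changes behavior if a kernel size greater than 1 is used, which would mix information across neighboring positions.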