Hi,
If I want to compare the robustness of the vanilla RNN to that of the vanilla RNN without the activation function in the recurrent step, should I use the closed-form global bounds from Theorem A.1 (page 7) in the appendix for both architectures?
Or should I use Theorem A.2 (page 7) from the appendix for the RNN without the activation function? (But if I set 'v' to 'm' in Theorem A.2 in order to get closed-form global bounds for the RNN without the activation function, I just recover Theorem A.1.)
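For reference, here is a minimal sketch of the two recurrences I am comparing (this is just my own illustration, assuming a tanh nonlinearity for the vanilla case; the class and argument names are made up, not from the paper's code):

```python
import torch
import torch.nn as nn

class VanillaRNNCell(nn.Module):
    """One recurrent step: h_t = tanh(W_h h_{t-1} + W_x x_t + b).

    With use_activation=False the recurrence is purely linear,
    h_t = W_h h_{t-1} + W_x x_t + b, which is the "no activation" variant
    I want to bound.
    """
    def __init__(self, input_size, hidden_size, use_activation=True):
        super().__init__()
        self.W_x = nn.Linear(input_size, hidden_size, bias=True)
        self.W_h = nn.Linear(hidden_size, hidden_size, bias=False)
        self.use_activation = use_activation

    def forward(self, x_t, h_prev):
        pre = self.W_x(x_t) + self.W_h(h_prev)
        # Vanilla RNN applies tanh; the variant without the recurrent
        # activation simply returns the pre-activation.
        return torch.tanh(pre) if self.use_activation else pre
```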
I would appreciate some advice on this topic.
Thanks!