Use of default clip markers as [0,1,1, 1...,1] #14
Yes.
Thank you. What about the official Caffe LSTM implementation (BVLC/caffe#2033)?
As far as I know, they have the same protocol.
Thank you. One last question. The Caffe code below is the LSTM unit layer implementation. I'm unable to determine whether the clip markers have a default value.
It seems like there is no default value in this code unless they provide a virtual bottom[2].
Ok, thank you very much. From BVLC/caffe#2033, it appears that providing the clip_markers is required: "RecurrentLayer requires 2 input (bottom) Blobs."
Hi @junhyukoh @aurotripathy. Is there support for accessing the hidden state at each timestep? Thanks.
This seems like the right place to get answers to Caffe LSTM questions :-). You can count on an answer.
I'm comparing the implementation of the LSTM layer in this repository with the (official, merged) one in Caffe. They are different.
Are they conceptually the same relative to the clip_marker implementation?
My question is: if the sequence lengths in the input are all the same (i.e., they don't vary) and they match the number of time-steps, do we still need to provide the clip_marker input (in the official Caffe version)?
Can the network assume it to be [0,1,1, 1...,1]?
My reason for asking is to debug the network: my own markers may be in error and are likely confusing the network.
Thank you.
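For reference, the [0,1,1,...,1] pattern being asked about corresponds to Caffe's sequence-continuation indicators: a 0 resets the recurrent state at the start of a sequence, and a 1 means "continue the current sequence." Below is a minimal numpy sketch of how such a marker blob could be built for fixed-length sequences that exactly span the unrolled timesteps; the helper name and shapes are illustrative, not part of Caffe's API:

```python
import numpy as np

def make_clip_markers(T, N):
    """Build sequence-continuation ("clip") markers, shaped (T, N):
    T timesteps, N independent streams. Row 0 is all zeros so the
    recurrent state is reset at the first timestep; every later row
    is all ones, continuing the sequence. Each stream therefore gets
    the pattern [0, 1, 1, ..., 1] down its column.
    """
    cont = np.ones((T, N), dtype=np.float32)
    cont[0, :] = 0  # reset state at the start of each sequence
    return cont

markers = make_clip_markers(T=4, N=2)
# markers[:, 0] is the [0, 1, 1, 1] pattern discussed above
```

If the sequences varied in length, the zeros would instead appear wherever a new sequence begins within a stream, which is why hand-built markers are a common source of bugs.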