
RNN LSTM layer #1653

Closed
bhack opened this issue Dec 29, 2014 · 10 comments
bhack (Contributor) commented Dec 29, 2014

@jeffdonahue @sguada Do you have code to share for your publication http://arxiv.org/abs/1411.4389?

@jeffdonahue (Contributor)

Thanks for your interest in our work! We definitely plan to release code, but there's quite a bit of work to do to get it into a reasonably sane state -- there will be PRs once it's ready.

bittnt commented Dec 30, 2014

@jeffdonahue Could you offer some advice for those of us who would like to integrate Caffe with karpathy's LSTM code? Cheers.

bhack (Contributor, Author) commented Dec 30, 2014

bittnt commented Dec 30, 2014

@bhack Yes. I think someone has already hacked it together. :))

@cbalint13 (Contributor)

You should take a look at this: https://github.com/dophist/kaldi-lstm (it includes CUDA code too; I think it's the best one).

@nakosung

Nice work from U-Mich: https://github.com/junhyukoh/caffe-lstm

bhack (Contributor, Author) commented Jan 26, 2015

@junhyukoh Do you plan to contribute this back to Caffe with a PR?

@sunbaigui

@junhyukoh looking forward to your merge :D

@junhyukoh

@bhack @sunbaigui Thank you for your interest!
However, I don't think my current implementation fits Caffe perfectly.
Since I'm using a mini-batch as a single training sequence (an unrolled RNN), my code supports only plain SGD, not mini-batch updates (one update after processing several training sequences).
I plan to rewrite the code and open a PR when it's ready, but I can't guarantee a timeline.
Feel free to use my code; any comments are welcome!
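
For concreteness, here is a minimal NumPy sketch of the distinction (not the actual caffe-lstm code; the names `Wx`, `Wh`, and `sequence_grad` are made up for illustration, and the gradient is a placeholder rather than real backpropagation through time):

```python
import numpy as np

# Toy unrolled RNN: the "batch" axis carries the T time steps of ONE
# sequence, so each forward/backward pass covers exactly one sequence.
T, D, H = 8, 4, 5                         # time steps, input dim, hidden dim
rng = np.random.default_rng(0)
Wx = rng.normal(scale=0.1, size=(D, H))   # input-to-hidden weights (hypothetical)
Wh = rng.normal(scale=0.1, size=(H, H))   # hidden-to-hidden weights (hypothetical)

def sequence_grad(x_seq):
    """Run one sequence forward and return a placeholder gradient for Wx.

    A real implementation would backpropagate through time; the point
    here is only that one call yields the gradient of ONE sequence.
    """
    h = np.zeros(H)
    for x_t in x_seq:                     # unrolled over time, not over samples
        h = np.tanh(x_t @ Wx + h @ Wh)
    return np.outer(x_seq.mean(axis=0), h)

lr = 0.01

# SGD, as in the layout described above: one update per sequence.
for x_seq in rng.normal(size=(3, T, D)):  # 3 training sequences
    Wx -= lr * sequence_grad(x_seq)

# True mini-batch update: accumulate gradients over N sequences,
# then apply a single update.
N = 3
batch = rng.normal(size=(N, T, D))
Wx -= lr * np.mean([sequence_grad(x_seq) for x_seq in batch], axis=0)
```

The first loop is what the comment above calls SGD; the second is the mini-batch behavior the planned rewrite would need to support.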

@jeffdonahue (Contributor)

Closing; see #1873 for (a cleaned-up version of) the implementation we used for LRCN.
