lingxiangwu/e2e-gLSTM-sc

 
 

This repository contains the code for end-to-end gLSTM (e2e-gLSTM) and sentence-conditional semantic attention, as described in the paper "Image Caption Generation with Text-Conditional Semantic Attention". train_new.lua is the main training script for e2e-gLSTM, and train_sc.lua is the main training script for sentence-conditional semantic attention. The implementation is based on Neuraltalk2 (https://github.com/karpathy/neuraltalk2); please follow the Neuraltalk2 instructions to prepare the data and run the code (a usage sketch is given below). If you have trouble running the code, contact me (http://luoweizhou.net/contact.html). Please cite the paper (http://arxiv.org/abs/1606.04621) if you use the code.
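A minimal usage sketch, assuming the data has been preprocessed as in Neuraltalk2 and that these scripts accept the same command-line flags as Neuraltalk2's train.lua (the file paths below are placeholders):

    # end-to-end gLSTM
    th train_new.lua -input_h5 coco/data.h5 -input_json coco/data.json

    # sentence-conditional semantic attention
    th train_sc.lua -input_h5 coco/data.h5 -input_json coco/data.json

See the Neuraltalk2 README for the full list of training and evaluation options.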

About

Code for paper "Image Caption Generation with Text-Conditional Semantic Attention"


Languages

  • Jupyter Notebook 51.5%
  • Lua 44.3%
  • Python 3.4%
  • Other 0.8%