# Cross-modal Coherence Modeling for Caption Generation

We use coherence relations inspired by computational models of discourse to study the information needs and goals of image captioning. Using an annotation protocol specifically devised for capturing image–caption coherence relations, we annotate 10,000 instances from publicly available image–caption pairs. We show that these coherence annotations can be exploited to learn relation classifiers as an intermediary step, and also to train coherence-aware, controllable image captioning models. The results show a dramatic improvement in the consistency and quality of the generated captions with respect to information needs specified via coherence relations.
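To make the controllable-captioning idea concrete, the sketch below conditions a standard encoder–decoder captioner on a coherence relation by prepending a relation embedding to the decoder input. This is a minimal illustration of the general technique, not the model in this repository; the relation inventory, dimensions, and all names here are assumptions.

```python
# Minimal sketch (assumptions throughout): a relation-conditioned caption
# decoder. The relation inventory is illustrative, and this module is not
# the repository's actual model.
import torch
import torch.nn as nn

RELATIONS = ["Visible", "Subjective", "Action", "Story", "Meta"]  # assumed inventory

class CoherenceAwareCaptioner(nn.Module):
    def __init__(self, vocab_size, hidden=512, img_feat_dim=2048):
        super().__init__()
        self.img_proj = nn.Linear(img_feat_dim, hidden)         # image features -> initial state
        self.rel_embed = nn.Embedding(len(RELATIONS), hidden)   # coherence relation embedding
        self.tok_embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, img_feats, relation_ids, caption_ids):
        # Prepend the relation embedding to the caption tokens so the
        # decoder conditions generation on the requested relation.
        h0 = torch.tanh(self.img_proj(img_feats)).unsqueeze(0)  # (1, B, H)
        rel = self.rel_embed(relation_ids).unsqueeze(1)         # (B, 1, H)
        toks = self.tok_embed(caption_ids)                      # (B, T, H)
        states, _ = self.rnn(torch.cat([rel, toks], dim=1), h0)
        return self.out(states)                                 # logits over vocab
```

At inference time, feeding the same image with a different relation id steers what the caption foregrounds, which is the sense in which generation is controllable via coherence relations.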