UMD

If you have any questions about the code, feel free to open an issue or contact me by email at chentsuei@gmail.com.

Official implementation of "User Attention-guided Multimodal Dialog Systems"

The code is being refactored; the new version will be published soon.

Data

The crawled images can be downloaded here (with the corresponding url2img.txt). The other data, provided by MMD, can be downloaded here.
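Once you have url2img.txt, the images can be fetched in bulk. Below is a hypothetical helper sketch, assuming each line of url2img.txt holds an image URL and a local filename separated by whitespace; verify that format against the actual file before use.

```python
# Sketch of a bulk downloader for the crawled images.
# ASSUMPTION: each line of url2img.txt is "<url> <filename>" separated
# by whitespace -- check the real file, this layout is a guess.
import os
import urllib.request


def parse_url2img(lines):
    """Parse (url, filename) pairs from url2img.txt-style lines."""
    pairs = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:
            pairs.append((parts[0], parts[1]))
    return pairs


def download_images(mapping_path, out_dir="images"):
    """Download every image listed in the mapping file into out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    with open(mapping_path) as f:
        pairs = parse_url2img(f)
    for url, name in pairs:
        dest = os.path.join(out_dir, name)
        if not os.path.exists(dest):  # skip files already downloaded
            urllib.request.urlretrieve(url, dest)
```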

Prerequisite

  • Python 3.5+
  • PyTorch 1.0
  • NLTK 3.4
  • Pillow (PIL) 5.3.0

How to run

Place the data files at the appropriate paths and set those paths in options/dataset_option.py, then run python train <task> <saved_model_file>.
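For example, a training run might look like the following. Note that the task name and model filename below are placeholders, not values documented by this repository; substitute the actual task identifier and a path of your choosing.

```shell
# Placeholder invocation -- <task> and the model filename are examples,
# not names taken from this repository.
python train text_task checkpoints/umd_model.pt
```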

Evaluation

The Perl script mteval-v14.pl is used to evaluate the text results. First extract the results from the log files, then convert them into XML files; convert.py is provided for convenience.
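The repository's convert.py is the authoritative converter; as a rough illustration of the target format, here is a minimal sketch that wraps extracted sentences in the seg/doc/set markup that mteval-v14.pl consumes. The setid, sysid, and docid values are placeholders.

```python
# Minimal sketch of converting plain-text outputs into the XML-style
# markup read by mteval-v14.pl. convert.py in this repository is the
# authoritative version; all id attributes below are placeholders.
from xml.sax.saxutils import escape


def to_mteval_xml(sentences, set_tag="tstset", setid="umd", sysid="umd",
                  srclang="en", trglang="en"):
    """Wrap one sentence per <seg> inside a single <doc> and set element."""
    lines = ['<{} setid="{}" srclang="{}" trglang="{}" sysid="{}">'.format(
        set_tag, setid, srclang, trglang, sysid)]
    lines.append('<doc docid="dialog" genre="dialog" sysid="{}">'.format(sysid))
    for i, sent in enumerate(sentences, 1):
        # escape() guards &, <, > so the output stays well-formed
        lines.append('<seg id="{}">{}</seg>'.format(i, escape(sent)))
    lines.append('</doc>')
    lines.append('</{}>'.format(set_tag))
    return "\n".join(lines)
```

The same function can emit the reference side by passing set_tag="refset".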
