I have a question about the evaluation of Table 2 in this paper. FlickrStyle10K provides only stylized text (7,000 captions) for training.
In the Table 2 evaluation, do you split these 7,000 captions into training and test sets?
I would like to know how the Table 2 evaluation was done.
Also, could you please publish the "annotations_path" data used in "embeddings_generator.py"? (e.g., humor_train.json, roman_train.json)
I'm sorry, I overlooked the appendix (A.2 Datasets and Evaluation Metrics).
You split the 7,000 captions, correct? The appendix states:

> Considering the FlickrStyle10K (Gan et al., 2017) dataset, we followed (Zhao et al., 2020), and split the dataset randomly to 6/7, and 1/7 of training and test sets, correspondingly.
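For anyone else trying to reproduce this, here is a minimal sketch of the random 6/7 train, 1/7 test split described in that passage. It assumes the annotations are a single JSON list of captions; the combined file name (`humor_all.json`) and the seed are my own placeholders, since neither the paper nor this thread specifies them:

```python
import json
import random

# Seed is an assumption for reproducibility; the paper does not state one.
random.seed(42)

# Hypothetical combined annotation file (a JSON list of captions).
with open("humor_all.json") as f:
    captions = json.load(f)

# Shuffle, then take 6/7 for training and the remaining 1/7 for testing.
random.shuffle(captions)
split = len(captions) * 6 // 7
train, test = captions[:split], captions[split:]

with open("humor_train.json", "w") as f:
    json.dump(train, f)
with open("humor_test.json", "w") as f:
    json.dump(test, f)

print(f"{len(train)} train / {len(test)} test captions")
```

With 7,000 captions this yields 6,000 for training and 1,000 for testing, but the exact membership depends on the seed, which is why releasing the actual splits would still help.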
If possible, could you please release the training and test sets?
Thank you very much for the wonderful work.