The user data included here corresponds to the following paper:
Beyond Memorability: Visualization Recognition and Recall.
Borkin, M.*, Bylinskii, Z.*, Kim, N.W., Bainbridge, C.M., Yeh, C.S., Borkin, D., Pfister, H., & Oliva, A.
IEEE Transactions on Visualization and Computer Graphics (Proceedings of InfoVis 2015)
Please cite this paper if you use this data.
On this public GitHub repository we can only provide the metadata and labels. To obtain the source images, please fill out the request form.
By using this dataset, you are agreeing to the following license agreement:
Access to, and use of, the images and annotations in this dataset is for research and educational purposes only. No commercial use, reproduction, or distribution of the images, or any modifications thereof, is permitted.*
*To use any of these images in a research paper or technical report, do not exceed thumbnail size.
This data contains taxonomic labels and attributes for 393 visualizations, as described in the README.
These include the source, category, and type of each visualization, as well as the following attributes: data-ink ratio, number of distinctive colors, black & white, visual density, human-recognizable object (HRO), and human depiction. We also provide the transcribed title of each visualization and the title's location on the visualization, as well as whether the visualization contained data or message redundancy. At-a-glance memorability scores (after 1 second of viewing) and prolonged memorability scores (after 10 seconds of viewing) are also included.
We additionally include the eye tracking data and the user-generated text descriptions.
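As a starting point, the per-visualization labels and attributes can be loaded and filtered with a few lines of Python. The snippet below is a minimal sketch only: the file layout, column names (`vis_id`, `hro`, etc.), and value encodings are assumptions for illustration and may not match the actual files in this repository.

```python
import csv
from io import StringIO

# Hypothetical metadata snippet; the real column names and encodings
# may differ -- consult the files in this repository.
SAMPLE = """\
vis_id,source,category,type,black_white,hro,human_depiction
vis_001,news,bar,bar chart,0,1,0
vis_002,scientific,diagram,flow chart,1,0,0
"""

def load_metadata(fileobj):
    """Read per-visualization metadata rows into a list of dicts."""
    return list(csv.DictReader(fileobj))

def with_hro(rows):
    """Keep only visualizations labeled as containing a
    human-recognizable object (assumed '1' = present)."""
    return [r for r in rows if r["hro"] == "1"]

rows = load_metadata(StringIO(SAMPLE))
print([r["vis_id"] for r in with_hro(rows)])  # → ['vis_001']
```

For the real data, replace `StringIO(SAMPLE)` with an open file handle to the metadata file distributed here.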