2012 Data (Information)
Finding Emotion in Image Descriptions: Crowdsourced Data
This dataset contains 660 images, each annotated with descriptions and mood labels.
The images were originally created by users of the WordsEye text-to-scene system (https://www.wordseye.com/) and were downloaded from the WordsEye gallery.
For each image, we used Amazon Mechanical Turk to obtain:
(a) a literal description that could function as a caption for the image,
(b) the most relevant mood for the picture (happiness, sadness, anger, surprise, fear, or disgust),
(c) a short explanation of why that mood was selected.
We published three AMT HITs per picture, for a total of 1,980 annotation sets, each consisting of a caption, a mood label, and an explanation.
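To illustrate how the annotations might be consumed, here is a minimal Python sketch that loads the CSV and tallies the six mood labels. The column names used below ("image", "caption", "mood", "explanation") are assumptions for illustration only; check the header row of mood-annotation.csv for the actual field names.

```python
import csv
from collections import Counter

# The six moods annotators could choose from.
MOODS = {"happiness", "sadness", "anger", "surprise", "fear", "disgust"}

def load_annotations(path="mood-annotation.csv"):
    """Read the annotation CSV into a list of dicts.

    With three HITs per image, expect 3 * 660 = 1980 rows.
    """
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def mood_distribution(rows):
    """Count how often each of the six moods was selected.

    Each row is assumed to have a "mood" field; rows with labels
    outside the six-mood inventory are ignored.
    """
    counts = Counter()
    for row in rows:
        mood = row["mood"].strip().lower()
        if mood in MOODS:
            counts[mood] += 1
    return counts
```

Because each image received three independent HITs, a natural follow-up is to group rows by image and take a majority vote over the three mood labels; the distribution above is the simpler per-row view.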
This data was used for the machine learning experiments presented in:
Morgan Ulinski, Victor Soto, and Julia Hirschberg. Finding Emotion in Image Descriptions. In Proceedings of the First International Workshop on Issues of Sentiment Discovery and Opinion Mining, WISDOM '12, pages 8:1-8:7.
Please cite this paper if you use this data.
- mood-annotation.csv (text/comma-separated-values, 692 KB)
- images.zip (application/zip, 32 MB)
More About This Work
- Academic Units: Computer Science
- Published Here: August 27, 2019
- Supplement to: Finding emotion in image descriptions