Finding Emotion in Image Descriptions: Crowdsourced Data

Ulinski, Morgan Elizabeth; Soto Martinez, Victor; Hirschberg, Julia Bell

This dataset contains 660 images, each annotated with literal descriptions, mood labels, and explanations of the chosen mood.
The images were originally created by users of the WordsEye text-to-scene system (https://www.wordseye.com/) and were downloaded from the WordsEye gallery.

For each image, we used Amazon Mechanical Turk to obtain:
(a) a literal description that could function as a caption for the image,
(b) the most relevant mood for the picture (happiness, sadness, anger, surprise, fear, or disgust),
(c) a short explanation of why that mood was selected.
We published three AMT HITs for each picture, for a total of 1980 captions, mood labels, and explanations.
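
The annotations are distributed in a single CSV file (mood-annotation.csv, listed under Files below). As a minimal sketch of how one might load and inspect it, the Python snippet below reads the file with the standard csv module; the column names it checks for ("mood", etc.) are assumptions, so verify them against the actual header row before relying on them.

    import csv
    from collections import Counter

    # Load mood-annotation.csv; column names below are assumed, not confirmed.
    with open("mood-annotation.csv", newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        rows = list(reader)

    print("Columns:", reader.fieldnames)
    print("Total annotations:", len(rows))  # expected: 1980 (3 HITs x 660 images)

    # Tally the six mood labels if a "mood" column is present (assumed name).
    if "mood" in (reader.fieldnames or []):
        print(Counter(row["mood"] for row in rows))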

This data was used for the machine learning experiments presented in:
Morgan Ulinski, Victor Soto, and Julia Hirschberg. 2012. Finding Emotion in Image Descriptions. In Proceedings of the First International Workshop on Issues of Sentiment Discovery and Opinion Mining (WISDOM '12), pages 8:1-8:7.
Please cite this paper if you use this data.

Files

  • mood-annotation.csv (text/comma-separated-values, 692 KB)

More About This Work

Academic Units: Computer Science
Published Here: August 27, 2019