Racial Bias in Emotion Inference: An Experimental Study Using a Word Embedding Method

This work is licensed under a Creative Commons Attribution (CC BY) 4.0 license.
Abstract

We investigated racial bias in emotion inference by having participants describe the emotion of the person featured in an image; the featured individuals were Asian, Black, or White. We collected 4,197 sentences (63,900 tokens) and used the data to train Word2Vec, a neural-network-based word embedding model. To measure the strength of association, we calculated the cosine distance between emotion words and words indicating the target for each racial group. Although all images portrayed neutral emotions, the results show that negative emotion words were closer to words indicating Asian and Black targets, whereas neutral emotion words were closer to words indicating White targets. This result indicates that stereotypes contribute to racial bias in Artificial Intelligence, as crowdsourcing workers can generate annotations that depend on the featured person's race. Based on the present study, we suggest employing de-biasing methods such as data augmentation or removing bias components from word embeddings before deploying Artificial Intelligence in the real world.
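
The measurement pipeline the abstract describes can be illustrated with a short sketch. The following assumes gensim's Word2Vec implementation (4.x API) and uses hypothetical tokenized annotations and probe word lists in place of the study's actual data and lexicons; cosine distance is computed as one minus cosine similarity, so smaller values indicate stronger association.

```python
# A minimal sketch of the association measurement, assuming gensim 4.x.
# The annotations and word lists below are hypothetical placeholders,
# not the study's 4,197 collected sentences or its emotion lexicon.
from gensim.models import Word2Vec

# Each crowdsourced description is a list of lowercase tokens.
annotations = [
    ["the", "woman", "looks", "calm"],
    ["the", "man", "seems", "angry"],
    # ... remaining collected sentences
]

# Train a small skip-gram Word2Vec model on the descriptions.
model = Word2Vec(
    sentences=annotations,
    vector_size=100,
    window=5,
    min_count=1,
    sg=1,
    workers=4,
    seed=42,
)

# Hypothetical probe words standing in for the paper's word lists.
target_words = ["woman", "man"]    # words indicating the pictured person
emotion_words = ["calm", "angry"]  # neutral vs. negative emotion terms

# Cosine distance = 1 - cosine similarity; a smaller distance between
# a target word and an emotion word means a stronger association.
for target in target_words:
    for emotion in emotion_words:
        distance = 1.0 - model.wv.similarity(target, emotion)
        print(f"d({target}, {emotion}) = {distance:.3f}")
```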
