Multi-Dimensional Disentangled Representation Learning for Emotion Embedding Generation

Abstract

In the natural language processing (NLP) research community, disentangled representation learning has become commonplace in text style transfer and sentiment analysis. Previous studies have demonstrated the utility of extracting style from text corpora in order to augment context-dependent downstream tasks such as text generation. Within sentiment analysis specifically, disentangled representation learning has been shown to produce latent representations that improve downstream classification tasks. In this study, we build upon this existing framework by (1) investigating disentangled representation learning in the multi-dimensional task of emotion detection, (2) testing the robustness of this methodology across varying datasets, and (3) exploring the interpretability of the produced latent representations. We find that when we closely follow existing disentangled representation learning methods for sentiment analysis in a multi-class setting, performance decreases significantly, and we are unable to effectively distinguish content and style in our learned latent representations. Further work is necessary to determine the effectiveness of adversarial training for style disentanglement of text in multi-class settings.
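To make the setup referenced in the abstract concrete, the following is a minimal PyTorch sketch of adversarial style/content disentanglement of the kind commonly used in sentiment-analysis work: an encoder splits a sentence embedding into a "style" (emotion) latent and a "content" latent, an emotion classifier is trained on the style latent, and an adversary with gradient reversal discourages emotion information from leaking into the content latent. All module names, dimensions, and loss weights here are illustrative assumptions about the general technique, not the thesis's actual implementation.

# Illustrative sketch only; not the thesis's implementation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class DisentanglingEncoder(nn.Module):
    def __init__(self, input_dim=768, content_dim=64, style_dim=16, num_emotions=6):
        super().__init__()
        self.content_head = nn.Linear(input_dim, content_dim)   # "content" latent
        self.style_head = nn.Linear(input_dim, style_dim)       # "style"/emotion latent
        self.emotion_clf = nn.Linear(style_dim, num_emotions)   # main emotion classifier
        self.adversary = nn.Linear(content_dim, num_emotions)   # probes content for emotion

    def forward(self, x):
        content = torch.relu(self.content_head(x))
        style = torch.relu(self.style_head(x))
        emotion_logits = self.emotion_clf(style)
        # Gradient reversal trains the adversary to detect emotion in the content
        # latent while pushing the encoder to remove that information.
        adv_logits = self.adversary(GradReverse.apply(content))
        return emotion_logits, adv_logits

# One illustrative training step on placeholder data.
model = DisentanglingEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(32, 768)                  # assumed pooled sentence embeddings
y = torch.randint(0, 6, (32,))            # assumed multi-class emotion labels
emotion_logits, adv_logits = model(x)
loss = nn.functional.cross_entropy(emotion_logits, y) \
     + 0.1 * nn.functional.cross_entropy(adv_logits, y)   # adversarial term (weight assumed)
opt.zero_grad(); loss.backward(); opt.step()

In a binary sentiment setting the adversarial term is typically easy to balance; the abstract's finding is that this balance is harder to achieve once the classifier and adversary operate over multiple emotion classes.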
