eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

Automated Multi-task Learning

  • Author(s): Liang, Davis
  • Advisor(s): Cottrell, Garrison
Abstract

Multi-task learning (MTL) has recently contributed to learning better representations in service of various natural language processing (NLP) tasks. MTL aims to improve the performance of a primary task by jointly training on a secondary task. This paper introduces to the field of deep recurrent neural networks the concept of automated tasks, which exploit the sequential nature of the original input data, as secondary tasks in an MTL model. Specifically, we explore next word prediction, next character prediction, and missing word completion as potential automated tasks. Our results show that training on a primary task in parallel with a secondary automated task improves both the convergence speed and accuracy of the primary task. Furthermore, we suggest two methods for augmenting an existing network with automated tasks and establish better-than-baseline performance in topic prediction, sentiment analysis, and hashtag recommendation. Finally, we show that the MTL models can perform well on datasets like Twitter that are small and colloquial by nature. We claim that because every sequential dataset has associated automated tasks, automated MTL can be generalized to learn better representations for any sequential neural network model.
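The setup the abstract describes can be sketched as a shared recurrent encoder feeding two heads: a primary classifier (e.g. sentiment) and an automated secondary head (next word prediction), trained with a weighted joint loss. The sketch below is illustrative only, not the thesis's actual architecture; all parameter names, sizes, and the loss weight `lam` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

V, D, H, C = 50, 8, 16, 2   # vocab size, embedding dim, hidden dim, primary classes

# Shared parameters (used by both tasks)
E = rng.normal(scale=0.1, size=(V, D))    # embedding table
Wx = rng.normal(scale=0.1, size=(D, H))   # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(H, H))   # hidden-to-hidden weights

# Task-specific heads
W_primary = rng.normal(scale=0.1, size=(H, C))  # e.g. sentiment classifier
W_next = rng.normal(scale=0.1, size=(H, V))     # automated next-word prediction

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def encode(tokens):
    """Run the shared RNN; return the final state and all per-step states."""
    h = np.zeros(H)
    states = []
    for t in tokens:
        h = np.tanh(E[t] @ Wx + h @ Wh)
        states.append(h)
    return h, np.stack(states)

def joint_loss(tokens, label, lam=0.5):
    """Primary cross-entropy plus lam times the automated-task loss."""
    h_final, states = encode(tokens)
    # Primary task: classify the whole sequence from the final state
    p_primary = softmax(h_final @ W_primary)
    loss_primary = -np.log(p_primary[label])
    # Automated task: predict token t+1 from the hidden state at step t;
    # the targets come for free from the input sequence itself
    p_next = softmax(states[:-1] @ W_next)
    targets = tokens[1:]
    loss_next = -np.mean(np.log(p_next[np.arange(len(targets)), targets]))
    return loss_primary + lam * loss_next

tokens = rng.integers(0, V, size=10)
loss = joint_loss(tokens, label=1)
```

Because the secondary targets are derived from the input sequence itself, no extra labels are needed; in training, gradients from both losses would update the shared encoder, which is the mechanism by which the automated task can improve the primary task's representation.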
