Open Access Publications from the University of California

Recommendation in Dialogue Systems

  • Author(s): Sun, Yueming
  • Advisor(s): Zhang, Yi

Dialogue systems have been an active research field for decades and have developed rapidly in recent years, owing to breakthroughs in deep learning techniques. How to make recommendations in dialogue systems is attracting increasing attention, because such systems can meet a wide range of user information needs and have substantial commercial potential.

Current dialogue system research typically focuses on building systems for social conversation, question answering, and performing specific tasks. However, making recommendations to users, an important information need, has not been intensively studied. Meanwhile, traditional recommender systems are usually developed for non-conversational scenarios. In this dissertation, we explore how to integrate these two kinds of systems into one framework that specifically aims at making recommendations in dialogues. Such a system helps users find items by chatting with them to understand their preferences and recommending accordingly.

First, we build conversational recommendation datasets, because existing dialogue datasets lack user-item preference information and dialogue utterances discussing facets of items, while current recommendation datasets lack dialogue scripts associated with each user-item pair. We build the datasets by asking crowdsourcing workers to compose dialogue utterances based on schemas, and then use a delexicalization approach to simulate dialogues from the collected utterances. The datasets are used to train the natural language understanding component and to provide recommendation information for our system.
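The delexicalization-and-simulation step above can be sketched roughly as follows. This is an illustrative toy, not the dissertation's actual pipeline: the templates, slot names, and facet schema here are all hypothetical assumptions.

```python
# Illustrative sketch of delexicalization-based dialogue simulation.
# Templates, slot names, and facets are hypothetical, not from the dissertation.
import random

# Crowdsourced utterances, delexicalized: concrete facet values have been
# replaced with slot placeholders such as <category> or <brand>.
TEMPLATES = [
    "I am looking for a <category> in <color>.",
    "Do you have anything from <brand>?",
]

def lexicalize(template: str, facets: dict) -> str:
    """Fill slot placeholders with concrete facet values for one item."""
    utterance = template
    for slot, value in facets.items():
        utterance = utterance.replace(f"<{slot}>", value)
    return utterance

def simulate_dialogue(facets: dict, n_turns: int = 2) -> list:
    """Sample templates and lexicalize them to build one simulated dialogue."""
    return [lexicalize(random.choice(TEMPLATES), facets) for _ in range(n_turns)]

dialogue = simulate_dialogue({"category": "jacket", "color": "black", "brand": "Acme"})
```

In this sketch, one pool of delexicalized utterances can generate many distinct training dialogues simply by pairing the templates with different items' facet values.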

Based on the collected datasets, we propose a reinforcement learning based conversational recommendation framework. The framework has three components: a belief tracker, a dialogue manager, and a recommender. The dialogue agent first learns to chat with a user to understand her preferences, and when it is confident enough, it recommends a list of items to the user. We conduct both offline and online experiments to demonstrate the effectiveness of the framework.
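The interaction among the three components can be sketched as below. The class interfaces, the facet list, and the hand-written "ask until all facets are known" rule are illustrative assumptions; in the actual framework the dialogue manager's policy is learned with reinforcement learning rather than hard-coded.

```python
# Toy sketch of the three-component framework: belief tracker, dialogue
# manager, recommender. All interfaces and rules are illustrative.
FACETS = ["category", "color", "brand"]  # hypothetical facet schema

class BeliefTracker:
    """Accumulates the user's stated facet preferences across turns."""
    def __init__(self):
        self.belief = {}

    def update(self, facet: str, value: str):
        self.belief[facet] = value

class Recommender:
    """Ranks candidate items by how many believed facets they match."""
    def __init__(self, items: list):
        self.items = items

    def top_k(self, belief: dict, k: int = 3) -> list:
        score = lambda item: sum(item.get(f) == v for f, v in belief.items())
        return sorted(self.items, key=score, reverse=True)[:k]

class DialogueManager:
    """Toy policy: ask about an unknown facet until all are known, then
    recommend. A learned RL policy would replace this hand-written rule."""
    def act(self, belief: dict):
        unknown = [f for f in FACETS if f not in belief]
        return ("ask", unknown[0]) if unknown else ("recommend", None)
```

Each turn, the tracker updates the belief from the user's utterance, the manager chooses between asking about another facet or recommending, and on a "recommend" action the recommender produces the ranked item list.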

We further extend this framework with a personalized probabilistic recommender module. This recommender learns to predict the probability that a user likes an item, given both the dialogue utterance information and the personalized user preference information. By leveraging this hybrid information, both recommendation and dialogue performance are further improved. We evaluate the dialogue agent's strength in various simulated environments as well as in online user studies, and demonstrate the advantages of this approach.
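One minimal way to combine the two information sources is a logistic model over a dialogue-derived feature and a per-user preference term, sketched below. The feature names and fixed weights are illustrative assumptions; the dissertation's module learns its parameters from data.

```python
# Minimal sketch of a probabilistic like-prediction combining dialogue
# evidence with a personalized user term. Weights are illustrative, not learned.
import math

def like_probability(user_pref: float, facet_match_score: float,
                     w_dialogue: float = 1.0, w_user: float = 1.0) -> float:
    """Logistic model: P(user likes item | dialogue, user preference).

    facet_match_score summarizes how well the item matches facets stated
    in the dialogue; user_pref summarizes the user's historical preference
    for this item. Both features are hypothetical stand-ins.
    """
    z = w_dialogue * facet_match_score + w_user * user_pref
    return 1.0 / (1.0 + math.exp(-z))
```

With no evidence from either source (both features zero), the model is indifferent and outputs 0.5; positive dialogue or preference evidence pushes the probability toward 1.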
