With the ever-increasing volume of user-generated text (e.g., product reviews, doctor notes, chat logs), there is a need to distill valuable semantic information from such unstructured sources. We initially focus on product reviews, which consist of concepts (or aspects) such as “screen brightness” and user opinions on these concepts such as “very positive”. First, we present a novel review summarization framework that advances the state of the art by leveraging a domain hierarchy of concepts to handle the semantic overlap among aspects and by accounting for different opinion levels. Second, we argue and empirically show that the current practice of soliciting customer opinions by asking them to write free-form text reviews is suboptimal, as a few aspects receive most of the ratings. We therefore propose several techniques to dynamically select which aspects users are asked to rate, given the current review history of a product.
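To make the idea of dynamic aspect selection concrete, here is a minimal sketch, not the proposed techniques themselves: it uses an assumed coverage-based heuristic that asks about the aspects with the fewest ratings so far, and the function name and data layout are illustrative.

```python
from collections import Counter

def select_aspects(review_history, all_aspects, k=3):
    """Illustrative heuristic: ask about the k aspects that have received
    the fewest ratings so far, so coverage spreads beyond the handful of
    popular aspects that dominate free-form reviews."""
    counts = Counter({aspect: 0 for aspect in all_aspects})
    for review in review_history:          # each review: {aspect: rating}
        counts.update(review.keys())
    # Least-rated aspects first; ties broken alphabetically for determinism.
    ranked = sorted(all_aspects, key=lambda a: (counts[a], a))
    return ranked[:k]

# Example: with most ratings on "battery", under-covered aspects are requested next.
history = [{"battery": 5}, {"battery": 4, "screen brightness": 3}, {"battery": 2}]
print(select_aspects(history, ["battery", "screen brightness", "keyboard", "speakers"]))
# -> ['keyboard', 'speakers', 'screen brightness']
```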
The last body of work leverages user chat logs to continuously optimize the workflow of a goal-oriented chatbot, such as a pizza-ordering bot. On the one hand, diagram-based chatbots are simple and interpretable but support only a limited set of predefined conversation scenarios. On the other hand, state-of-the-art Reinforcement Learning (RL) models can handle more scenarios but are not interpretable. We propose a hybrid method that enforces workflow constraints in the chatbot and uses RL to select the best chatbot response subject to those constraints.
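As a rough illustration of the hybrid idea, not the actual model, the sketch below first filters out responses that violate the workflow in the current dialogue state and then lets learned Q-values choose among the remaining ones; the state names, `q_values`, and `allowed` structures are assumptions made for this example.

```python
def choose_response(state, candidate_responses, q_values, allowed):
    """Illustrative constrained selection: the workflow filters which
    responses are permitted in this state, then RL-style Q-values pick
    the best response among the permitted ones."""
    valid = [r for r in candidate_responses if r in allowed.get(state, set())]
    if not valid:                      # workflow dead-end: fall back to a safe prompt
        return "clarify_request"
    return max(valid, key=lambda r: q_values.get((state, r), float("-inf")))

# Example: even if "confirm_order" has the highest Q-value overall, the
# workflow only permits it once the delivery address has been collected.
allowed = {"awaiting_address": {"ask_address", "clarify_request"},
           "address_collected": {"confirm_order", "ask_address"}}
q_values = {("awaiting_address", "confirm_order"): 0.9,
            ("awaiting_address", "ask_address"): 0.7,
            ("address_collected", "confirm_order"): 0.95}
print(choose_response("awaiting_address",
                      ["confirm_order", "ask_address"], q_values, allowed))
# -> 'ask_address'
```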