Latent feature models for dyadic prediction
Abstract
Following the Netflix prize, the collaborative filtering problem has gained significant attention within machine learning, spawning novel models and theoretical analyses. In parallel, the growth of social media has driven research in link prediction, which aims to determine whether two individuals in a network are likely to know each other. Both problems involve predicting a label (star ratings or friendship) for a pair of entities (user-movie or user-user). We call this general problem dyadic prediction. The problem arises in several other guises: predicting student responses to test questions, military disputes between nations, and clickthrough rates of ads on webpages, to name a few. In general, each such domain employs a markedly different approach, obscuring the underlying similarity of the problems being solved. This dissertation explores the use of a single general method, based on latent feature modelling, for generic dyadic prediction problems. To this end, we make three contributions. First, we propose a generic framework with which to analyze dyadic prediction problems. This lets one reason about seemingly disparate problems in a unified manner. Second, we propose a model based on the log-linear framework, which is applicable to each of the aforementioned problems. The model learns latent features from dyadic data and estimates a probability distribution over labels. Third, we systematically explore applications of our latent feature model to domains such as collaborative filtering, link prediction, and clickthrough rate prediction. In all cases, we show performance comparable or superior to existing state-of-the-art methods. For clickthrough rate prediction, ours is the first application of latent feature modelling to the problem, demonstrating the value of a single framework with which to reason about these problems. We also show that latent feature modelling scales to datasets with hundreds of millions of observations on a single machine (the Netflix prize dataset), and hundreds of billions of observations on a small cluster (Yahoo! ad click data). We conclude with a discussion of future research directions, including transferring information from one network to another and adapting to domains with extreme label sparsity.
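To make the modelling idea concrete, the sketch below shows one plausible instantiation of a log-linear latent feature model for dyadic label prediction. The function names, latent dimension, and training details are illustrative assumptions, not the dissertation's exact formulation: each row and column entity is given a latent vector, their elementwise interaction is scored by per-label weights, and a softmax turns the scores into a probability distribution over labels, trained by stochastic gradient descent on the log-loss.

# A minimal sketch (assumed, not the dissertation's exact model) of a
# log-linear latent feature model for dyadic prediction.
import numpy as np

rng = np.random.default_rng(0)

def fit(triples, n_rows, n_cols, n_labels, dim=8, lr=0.05, reg=1e-4, epochs=10):
    """triples: list of (i, j, y) observed dyads with integer labels y."""
    U = 0.1 * rng.standard_normal((n_rows, dim))    # row-entity latent features
    V = 0.1 * rng.standard_normal((n_cols, dim))    # column-entity latent features
    W = 0.1 * rng.standard_normal((n_labels, dim))  # per-label weights on the interaction
    b = np.zeros(n_labels)                          # per-label bias
    for _ in range(epochs):
        rng.shuffle(triples)
        for i, j, y in triples:
            Ui, Vj = U[i].copy(), V[j].copy()
            z = Ui * Vj                          # elementwise interaction features
            scores = W @ z + b                   # log-linear score for each label
            p = np.exp(scores - scores.max())
            p /= p.sum()                         # softmax: distribution over labels
            err = p.copy()
            err[y] -= 1.0                        # gradient of the log-loss wrt scores
            grad_z = err @ W                     # gradient wrt the interaction vector
            W -= lr * (np.outer(err, z) + reg * W)
            b -= lr * err
            U[i] -= lr * (grad_z * Vj + reg * Ui)
            V[j] -= lr * (grad_z * Ui + reg * Vj)
    return U, V, W, b

def predict_proba(i, j, U, V, W, b):
    """Probability distribution over labels for the dyad (i, j)."""
    scores = W @ (U[i] * V[j]) + b
    p = np.exp(scores - scores.max())
    return p / p.sum()

# Example usage: three labels (e.g. star ratings 1-3) over a tiny 4 x 5 dyadic matrix.
data = [(0, 1, 2), (1, 3, 0), (2, 2, 1), (3, 0, 2), (0, 4, 1)]
U, V, W, b = fit(data, n_rows=4, n_cols=5, n_labels=3)
print(predict_proba(0, 1, U, V, W, b))

Under this view, collaborative filtering, link prediction, and clickthrough rate prediction all reduce to the same interface of (row entity, column entity, label) triples; only the entity types and the label set change between domains.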