Recommendation as Generalization: Evaluating Cognitive Models in the Wild

Abstract

The explosion of data generated during human interactions online presents an opportunity for cognitive scientists to evaluate their models on popular real-world tasks outside the confines of the laboratory. We demonstrate this approach by evaluating two cognitive models of generalization against two machine learning approaches to recommendation on an online dataset of over 100K human playlist selections. Across two experiments we demonstrate that a model from cognitive science can both be efficiently implemented at scale and capture generalization trends in human recommendation judgments that neither machine learning model replicates. We use these results to illustrate the opportunity internet-scale datasets offer to cognitive scientists, as well as to underscore the importance of using insights from cognitive modeling to supplement the standard predictive-analytic approach taken by many existing machine learning methods.
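The abstract does not name the specific cognitive models of generalization it evaluates. As a purely illustrative, hypothetical sketch, the Python snippet below assumes a Bayesian generalization model in the spirit of Tenenbaum and Griffiths' size principle, treating existing playlists as hypotheses about the latent set from which a few seed songs were drawn; the function name generalization_scores and the toy data are invented for illustration and are not taken from the paper.

# Hypothetical sketch of "recommendation as generalization" (assumption:
# a Tenenbaum & Griffiths-style Bayesian generalization model; the paper's
# abstract does not specify its models). Each playlist is a hypothesis about
# the latent set containing the seed songs.
from collections import defaultdict

def generalization_scores(seed_songs, playlists):
    """Score candidate songs by the probability that they belong to the same
    latent set as the seed songs, averaging over playlist hypotheses.

    seed_songs: set of song ids observed so far (a partial playlist).
    playlists:  list of sets of song ids (the hypothesis space).
    """
    n = len(seed_songs)
    # Strong-sampling likelihood (the size principle): only playlists that
    # contain all seeds are consistent, and smaller playlists are favored.
    weights = [(1.0 / len(h)) ** n if seed_songs <= h else 0.0 for h in playlists]
    total = sum(weights)
    if total == 0:
        return {}  # no consistent hypothesis
    # Generalization probability of a new song = posterior mass of the
    # hypotheses that also contain that song.
    scores = defaultdict(float)
    for h, w in zip(playlists, weights):
        for song in h - seed_songs:
            scores[song] += w / total
    return dict(scores)

# Toy usage: two seed songs and three playlist hypotheses.
playlists = [{"a", "b", "c"}, {"a", "b", "c", "d", "e"}, {"x", "y"}]
print(generalization_scores({"a", "b"}, playlists))
# "c" appears in every consistent playlist and scores 1.0; "d" and "e" score
# lower because they appear only in the larger, less specific playlist.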
