Sequential recommendation algorithms aim to predict users' future behavior from their historical interactions. In particular, a recent line of work has achieved state-of-the-art performance on sequential recommendation tasks by adapting ideas from metric learning and knowledge-graph completion. These algorithms replace inner products with distance functions over low-dimensional embeddings, employing a simple translation dynamic to model how user preferences evolve over time.
In this thesis, we analyze the task of sequential recommendation and discuss TransRec, a recent algorithm that models each user as a linear translation vector over low-dimensional item embeddings. We present a variety of extensions to this model, increasing its capacity via additional features, neural networks, and session-based approaches. We evaluate these extensions on a range of datasets and present relevant qualitative analyses. Together, these extensions shed light on the translation framework and point to promising directions for future research.
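The translation dynamic described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the thesis implementation: the embedding values are random placeholders, and the function name `transrec_score` is hypothetical. It shows the core TransRec idea that a candidate next item is scored by how close the previous item's embedding, shifted by the user's translation vector, lands to the candidate's embedding.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, k = 5, 3
gamma = rng.normal(size=(n_items, k))  # item embeddings (placeholder values)
t_u = rng.normal(size=k)               # one learned translation vector per user

def transrec_score(prev_item, next_item, t_user, gamma):
    """Score a candidate next item: higher means the translated previous-item
    embedding (gamma[prev] + t_user) lies closer to the candidate's embedding."""
    diff = gamma[prev_item] + t_user - gamma[next_item]
    return -np.sqrt(diff @ diff)  # negative Euclidean distance

# Rank all candidate next items after item 0 for this user.
scores = [transrec_score(0, j, t_u, gamma) for j in range(n_items)]
ranking = np.argsort(scores)[::-1]  # best candidate first
```

In training, the embeddings and translation vectors would be learned so that observed (previous item, next item) transitions receive higher scores than unobserved ones.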
We also propose TransFM, a model that combines translation and metric-based approaches to sequential recommendation with Factorization Machines (FMs). Doing so lets us reap the benefits of FMs (in particular, the ability to straightforwardly incorporate content-based features) while building on the strong performance of translation-based models. We learn an embedding and translation space for each feature, replacing the inner product with the squared Euclidean distance to measure interaction strength. As with FMs, the model equation can be computed in linear time and optimized using standard techniques. Because TransFM operates on arbitrary feature vectors, content features can be incorporated without significant changes to the model itself. Empirically, the performance of TransFM improves significantly when content features are taken into account, outperforming state-of-the-art models on sequential recommendation tasks.
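The linear-time claim can be illustrated with a small NumPy sketch of the pairwise interaction term (the constant and first-order FM terms are omitted). This is a sketch under stated assumptions, not the thesis implementation: variable names (`V`, `Vp`, `U`) and both functions are illustrative, and the values are random placeholders. Each pair of active features contributes the squared Euclidean distance between the translated embedding of one feature and the embedding of the other; expanding that square lets the O(n²k) double sum be computed in O(nk) with suffix sums.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 4                    # n features, k-dimensional embeddings
x = rng.normal(size=n)         # arbitrary real-valued feature vector
V = rng.normal(size=(n, k))    # embedding space (one vector per feature)
Vp = rng.normal(size=(n, k))   # translation space (one vector per feature)
U = V + Vp                     # translated embeddings

def interactions_naive(x, U, V):
    """O(n^2 k): sum over pairs i<j of x_i * x_j * ||u_i - v_j||^2."""
    s = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            diff = U[i] - V[j]
            s += x[i] * x[j] * (diff @ diff)
    return s

def interactions_linear(x, U, V):
    """O(n k): expand ||u_i - v_j||^2 = |u_i|^2 + |v_j|^2 - 2 u_i.v_j
    and accumulate each expanded term with suffix sums over j > i."""
    dim = U.shape[1]
    xu2 = x * np.einsum("ij,ij->i", U, U)   # x_i |u_i|^2
    xv2 = x * np.einsum("ij,ij->i", V, V)   # x_j |v_j|^2
    xV = x[:, None] * V                     # rows x_j v_j
    # suffix sums: entry i holds the sum over all j > i
    sx = np.concatenate([np.cumsum(x[::-1])[::-1][1:], [0.0]])
    sv2 = np.concatenate([np.cumsum(xv2[::-1])[::-1][1:], [0.0]])
    sV = np.vstack([np.cumsum(xV[::-1], axis=0)[::-1][1:], np.zeros((1, dim))])
    return float(xu2 @ sx + x @ sv2 - 2 * np.sum((x[:, None] * U) * sV))
```

Both functions compute the same quantity; only the second scales linearly in the number of features, which is what makes training on long, content-rich feature vectors practical.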