Markov Chain Models and Data Science Applications
- Huang, Li-Hsuan
- Advisor(s): Bhat, Harish S
Abstract
A National Basketball Association (NBA) game is a dynamic contest played over 48 minutes by five-person units (lineups) fielded by two teams. Researchers have attempted to model the dynamics of the game using play-by-play data, a game log of events on the court and the times at which those events happened. Recent work involving NBA play-by-play data used discrete-time Markov chains to model team possession-related events and predict final game scores. Moreover, the availability of optical player tracking data has allowed researchers to model game progression at the level of individual players. While this work captured fine details of games, it did not consider game progression as driven by the lineups each team uses or the amount of time each lineup plays on the court.
Using NBA play-by-play data, we sought models for time series detailing lineup transitions (states) and the times at which each new lineup entered the game. Because substitutions occur at random times, we built a continuous-time Markov chain (CTMC) model for each NBA team in which each state corresponds to a unique lineup. Using data from the 2014-15 NBA season, we correctly predicted 12 out of 15 playoff outcomes. Nevertheless, all CTMC models suffered from absorbing states.
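As a rough sketch of how such a CTMC can be fit (not the code used in this work), the following Python estimates a maximum-likelihood generator matrix from a team's lineup time series, assuming the play-by-play log has already been reduced to chronological (lineup, seconds on court) segments; the function and variable names are illustrative.

```python
# Minimal sketch, assuming play-by-play data reduced to (lineup, duration) segments.
from collections import defaultdict
import numpy as np

def estimate_generator(segments):
    """segments: list of (lineup, duration_seconds) in chronological order."""
    states = sorted({lineup for lineup, _ in segments})
    index = {s: i for i, s in enumerate(states)}
    n = len(states)
    time_in_state = np.zeros(n)        # total seconds observed in each lineup
    jump_counts = np.zeros((n, n))     # observed lineup-to-lineup substitutions

    for (cur, dur), (nxt, _) in zip(segments, segments[1:]):
        time_in_state[index[cur]] += dur
        jump_counts[index[cur], index[nxt]] += 1
    time_in_state[index[segments[-1][0]]] += segments[-1][1]

    # MLE generator: off-diagonal rates are jump counts divided by holding time;
    # rows sum to zero. A lineup that is never left yields a zero row, i.e. an
    # absorbing state of the estimated chain.
    Q = np.zeros((n, n))
    for i in range(n):
        if time_in_state[i] > 0:
            Q[i] = jump_counts[i] / time_in_state[i]
        Q[i, i] = -Q[i].sum()
    return states, Q
```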
A time series may end in a state that was never visited earlier in the series; that final state then becomes an absorbing state of the maximum-likelihood-estimate (MLE) Markov model. In the long run, an absorbing Markov chain has an equilibrium distribution supported entirely on the set of absorbing states. To remedy this problem in a basketball simulation, we considered three strategies: (i) reroute to a more probable state whenever an absorbing state is reached, (ii) remove lineups that played fewer than two minutes in a season, along with lineups corresponding to absorbing states, and (iii) allow the simulation to remain in the absorbing state if it is the final state reached before the game ends. Combining these strategies with scoring-rate models built using singular value decomposition and weighted observations, we obtained best training and test accuracies of 75% and 70%, respectively, training on the first 75 games and testing on the remaining 7 games of the 2015-16 NBA season. A test accuracy of 70% compares favorably with state-of-the-art models.
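The Python sketch below illustrates strategy (i) only, under the assumption that the generator Q comes from an estimator like the one above and that a "more probable state" means the lineup with the most observed court time; these choices, and the function names, are assumptions for illustration rather than the dissertation's implementation.

```python
# Minimal sketch of strategy (i): reroute on hitting an absorbing state.
import numpy as np

def simulate_with_reroute(Q, time_in_state, start, horizon, rng=None):
    """Simulate a CTMC with generator Q for `horizon` seconds of game time."""
    rng = np.random.default_rng(rng)
    most_common = int(np.argmax(time_in_state))   # fallback lineup (assumption)
    path, t, state = [(start, 0.0)], 0.0, start
    while t < horizon:
        rate = -Q[state, state]
        if rate <= 0:                             # absorbing state reached
            state = most_common                   # strategy (i): reroute
            rate = -Q[state, state]
            if rate <= 0:                         # fallback also absorbing; stop
                break
        hold = rng.exponential(1.0 / rate)        # exponential holding time
        probs = np.clip(Q[state], 0.0, None)      # embedded-chain jump probabilities
        probs /= probs.sum()
        state = int(rng.choice(len(probs), p=probs))
        t += hold
        path.append((state, min(t, horizon)))
    return path
```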
These ad hoc absorbing-state strategies were developed for NBA data, however, and might not be valid in other applications; hence, we developed stable optimization algorithms that systematically resolve the issue of absorbing states, so that the Markov chains reach desired equilibria and achieve zero long-term training error. We defined the long-term training error as the one-norm difference between the equilibrium distribution of an MLE-derived Markov chain and the actual fraction of time spent in each state in the training data. We then applied the optimization methods to the NBA data and to two finite-state, discrete-time biomedical data sets. The results showed smaller long-term training and test errors than those of naive MLE models. Furthermore, for the biomedical data, the methods yielded long-term errors similar to those of hidden Markov models while requiring significantly less training time.
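As an illustration of the long-term training error just defined, the Python sketch below computes it for a discrete-time chain with transition matrix P. When the chain has absorbing states the stationary distribution is not unique, and this sketch simply returns one eigenvalue-one eigenvector; that degeneracy is exactly what the optimization methods are designed to correct. All names here are illustrative.

```python
# Minimal sketch: one-norm gap between the stationary distribution of an
# estimated transition matrix P and the empirical state occupancy.
import numpy as np

def long_term_training_error(P, state_sequence, n_states):
    # Stationary distribution: a left eigenvector of P with eigenvalue 1
    # (not unique if the chain is absorbing; this picks one).
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, k])
    pi = pi / pi.sum()

    # Empirical occupancy: fraction of training time steps spent in each state.
    counts = np.bincount(np.asarray(state_sequence), minlength=n_states)
    occupancy = counts / counts.sum()

    return float(np.abs(pi - occupancy).sum())    # one-norm difference
```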