Learning Hierarchical Abstractions from Human Demonstrations for Application-Scale Domains
As data collection becomes increasingly commonplace, it unlocks new approaches to old problems in the field of artificial intelligence. Much of this benefit is realized through advances in the decision problems of machine learning and statistics, but value can be gleaned for more classical AI problems as well. Large databases of human demonstrations make it possible to bootstrap planning models in complex domains that would previously have been computationally infeasible.
This dissertation explores algorithms and systems for learning planning abstractions from human demonstrations in real-time strategy (RTS) games, which are more similar to real-world applications than prior domains in classical planning. I believe this setting is particularly challenging and valuable: beyond the complexity of the domain itself, human planning is inconsistent, and players vary widely in style and skill level. Any algorithm that intends to learn from human data must overcome these hurdles. My approach draws inspiration from a number of machine learning algorithms and paradigms that were developed explicitly in the context of large-scale, noisy data.
The primary contributions of this thesis are two algorithms for learning hierarchical planning abstractions from a database of human replays, a system for evaluating a planning model's ability to explain demonstrations, and an initial approach that uses this system to learn a hierarchical planning model from scratch from human demonstrations in a popular RTS game.