Decomposition Techniques for Learning Graphical Models
Probabilistic graphical models are ubiquitous tools for reasoning under uncertainty and have proven useful in many fields. Despite their importance, learning these models from incomplete data remains challenging because the corresponding optimization problem is highly non-convex. Iterative algorithms such as Expectation Maximization (EM) are typically used for learning from incomplete data, yet their behavior tends to be independent of how incomplete the data actually is. We argue in this thesis that the degree of incompleteness is a key indicator of the difficulty of a learning problem, and we investigate a number of learning approaches that are driven by this degree. In particular, we show that by exploiting certain patterns in the dataset, the learning problem can be decomposed into smaller, independent learning problems, which can yield orders-of-magnitude speed-ups in learning time. Moreover, we propose a new class of algorithms for learning graphical models whose learned parameters and running time improve as the data becomes less incomplete.
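The decomposition idea can be illustrated with a minimal sketch. This is not the thesis's actual algorithm, only an assumed toy setup: records are tuples with `None` marking a missing value, and records sharing the same missingness pattern are grouped so that each group could, in principle, be learned as a separate sub-problem.

```python
# Illustrative sketch (hypothetical, not the thesis's method): partition a
# dataset by missingness pattern so each partition forms an independent
# learning sub-problem.
from collections import defaultdict

def decompose_by_missingness(dataset):
    """Group records by which variables are observed.

    Each key is a tuple of booleans (True = observed), i.e. the record's
    missingness pattern; each value is the list of records with that pattern.
    """
    groups = defaultdict(list)
    for record in dataset:
        pattern = tuple(v is not None for v in record)
        groups[pattern].append(record)
    return dict(groups)

data = [
    (1, 0, None),   # third variable missing
    (0, 1, None),   # same pattern -> same sub-problem
    (1, None, 1),   # second variable missing
    (0, 0, 1),      # fully observed
]
subproblems = decompose_by_missingness(data)
# Four records collapse into three sub-problems, one per distinct pattern.
```

The fewer distinct incompleteness patterns the data exhibits, the fewer (and larger) the resulting sub-problems, which is one way the degree of incompleteness can shape the cost of learning.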