We introduce Energy Landscape Maps (ELMs) to the machine learning community as a new and powerful analysis tool for non-convex problems.
an energy function with a tree structure, in which each leaf node represents a local minimum and each non-leaf node represents the barrier between adjacent energy wells. We construct ELMs using an advanced MCMC sampling method that dynamically reweights the energy function to facilitate efficient traversal of the hypothesis space. By providing an intuitive visualization of energy functions, ELMs could help researchers gain new insight into the non-convex problems and facilitate the design and analysis of non-convex optimization algorithms.
We first demonstrate ELMs on two classic machine learning problems: clustering with Gaussian mixture models and biclustering. Next, we demonstrate their utility in analyzing unsupervised learning of dependency grammars, an important and highly non-convex problem in natural language processing. In particular, we analyze the curriculum learning approach to dependency grammar learning, which orders training samples from simple to complex, by plotting the sequence of ELMs obtained at successive curriculum stages. In the case of dependency grammar learning, our results verify a previous speculation as to why a good curriculum can help learning.
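For concreteness, a common choice of energy for GMM-based clustering is the negative log-likelihood of the data viewed as a function of the component means; the minimal sketch below uses that choice, with equal mixture weights and unit-variance components as simplifying assumptions of ours rather than details taken from the paper.

```python
import numpy as np

def gmm_energy(means, data, sigma=1.0):
    """Negative log-likelihood of `data` under a Gaussian mixture whose component
    means are the rows of `means`.  Equal mixture weights and isotropic components
    of standard deviation `sigma` are simplifying assumptions of this sketch."""
    d = data.shape[1]
    k = means.shape[0]
    # Squared distance from every data point to every component mean: shape (n, k).
    sq = ((data[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    log_comp = -0.5 * sq / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)
    # log p(x_i) = logsumexp over components, minus log k for the uniform weights.
    log_px = np.logaddexp.reduce(log_comp, axis=1) - np.log(k)
    return -log_px.sum()

# Toy usage: compare two placements of two component means on 1-D data.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])[:, None]
print(gmm_energy(np.array([[-2.0], [2.0]]), data))  # near a good local minimum: low energy
print(gmm_energy(np.array([[0.0], [0.1]]), data))   # a poor configuration: higher energy
```

Computing the per-point log-likelihood with logaddexp keeps the energy numerically stable for points that are unlikely under every component; an ELM built over the space of mean placements for such an energy is the kind of object the clustering experiments visualize.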