Decomposing Matrices, Tensors, and Images
- Robeva, Elina
- Advisor(s): Sturmfels, Bernd
Abstract
In this thesis we apply techniques from algebraic geometry to problems arising from optimization and statistics. In particular, we consider data that takes the form of a matrix, a tensor, or an image, and we study how to decompose it so as to uncover additional, seemingly hidden information about its origin and formation. We show that the practical uses of such decompositions are complemented by appealing algebraic and geometric structure.
In Chapter 2 of this thesis we focus on matrix-shaped data. The singular value decomposition, which lies at the core of modern algorithms and can be found efficiently, is not always enough to capture the structure of the data. Oftentimes the matrix at hand, as well as the elements of its decomposition, are required to have a certain positivity structure, and we need to design algorithms and theory to exploit this structure. Statistical mixture models, for instance, are based on finding a nonnegative decomposition of a nonnegative matrix. We study the algebraic and geometric properties of such decompositions in Section 2.1. Another type of decomposition of a nonnegative matrix, which is useful in convex optimization as well as quantum information theory, is the positive semidefinite decomposition. Here we require the elements of the decomposition to be positive semidefinite matrices of a given size. We explore this notion in Section 2.2. One of the most appealing properties of a nonnegative matrix is that we can think of it in terms of a pair of nested polyhedra. We rely on this geometric interpretation when studying nonnegative and positive semidefinite decompositions.
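To make the notion of a nonnegative decomposition concrete, here is a minimal numerical sketch using the classical Lee-Seung multiplicative updates; this is a standard heuristic, not the algebraic approach of Section 2.1, and the function name and example matrix are purely illustrative.

```python
import numpy as np

def nonnegative_factorization(M, r, iters=500, eps=1e-9):
    """Approximate M ~ W @ H with W, H entrywise nonnegative,
    via the Lee-Seung multiplicative updates for the Frobenius loss."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + eps)   # update H; stays nonnegative
        W *= (M @ H.T) / (W @ H @ H.T + eps)   # update W; stays nonnegative
    return W, H

# A rank-2 nonnegative matrix: each row is a mixture of two "pure" profiles.
M = np.array([[2., 1., 0.],
              [1., 1., 1.],
              [0., 1., 2.]])
W, H = nonnegative_factorization(M, r=2)
print(np.round(W @ H, 2))   # close to M, with both factors nonnegative
```

For matrices of rank at most two, the nonnegative rank agrees with the ordinary rank, so r = 2 genuinely suffices for this M; in general the nonnegative rank can be strictly larger, which is part of what the geometry of nested polyhedra captures.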
In Chapters 3 and 4 we turn our attention to data in the shape of a tensor. It is even more crucial in this case than in the matrix case to find a decomposition, not only because it provides hidden information about the data, but also because it allows us to store the tensor more concisely. However, one of the biggest obstacles in the field is that finding a decomposition of a general tensor is NP-hard. Inspired by the spectral theorem and the singular value decomposition for matrices, we study tensors whose decomposition consists of elements with an orthogonality structure. We call such tensors orthogonally decomposable, or odeco. One of their best properties is that, like matrices, odeco tensors can be decomposed efficiently. In Chapter 3 we study the spectral properties of such tensors. We give a formula for their eigenvectors and singular vector tuples. We note that computing these for a general tensor is hard both algebraically and computationally. In Chapter 4 we study the variety of orthogonally decomposable tensors, and we give polynomial equations that cut it out. We do this by showing that a tensor is orthogonally decomposable if and only if a given algebra that arises from it is associative, yet another appealing property of odeco tensors. Despite all of these appealing properties, odeco tensors constitute a very low-dimensional variety. This is why in Section 4.2 we conclude our study of tensors by generalizing the notion of orthogonally decomposable tensors to that of frame decomposable tensors, which now cover the space of all tensors.
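As a hedged illustration of why odeco tensors can be decomposed efficiently, the sketch below builds a symmetric third-order odeco tensor T = Σᵢ λᵢ vᵢ ⊗ vᵢ ⊗ vᵢ from orthonormal vectors vᵢ and recovers one component with the tensor power method, a standard technique for this class; the symmetric third-order case is chosen only for simplicity, and the helper names are hypothetical.

```python
import numpy as np

def odeco_tensor(lams, V):
    """Build T = sum_i lam_i * v_i (x) v_i (x) v_i from the orthonormal columns of V."""
    n = V.shape[0]
    T = np.zeros((n, n, n))
    for lam, v in zip(lams, V.T):
        T += lam * np.einsum('i,j,k->ijk', v, v, v)
    return T

def tensor_power_iteration(T, iters=100, seed=0):
    """Iterate x -> T(x, x, .) / ||T(x, x, .)||; for odeco T this converges,
    from a generic start, to one of the orthonormal components v_i."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, x, x)    # contract T against x twice
        x /= np.linalg.norm(x)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)  # the corresponding eigenvalue
    return lam, x

V = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 4)))[0]
T = odeco_tensor([3., 2., 1.5, 1.], V)
lam, v = tensor_power_iteration(T)
print(lam, np.max(np.abs(V.T @ v)))  # second value is ~1: v aligns with some v_i
# Deflating (T -= lam * v (x) v (x) v) and repeating recovers the other components.
```

The fixed points of this iteration are exactly the eigenvectors studied in Chapter 3; for odeco tensors they include the components vᵢ, which is what makes the decomposition tractable while the general tensor problem remains NP-hard.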
In Chapter 5 we study super-resolution imaging. The aim here is, given a low-resolution blurred image, to increase the resolution and remove the blur. This is achieved by decomposing the image into a sum of simpler images, one for each point source of light. We encode the locations of the point sources of light and their intensities in a discrete measure, and propose a convex optimization problem in the space of measures to find this unknown measure. We show that in the absence of noise and in the case of a one-dimensional image, the global optimum of this optimization problem recovers the true locations.
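The program in the thesis lives in the infinite-dimensional space of measures; the sketch below is only a discretized stand-in, assuming a Gaussian point-spread function and an on-grid signal, with all names and numbers illustrative. It minimizes the total mass of a nonnegative grid measure subject to matching the noiseless measurements, a discrete analogue of minimizing the total variation norm over measures.

```python
import numpy as np
from scipy.optimize import linprog

# Ground truth: a discrete nonnegative measure, i.e. point sources on [0, 1].
grid = np.linspace(0.0, 1.0, 201)           # fine grid discretizing the measure
true_x = np.zeros_like(grid)
true_x[40] = 1.0                            # source at t = 0.20, intensity 1.0
true_x[130] = 0.7                           # source at t = 0.65, intensity 0.7

# Low-resolution measurements: the measure convolved with a Gaussian blur,
# sampled at a few detector locations (a hypothetical point-spread function).
detectors = np.linspace(0.0, 1.0, 20)
sigma = 0.08
Phi = np.exp(-(detectors[:, None] - grid[None, :])**2 / (2 * sigma**2))
y = Phi @ true_x                            # noiseless, as in the 1-D result

# Discrete analogue of the measure-space program: minimize total mass
# (the L1 norm, since x >= 0) subject to matching the measurements exactly.
res = linprog(c=np.ones_like(grid), A_eq=Phi, b_eq=y,
              bounds=[(0, None)] * grid.size, method="highs")
support = np.nonzero(res.x > 1e-6)[0]
print(grid[support], res.x[support])        # spikes near t = 0.20 and t = 0.65
```

Because the intensities are nonnegative, the L1 objective reduces to a linear one and the whole problem is a linear program; in this noiseless one-dimensional setting the recovered support concentrates at, or immediately next to, the true source locations, mirroring the exact-recovery statement above.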