eScholarship
Open Access Publications from the University of California

UC Berkeley Electronic Theses and Dissertations

Interrogating the Tensor Network Regression Model

Abstract

There has been growing interest in using tensor networks as machine learning models, inspired by their successes in quantum many-body physics and tensor analysis. These models operate by first mapping input data into an exponentially large vector space, and then performing linear regression on the resulting feature set. It is well-known that the expressive power of a tensor network regression algorithm originates from its tensor-product featurization, but it is unclear how the tensors in the network are able to convert such a high-dimensional and unstructured intermediate into a useful output. We explore this question by probing the properties of tensor network models on three fronts. First, we assess how the performance of a tensor network classifier degrades when the size and complexity of the expanded feature space is reduced, and find that most of the space is not effectively utilized by the model. Next, we characterize how the rank of a tensor network impacts the class of regression functions that it can represent, demonstrating that even quadratic polynomials can be impossible to fully realize in most cases. Finally, we use a novel neural network algorithm to determine whether classical images possess correlation structures that mirror those found in quantum wavefunctions, and find evidence of area law scaling in the MNIST and Tiny Images datasets. Taken together, these results demonstrate how mathematical tools from tensor analysis and quantum physics can be leveraged to gain deep insight into the inner workings of tensor network machine learning models.
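The featurization described above can be illustrated with a small sketch. The local feature map below (a sine/cosine embedding of each input value) is one common choice from the tensor network regression literature, not necessarily the one used in this dissertation; the variable names and the input size of 8 are illustrative assumptions.

```python
import numpy as np

def local_feature(x):
    # Map a scalar x in [0, 1] to a 2-vector; a common choice in the
    # tensor network regression literature (assumed here for illustration).
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def tensor_product_features(xs):
    # Tensor (Kronecker) product of all local feature vectors: an input
    # of N values becomes a feature vector of length 2**N.
    phi = np.array([1.0])
    for x in xs:
        phi = np.kron(phi, local_feature(x))
    return phi

rng = np.random.default_rng(0)
xs = rng.random(8)                  # N = 8 input values (illustrative)
phi = tensor_product_features(xs)   # exponentially large: length 2**8 = 256
W = rng.standard_normal(phi.size)   # dense weight tensor, flattened
y = W @ phi                         # linear regression on the feature set
```

In practice the weight tensor `W` is far too large to store densely for realistic inputs (e.g. a 784-pixel MNIST image gives dimension 2**784), which is why it is factored into a low-rank tensor network; the sketch only shows the feature map that the network must then contract against.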
