eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Scaling Up Probabilistic Circuits for Inference-Demanding Applications

Abstract

There is a trade-off between expressiveness and tractability in generative modeling. On the one hand, neural-based deep generative models are extremely expressive, but the ways we can query them are limited; on the other hand, tractable probabilistic models support efficient computation of various probabilistic queries, but scaling them up is a major challenge. Probabilistic circuits are a tractable representation of probability distributions that allows exact and efficient computation of likelihoods and marginals. We study how to scale up the learning of probabilistic circuits and how to apply them to inference-demanding applications. On the learning front, we propose a new algorithm for learning the sparse structures of probabilistic circuits, which can significantly improve their capacity. On the application front, we further demonstrate the expressiveness and tractability of probabilistic circuits in two downstream applications: genetic sequence modeling and controllable language generation.
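The tractability claim above, that probabilistic circuits compute exact likelihoods and marginals efficiently, comes down to a single bottom-up pass over the circuit: leaves for marginalized-out variables evaluate to 1, and sum and product nodes combine their children's values. The sketch below is not from the dissertation; the two-component mixture structure, the Leaf, Product, and Sum classes, and all parameters are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Leaf:
    # Bernoulli leaf over a single binary variable X_var.
    var: int
    p: float  # P(X_var = 1)

    def value(self, evidence: dict) -> float:
        # A variable absent from the evidence is marginalized out:
        # the leaf then integrates to 1, which is what makes marginals exact.
        if self.var not in evidence:
            return 1.0
        return self.p if evidence[self.var] == 1 else 1.0 - self.p

@dataclass
class Product:
    # Decomposable product node: children cover disjoint variable sets.
    children: list

    def value(self, evidence: dict) -> float:
        out = 1.0
        for child in self.children:
            out *= child.value(evidence)
        return out

@dataclass
class Sum:
    # Smooth sum node: a mixture with normalized nonnegative weights.
    children: list
    weights: list

    def value(self, evidence: dict) -> float:
        return sum(w * c.value(evidence)
                   for w, c in zip(self.weights, self.children))

# Toy circuit over two binary variables X0, X1:
# 0.4 * Bern(X0; 0.9) Bern(X1; 0.2) + 0.6 * Bern(X0; 0.3) Bern(X1; 0.7)
circuit = Sum(
    children=[
        Product([Leaf(0, 0.9), Leaf(1, 0.2)]),
        Product([Leaf(0, 0.3), Leaf(1, 0.7)]),
    ],
    weights=[0.4, 0.6],
)

print(circuit.value({0: 1, 1: 0}))  # full likelihood P(X0=1, X1=0) = 0.342
print(circuit.value({0: 1}))        # exact marginal P(X0=1) = 0.54

The same evaluation that yields the likelihood yields any marginal, in time linear in the circuit size; this is the tractability side of the trade-off the abstract describes, and what neural generative models give up.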
