Algebraic matroids are combinatorial objects defined by the coordinates of an algebraic variety. These objects are of interest whenever coordinates hold significance: for instance, when the variety describes the solution set of a real-world problem, or is defined by some combinatorial rule. In this thesis, we discuss algebraic matroids and explore tools for their computation. We then delve into two applications that involve algebraic matroids: probability matrices and tensors from statistics, and chemical reaction networks from biology.

## Scholarly Works (5 results)

A combinatorial neural code \({\mathscr C}\subseteq 2^{[n]}\) is called convex if it arises as the intersection pattern of convex open subsets of \(\mathbb{R}^d\). We relate the emerging theory of convex neural codes to the established theory of oriented matroids, with respect to both geometry and computational complexity, and categorically. For geometry and computational complexity, we show that a code has a realization with convex polytopes if and only if it lies below the code of a representable oriented matroid in the partial order of codes introduced by Jeffs. We show that previously published examples of non-convex codes do not lie below any oriented matroids, and we construct examples of non-convex codes lying below non-representable oriented matroids. By way of this construction, we can apply Mnëv-Sturmfels universality to show that deciding whether a combinatorial code is convex is NP-hard.

On the categorical side, we show that the map taking an acyclic oriented matroid to the code of positive parts of its topes is a faithful functor. We adapt the oriented matroid ideal introduced by Novik, Postnikov, and Sturmfels into a functor from the category of oriented matroids to the category of rings; then, we show that the resulting ring maps naturally to the neural ring of the matroid's neural code.

Mathematics Subject Classifications: 52C40, 13P25

Keywords: Oriented matroids, convex neural codes, hyperplane arrangements

- 1 supplemental ZIP
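The notion of a code arising as an intersection pattern can be made concrete in dimension one, where the convex open sets are intervals. The sketch below (an illustration, not code from the paper; the receptive fields are made up) approximates the codewords of a small cover of \(\mathbb{R}^1\) by sampling points and recording which intervals contain each point.

```python
# Hypothetical illustration of a convex neural code in R^1.
# Each "neuron" i has a receptive field U_i, here an open interval (a, b).
fields = {1: (0.0, 2.0), 2: (1.0, 3.0), 3: (2.5, 4.0)}

def code_of_intervals(fields, samples=1000):
    """Approximate the intersection pattern (set of codewords) of a
    family of open intervals by sampling points on a fine grid."""
    lo = min(a for a, _ in fields.values())
    hi = max(b for _, b in fields.values())
    codewords = set()
    for k in range(samples + 1):
        x = lo + (hi - lo) * k / samples
        # The codeword at x records exactly which fields contain x.
        word = frozenset(i for i, (a, b) in fields.items() if a < x < b)
        codewords.add(word)
    return codewords

print(sorted(code_of_intervals(fields), key=lambda w: (len(w), sorted(w))))
```

Since the intervals for neurons 1 and 3 are disjoint, the codeword \(\{1,3\}\) never appears, while overlapping pairs such as \(\{1,2\}\) and \(\{2,3\}\) do; convexity of a code is precisely the existence of such a realization in some \(\mathbb{R}^d\).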

We propose a hierarchy for approximate inference based on the Dobrushin, Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms, such as belief propagation, and also motivates novel algorithms such as factorized neighbors (FN) algorithms and variants of mean field (MF) algorithms. In particular, we show that extrema of the Bethe free energy correspond to approximate solutions of the DLR equations. In addition, we demonstrate a close connection between these approximate algorithms and Gibbs sampling. Finally, we compare and contrast several of the algorithms in the DLR hierarchy on spin-glass problems. The experiments show that algorithms higher up in the hierarchy give more accurate results when they converge but tend to be less stable.
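For context on the Gibbs-sampling comparison point, the sketch below (a minimal illustration, not the paper's experimental setup; the couplings and fields are made up) runs a Gibbs sampler on a tiny Ising-type spin glass and estimates single-spin magnetizations.

```python
import math
import random

def gibbs_sample(J, h, n, steps=5000, seed=0):
    """Estimate magnetizations <s_i> of an Ising model with pairwise
    couplings J[(i, j)] (i < j) and local fields h[i] by Gibbs sampling."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    mags = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            # Effective local field felt by spin i given the other spins.
            field = h[i] + sum(J.get((min(i, j), max(i, j)), 0.0) * s[j]
                               for j in range(n) if j != i)
            # Conditional P(s_i = +1 | rest) for the Ising model.
            p_up = 1.0 / (1.0 + math.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p_up else -1
        for i in range(n):
            mags[i] += s[i]
    return [m / steps for m in mags]

# Three spins on a chain with ferromagnetic couplings and a bias on spin 0.
J = {(0, 1): 1.0, (1, 2): 1.0}
h = [0.5, 0.0, 0.0]
print(gibbs_sample(J, h, 3))
```

Approximate-inference algorithms in the DLR hierarchy aim to estimate such marginals deterministically rather than by sampling.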
