eScholarship
Open Access Publications from the University of California

Bregman proximity search

  • Author(s): Cayton, Lawrence
Abstract

In this dissertation, we study efficient solutions to proximity search problems where the notion of proximity is defined by a Bregman divergence. Proximity search tasks are at the core of many machine learning algorithms and are a fundamental research topic in computational geometry, databases, and theoretical computer science. Perhaps the most basic proximity search problem is nearest neighbor search: on any input query, retrieve the most similar items from a (potentially large and complex) database efficiently, i.e. without performing a full linear scan. There is a massive body of work on proximity problems when the notion of distance is a metric, largely relying on the triangle inequality. In contrast, efficient proximity search for the family of Bregman divergences is essentially unstudied. This family includes the standard (squared) Euclidean distance, the Mahalanobis distance, the KL-divergence (relative entropy), the Itakura-Saito divergence, and many others. Bregman divergences need not satisfy the triangle inequality, nor need they be symmetric; because these basic properties cannot be relied on, metric-based data structures are not immediately applicable.

The dissertation presents a data structure and accompanying search algorithms for nearest neighbor search and range search, the two most fundamental proximity tasks. The data structure is a hierarchical space decomposition built from simple convex bodies called Bregman balls. The search algorithms work by repeatedly calling an extremely fast optimization procedure that relies on geometric properties of Bregman divergences and notions of duality. We demonstrate that these search algorithms often provide orders-of-magnitude speedups over standard brute-force search. We also examine alternate approaches to Bregman proximity problems, showing that two classical data structures can be adapted for Bregman divergences and yield theoretical bounds on query time.
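As an illustration (not drawn from the dissertation itself), recall that a Bregman divergence is generated by a strictly convex, differentiable function F via D_F(x, y) = F(x) − F(y) − ⟨∇F(y), x − y⟩. The sketch below, under the assumption that F and its gradient are supplied as plain functions, shows how the squared Euclidean distance and the (generalized) KL-divergence arise as special cases, and that the divergence need not be symmetric:

```python
import numpy as np

def bregman_divergence(F, grad_F, x, y):
    """D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>."""
    return F(x) - F(y) - np.dot(grad_F(y), x - y)

# F(x) = ||x||^2 generates the squared Euclidean distance.
sq_norm = lambda x: np.dot(x, x)
sq_norm_grad = lambda x: 2 * x

# F(x) = sum_i x_i log x_i (negative entropy) generates the
# generalized KL-divergence (the KL-divergence on the simplex).
neg_entropy = lambda x: np.sum(x * np.log(x))
neg_entropy_grad = lambda x: np.log(x) + 1

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.4, 0.4, 0.2])

d_euc = bregman_divergence(sq_norm, sq_norm_grad, x, y)
# equals ||x - y||^2

d_kl = bregman_divergence(neg_entropy, neg_entropy_grad, x, y)
# equals sum_i x_i log(x_i / y_i) for probability vectors

d_kl_rev = bregman_divergence(neg_entropy, neg_entropy_grad, y, x)
# in general d_kl != d_kl_rev: the divergence is asymmetric
```

The asymmetry is exactly why search structures must distinguish left from right nearest neighbors (argmin over D(x, q) versus D(q, x)), a distinction that has no analogue in the metric setting.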
In the final part of the dissertation, we examine a novel approach to building nearest neighbor data structures based on learning. This approach yields theoretical guarantees akin to those in learning theory, providing an alternative way to rigorously assess search performance. We explore the potential of this framework through several data structures.
