## Scholarly Works (179 results)

More than twenty years ago, Peter Winkler introduced a fascinating class of dependent or “co-ordinate” percolation models with his compatible sequences and clairvoyant demon scheduling problems. These, and other problems in this class, can be interpreted either as problems of embedding one sequence into another according to certain rules, or as oriented percolation problems in two dimensions where the sites are open or closed according to random variables on the co-ordinate axes. In most situations, these problems are not tractable by the usual tools of independent Bernoulli percolation, and new methods are required. We study several problems in this class and their natural extensions.

We develop a new multi-scale framework flexible enough to solve a number of problems involving embedding random sequences into random sequences. A natural question in this class was considered by Grimmett, Liggett and Richthammer; they asked whether there exists an increasing $M$-Lipschitz embedding from one i.i.d. Bernoulli sequence into an independent copy with positive probability. We give a positive answer for large enough $M$. A closely related problem is to show that two independent Poisson processes on the real line are almost surely roughly isometric (or quasi-isometric). Our approach also applies in this case, answering a conjecture of Szegedy and of Peled. We also obtain a new proof of Winkler's compatible sequences problem. All these results are obtained as corollaries of an abstract embedding result that can potentially be applied to a number of one-dimensional embedding questions.
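For concreteness, the Grimmett–Liggett–Richthammer question can be stated as follows (a standard paraphrase of the definition, not a quotation from the thesis): given i.i.d. Bernoulli sequences $X=(X_i)_{i\ge 1}$ and $Y=(Y_j)_{j\ge 1}$, one asks for a strictly increasing map with $M$-bounded gaps that reproduces $X$ inside $Y$:

```latex
% An increasing M-Lipschitz embedding of X into Y: a strictly
% increasing map phi with gaps at most M reproducing X inside Y.
\[
  \phi \colon \mathbb{N} \to \mathbb{N}, \qquad
  Y_{\phi(i)} = X_i \ \text{for all } i, \qquad
  1 \le \phi(i+1) - \phi(i) \le M .
\]
```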

We build upon the central idea of the multi-scale construction in the above work to apply it to a different problem. On the complete graph with $M \geq 3$ vertices, consider two independent discrete-time random walks $\mathbb{X}$ and $\mathbb{Y}$, choosing their steps uniformly at random. We say that it is possible to \textbf{schedule} a pair of trajectories $\mathbb{X}= \{X_1, X_2, \ldots\}$ and $\mathbb{Y}=\{Y_1, Y_2, \ldots \}$ if, by delaying their jump times, one can keep both walks at distinct vertices forever. It was conjectured by Winkler that for large enough $M$ the set of pairs of trajectories $\{\mathbb{X}, \mathbb{Y}\}$ that can be scheduled has positive measure. Noga Alon translated this problem into the language of coordinate percolation, in which representation Winkler's conjecture is equivalent to the existence of an infinite open cluster for large enough $M$. With a multi-scale construction we provide a positive answer for $M$ sufficiently large.
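In the coordinate-percolation picture, a finite horizon of the scheduling question becomes a concrete reachability check: cell $(i,j)$ is open when $X_i \neq Y_j$, and a schedule over that horizon is an up/right path of open cells from the origin. A minimal sketch of this check (illustrative only; the function name and the finite-horizon criterion are ours, not the thesis's):

```python
def can_schedule(xs, ys):
    """Finite-horizon scheduling check for trajectories xs and ys.

    Cell (i, j) is open when xs[i] != ys[j]; advancing walk X moves
    right, advancing walk Y moves up.  Scheduling over this horizon
    means reaching the top or right boundary through open cells.
    """
    n, m = len(xs), len(ys)
    reach = [[False] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if xs[i] == ys[j]:
                continue  # closed cell: the walks would collide
            if i == 0 and j == 0:
                reach[i][j] = True  # both walks at their start vertices
            else:
                reach[i][j] = (i > 0 and reach[i - 1][j]) or \
                              (j > 0 and reach[i][j - 1])
    return any(reach[n - 1]) or any(row[m - 1] for row in reach)
```

For example, `can_schedule([0, 2, 1], [1, 0, 2])` is `True`, while `can_schedule([0, 1], [1, 0])` is `False`: from the starting cell, advancing either walk causes a collision.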

The questions of Lipschitz embedding and rough isometry of random sequences have natural higher-dimensional analogues. We consider the higher-dimensional analogue of the Lipschitz embedding problem. We show that for $M$ sufficiently large and two independent collections of i.i.d. Bernoulli random variables $\mathbb{X}=\{X_v\}_{v\in \mathbb{Z}^2}$ and $\mathbb{Y}=\{Y_{v}\}_{v\in \mathbb{Z}^2}$, almost surely there exists an $M$-Lipschitz embedding of $\mathbb{X}$ into $\mathbb{Y}$. The argument is again multi-scale and uses similar ideas, but it is technically much more challenging because of the more complicated geometry in two dimensions.

This presents an added difficulty in extending the one-dimensional argument to show that copies of two-dimensional Poisson processes are almost surely roughly isometric. A key ingredient is to show that one can map measurable sets to smaller measurable sets in a bi-Lipschitz manner. This motivates the final problem we consider. We show that for $0<\gamma, \gamma'<1$ and for measurable subsets of the unit square with Lebesgue measure $\gamma$, there exist bi-Lipschitz maps with bounded Lipschitz constant (uniformly over all such sets) which are the identity on the boundary and increase the Lebesgue measure of the set to at least $1-\gamma'$.

With progress in enabling autonomous cars to drive safely on the road, it is time to ask how they should be driving. This dissertation focuses on learning the desired objective function for autonomous cars, with the goal of personalizing autonomous driving: driving according to the passenger's preferences across diverse environments. Traditionally, autonomous cars have been trained using expert demonstrations, with an implicit assumption that the demonstrations are truly representative of optimal driving. Personalizing autonomous driving under this assumption would mean using Inverse Reinforcement Learning (IRL) to learn the objective function latent in the user's own demonstrations and then adopting the user's own driving style. In this thesis, we question this assumption and propose algorithmic solutions for personalizing driving styles without demonstration data. Through user studies in a simulated driving environment, we first show that people do not want their autonomous cars to drive like them: they want a significantly more defensive car. Next, we formalize driving preferences as reward functions and propose several algorithms to learn them interactively from an alternative form of human guidance: preference-based learning. In preference-based reward learning we show users several trajectory pairs sequentially and ask them to indicate their preference within each pair. This has been shown to be effective for learning reward functions in the absence of demonstrations. A single preference is, however, far less informative than full demonstration data. The key contribution of this thesis is an algorithmic framework that leverages computational models of human behavior to enable learning from richer preference queries, where the response to each query contains more information than a bare comparison. We propose different forms of rich preference queries: we ask people not only what they prefer, but also why they prefer it.
We design new queries to learn more complex reward functions that can represent preferences in non-stationary environments. We introduce reward dynamics: a mixture of reward functions together with parameters that govern how preferences change in response to the dynamics of the environment. We develop a unified formalism that treats all forms of human guidance as observations about the true preferences, and we use this formalism to derive objective functions for actively generating rich queries. We show empirically, through simulations as well as user studies, that richer preference queries can learn driving preferences more accurately than comparison-alone queries. We also discover that richer queries not only speed up preference learning in practice but also offer more transparency into the decision-making algorithms of the autonomous car, thus enhancing people's trust in the system. Although the human-robot system of choice in this thesis is the autonomous car, our algorithmic solutions apply to personalizing other human-robot systems in which the robot is a dynamical system that should match human preferences and in which demonstrations are unavailable due to the complexity of robot operation or the disparity between preferences and demonstrations.
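The comparison-query setting above can be sketched with a generic Bradley–Terry-style model (our own naming and simplification, not the thesis's implementation): rewards are linear in trajectory features, and each answered query reweights a set of candidate reward parameters.

```python
import math

def preference_likelihood(w, feat_a, feat_b):
    """P(user prefers trajectory a over b) under a logistic choice model
    with linear reward r(traj) = w . features(traj)."""
    r_a = sum(wi * fi for wi, fi in zip(w, feat_a))
    r_b = sum(wi * fi for wi, fi in zip(w, feat_b))
    return 1.0 / (1.0 + math.exp(r_b - r_a))

def update_posterior(candidates, prior, feat_a, feat_b, answer):
    """Bayes update over candidate reward parameters after one
    preference answer ('a' or 'b')."""
    posterior = []
    for w, p in zip(candidates, prior):
        like = preference_likelihood(w, feat_a, feat_b)
        posterior.append(p * (like if answer == 'a' else 1.0 - like))
    total = sum(posterior)
    return [p / total for p in posterior]
```

Each answer shifts posterior mass toward hypotheses that explain the choice; richer queries, as in the thesis, would contribute more informative likelihood terms than this single comparison.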

Large-scale distributed optimization has become increasingly important with the emergence of edge-computation architectures such as the federated learning setup, where large amounts of data, possibly of a secure nature and generated in an online manner, can be massively distributed across personal devices. A key bottleneck for many such large-scale problems is the communication overhead of exchanging information between devices over bandwidth-limited networks, as well as the unreliability of communication for distributed optimization. Existing approaches mitigate these bottlenecks either by using different forms of compression or by computing local models and mixing them iteratively. In this thesis we first propose a novel class of highly communication-efficient operators that compose stochastic and deterministic quantization with aggressive sparsification such as Top-$k$. Furthermore, in federated learning one can use local computations to reduce communication. Within such a framework, we incorporate local iterations into our algorithm, which allows communication to be infrequent and possibly asynchronous, thereby significantly reducing communication.
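The composed operator can be sketched as follows: Top-$k$ keeps the $k$ largest-magnitude coordinates, and the survivors are then quantized stochastically so that the quantization is unbiased. This is an illustrative reconstruction under our own naming, not the thesis's formal definition:

```python
import random

def top_k(vec, k):
    """Keep the k largest-magnitude coordinates, zero out the rest."""
    idx = sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k]
    out = [0.0] * len(vec)
    for i in idx:
        out[i] = vec[i]
    return out

def stochastic_quantize(vec, levels):
    """Unbiased stochastic quantization onto `levels` uniform levels,
    scaled by the max-magnitude coordinate."""
    norm = max(abs(v) for v in vec) or 1.0
    out = []
    for v in vec:
        x = abs(v) / norm * levels
        low = int(x)                    # floor, since x >= 0
        p = x - low                     # round up with probability p
        q = low + (1 if random.random() < p else 0)
        out.append((1 if v >= 0 else -1) * q * norm / levels)
    return out

def qsparse(vec, k, levels):
    """Composed operator: sparsify first, then quantize the survivors."""
    return stochastic_quantize(top_k(vec, k), levels)
```

Only the (at most) `k` surviving coordinates and their quantized levels need to be transmitted, which is where the bit savings come from.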

Putting these together, we obtain distributed Qsparse-local-SGD for federated learning, for which our analysis demonstrates convergence rates matching those of vanilla distributed SGD; we observe that quantization and sparsification come almost for free for smooth functions, both non-convex and convex. We characterize the asymptotically allowable number of local iterations for synchronous and asynchronous implementations of Qsparse-local-SGD, so as to harness both the distributed processing gains and the benefits of quantization, sparsification and local computations. Our numerical experiments demonstrate that Qsparse-local-SGD combines the bit savings of our composed operators with those of local computations, thereby outperforming the cases where these techniques are used individually. We use it to train ResNet-50 on ImageNet, as well as a softmax multi-class classifier on MNIST, resulting in significant savings over the state of the art in the number of bits transmitted to reach a target accuracy.
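A toy version of the overall loop (our own drastic simplification: scalar quadratic objectives, a crude deterministic quantizer in place of the composed operator, and an error-feedback memory; the actual algorithm and analysis are far more general):

```python
import random

def local_sgd_quadratic(num_workers=4, local_steps=8, rounds=20, lr=0.1):
    """Toy Qsparse-local-SGD-style loop on per-worker objectives
    f_i(w) = 0.5 * (w - c_i)**2, whose global minimizer is mean(c_i).

    Each worker runs `local_steps` local gradient steps; only a
    quantized model difference is communicated, and the quantization
    error is kept in a local memory (error feedback)."""
    random.seed(1)
    targets = [random.uniform(-1.0, 1.0) for _ in range(num_workers)]
    w = 0.0                        # shared (server) model
    mem = [0.0] * num_workers      # error-feedback memory per worker
    for _ in range(rounds):
        updates = []
        for i in range(num_workers):
            wi = w                 # start the local run from the shared model
            for _ in range(local_steps):
                wi -= lr * (wi - targets[i])   # gradient step on f_i
            delta = (wi - w) + mem[i]
            q = round(delta * 4) / 4           # crude 0.25-step quantizer
            mem[i] = delta - q                 # remember what was dropped
            updates.append(q)
        w += sum(updates) / num_workers        # server averages the deltas
    return w, sum(targets) / num_workers
```

Despite communicating only coarsely quantized differences once per round, the shared model settles near the true minimizer, which is the qualitative behavior the convergence analysis makes precise.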