The deployment of next-generation large-scale multi-agent systems depends critically on the development of efficient inference algorithms. To this end, this
dissertation presents novel algorithmic solutions that can handle
real-time noisy data, heterogeneous agents, time-varying
communication networks, and partial storage, all with convergence
guarantees. Our approach relies on formulating distributed estimation and inference tasks as online optimization problems over probability spaces. With an appropriate geometry on these spaces, we design Bayesian algorithms using known priors and likelihoods, and provide distributional convergence guarantees for the agents' estimates.
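For concreteness, a representative update in this family, written here as a sketch in assumed notation rather than the dissertation's exact rule, combines geometric averaging of neighbor beliefs with a local Bayesian step:
\[
\mu_{i,t+1}(\theta) \;\propto\; \ell_i\!\left(y_{i,t+1} \mid \theta\right) \prod_{j \in \mathcal{N}_i} \mu_{j,t}(\theta)^{a_{ij}},
\]
where $\mu_{i,t}$ denotes agent $i$'s belief at time $t$, $\ell_i$ its local likelihood, $\mathcal{N}_i$ its in-neighborhood (taken to include $i$ itself), and $a_{ij}$ the weights of the communication network, assumed here to be row stochastic.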
Further, we expand the scope of these algorithms to accommodate time-varying broadcast networks, and relax the requirements on the confidence bounds for sampled data likelihoods. This yields an online distributed algorithm that accommodates arbitrary likelihood weights and comes with almost-sure convergence guarantees. Exponential convergence guarantees are provided for the estimated probability ratio, and demonstrated through applications such as a distributed Gaussian filter and a distributed particle filter for cooperative localization and parameter estimation problems.
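As an illustrative sketch of one agent's iteration in such a distributed Gaussian filter, assuming a linear-Gaussian measurement model and row-stochastic network weights (the function and variable names below are hypothetical, not the dissertation's implementation):

```python
import numpy as np

def fuse_and_update(neighbor_means, neighbor_covs, weights, H, R, y):
    """One agent's step: geometric-mean fusion of neighbor Gaussian
    beliefs (including the agent's own), followed by a local
    linear-Gaussian measurement update y = H x + noise, noise ~ N(0, R)."""
    # Fusion in information form: precisions and information vectors
    # are averaged with the (row-stochastic) network weights.
    Lam = sum(w * np.linalg.inv(P) for w, P in zip(weights, neighbor_covs))
    eta = sum(w * np.linalg.inv(P) @ m
              for w, m, P in zip(weights, neighbor_means, neighbor_covs))
    # Local measurement update in information form.
    R_inv = np.linalg.inv(R)
    Lam = Lam + H.T @ R_inv @ H
    eta = eta + H.T @ R_inv @ y
    P_new = np.linalg.inv(Lam)
    return P_new @ eta, P_new  # posterior mean and covariance
```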
To learn posteriors with non-conjugate likelihoods and priors, we introduce a novel concept, the distributed evidence lower bound (DELBO), which uses variational inference to learn optimal beliefs over the unknown parameters. This development leads to an online Gaussian estimator capable of handling arbitrary differentiable likelihoods, making it well suited to heterogeneous agents.
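For background, DELBO builds on the classical single-agent evidence lower bound, which for a variational belief $q$ over the parameter $\theta$ reads
\[
\mathrm{ELBO}(q) \;=\; \mathbb{E}_{q(\theta)}\!\left[\log \ell(y \mid \theta)\right] \;-\; \mathrm{KL}\!\left(q(\theta) \,\middle\|\, p(\theta)\right);
\]
maximizing such a bound over a Gaussian variational family gives rise to the kind of online Gaussian estimator described above, while the distributed form of the bound is developed in the dissertation.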
To enable estimation over a limited set of unknown hypotheses, we present a discrete marginal estimation algorithm. To set this up, we solve an NP-hard problem of matching information-maximizing connected subgraphs to each unknown hypothesis. A provably correct algorithm with asymptotic convergence guarantees is then described to identify the optimal hypothesis for a given sensor assignment.
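The sketch below shows a minimal greedy heuristic for the underlying subgraph-selection subproblem, assuming a node-additive information measure; it is illustrative only, not the provably correct matching algorithm developed in the dissertation, and all names are hypothetical:

```python
import networkx as nx

def grow_connected_subgraph(G, seed, info, k):
    """Greedy sketch: grow a connected subgraph of up to k nodes from
    `seed`, at each step adding the frontier node with the largest
    information value. `info` maps node -> scalar information gain."""
    chosen = {seed}
    while len(chosen) < k:
        # Frontier: nodes adjacent to the current subgraph.
        frontier = {v for u in chosen for v in G.neighbors(u)} - chosen
        if not frontier:
            break  # the connected component is exhausted
        chosen.add(max(frontier, key=lambda v: info[v]))
    return G.subgraph(chosen)
```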
We extend this estimation problem to the setting of relevant variables in continuous spaces. To solve it, we introduce a novel mixing approach that merges neighbor marginals over shared variables, and prove asymptotic convergence to the marginal consensus manifold. Further, under additional variable-independence assumptions, the algorithm is guaranteed to converge almost surely. The Gaussian version of the algorithm is
compared with other methods, showcasing its efficiency in
cooperative estimation problems. Another version based on
variational inference is applied to distributed mapping with notable
storage and communication savings.
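As a minimal sketch of the mixing idea in the Gaussian case, assuming each neighbor communicates its (mean, covariance) marginal over the shared variables, an agent can replace its shared-block statistics with a precision-weighted combination; the names, the choice of weights, and the block-overwrite simplification are illustrative assumptions, not the dissertation's exact rule:

```python
import numpy as np

def mix_shared_marginal(mu_i, P_i, shared_idx, neighbor_marginals, weights):
    """Sketch of the mixing step: combine agent i's own marginal over the
    shared variables with its neighbors' marginals on those variables.
    `weights` covers the agent's own belief first, then its neighbors,
    and is assumed to sum to one."""
    s = np.ix_(shared_idx, shared_idx)
    # Collect (mean, cov) marginals over the shared block.
    margs = [(mu_i[shared_idx], P_i[s])] + list(neighbor_marginals)
    Lam = sum(w * np.linalg.inv(P) for w, (_, P) in zip(weights, margs))
    eta = sum(w * np.linalg.inv(P) @ m for w, (m, P) in zip(weights, margs))
    P_s = np.linalg.inv(Lam)
    m_s = P_s @ eta
    # Write the mixed statistics back into agent i's belief; leaving the
    # cross-covariances with non-shared variables untouched is a
    # simplification made for brevity here.
    mu_new, P_new = mu_i.copy(), P_i.copy()
    mu_new[shared_idx] = m_s
    P_new[s] = P_s
    return mu_new, P_new
```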