In this dissertation, we study Bayesian inference methods for the Gaussian sequence model, where the problem is to estimate an unknown sequence from noisy observations. This model is fundamental to nonparametric function estimation and time series analysis, and finds application across diverse scientific disciplines.
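For concreteness, the model in its standard form is
\[
y_i = f_i + \varepsilon_i, \qquad \varepsilon_i \overset{\text{iid}}{\sim} N(0, \sigma^2), \qquad i = 1, \dots, n,
\]
where f = (f_1, \dots, f_n) is the unknown sequence of interest and σ² is the noise variance.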
Our goal is to develop flexible methods that do not impose restrictive assumptions on the structure of the unknown sequence and that adapt to different degrees of smoothness. Moreover, we aim to construct efficient algorithms that require no hyperparameter tuning and that provide automatic uncertainty quantification. To achieve these goals, we leverage properties of the Cauchy distribution and of the Bayesian state-space modeling framework.
In Chapter 1, we focus on "weak smoothness" assumptions formulated in terms of the kth-order differences of the unknown sequence for k ∈ N. We motivate the use of Cauchy priors through the principle of Maximum Entropy, and propose two algorithms for posterior inference: an efficient Gibbs sampler and an approximate filtering-smoothing method. Through simulations, we compare the performance of our method to other approaches from the statistical literature. The Gibbs sampler implementation of our model demonstrates favorable empirical convergence rates across diverse test functions while maintaining computational complexity that is linear in the sequence length. Applications to real-world datasets, including financial returns, temperature anomalies, and economic indicators, underscore the method's versatility and effectiveness compared to other Bayesian and frequentist alternatives.
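The sampler itself is developed in Chapter 1; its key ingredient, however, is the classical representation of the Cauchy distribution as a normal scale mixture, Cauchy(0, γ) = N(0, λ) with λ ~ InvGamma(1/2, γ²/2), which renders all full conditionals tractable. The following is a minimal sketch of one Gibbs sweep for the first-order case (k = 1), assuming the noise variance σ² and Cauchy scale γ are known; the function name and banded-solver details are illustrative, not the dissertation's implementation.

import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

def gibbs_step(y, lam, sigma2, gamma, rng):
    """One Gibbs sweep (illustrative sketch) for y_i = f_i + N(0, sigma2)
    noise with Cauchy(0, gamma) priors on the first differences of f,
    via the scale mixture f_{i+1}-f_i | lam_i ~ N(0, lam_i)."""
    n = len(y)
    inv_lam = 1.0 / lam
    # Tridiagonal precision Q = I/sigma2 + D' diag(1/lam) D,
    # where D is the (n-1) x n first-difference matrix.
    diag = np.full(n, 1.0 / sigma2)
    diag[:-1] += inv_lam
    diag[1:] += inv_lam
    ab = np.zeros((2, n))          # upper banded storage for scipy
    ab[0, 1:] = -inv_lam           # superdiagonal of Q
    ab[1] = diag                   # diagonal of Q
    # f | lam, y ~ N(Q^{-1} y / sigma2, Q^{-1}), sampled in O(n)
    # via the banded Cholesky factorization Q = U'U.
    U = cholesky_banded(ab)
    mean = cho_solve_banded((U, False), y / sigma2)
    z = rng.standard_normal(n)
    x = np.empty(n)                # back-substitute U x = z, so x ~ N(0, Q^{-1})
    x[-1] = z[-1] / U[1, -1]
    for i in range(n - 2, -1, -1):
        x[i] = (z[i] - U[0, i + 1] * x[i + 1]) / U[1, i]
    f = mean + x
    # Conjugate update: lam_i | f ~ InvGamma(1, (gamma^2 + (f_{i+1}-f_i)^2)/2).
    d = np.diff(f)
    lam = (gamma**2 + d**2) / (2.0 * rng.gamma(1.0, 1.0, n - 1))
    return f, lam

The precision matrix of f given the mixture scales is tridiagonal, so each sweep costs O(n); for general k it is banded with bandwidth k, which is the structural source of the linear complexity noted above.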
Chapter 2 introduces several extensions of the method discussed in Chapter 1. We first present the Cauchy local linear trend model and generalize it to the Cauchy decomposition model. Both approaches rely on decomposing the unknown sequence into multiple components, each characterized by Cauchy priors on differences of a distinct order. Next, we extend the method from Chapter 1 to allow for irregularly spaced observations by casting the problem in an infinite-dimensional space and leveraging Cauchy process priors. We also introduce the latent Cauchy AR(k) model, which further relaxes the weak smoothness assumptions of Chapter 1, yielding a method particularly suitable for sequences described by linear differential equations with shocks. We conclude by extending the first-order model from Chapter 1 to handle bivariate observations, broadening its applicability to multivariate settings. For each extension, we develop Gibbs samplers for efficient posterior inference, and we evaluate the performance of our methods through simulation studies.
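As a schematic illustration of the first of these extensions (the precise specifications appear in Chapter 2), a Cauchy local linear trend model may be written in state-space form as
\[
y_t = \mu_t + \varepsilon_t, \qquad \mu_{t+1} = \mu_t + \nu_t + \xi_t, \qquad \nu_{t+1} = \nu_t + \zeta_t,
\]
with Gaussian observation noise ε_t and independent Cauchy innovations ξ_t and ζ_t: the classical local linear trend structure, in which the level μ_t and the slope ν_t carry Cauchy priors on their first differences.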
Chapter 3 turns to a theoretical exploration of the frequentist properties of Cauchy priors in the context of the normal means model with nearly black signals. In particular, we derive bounds on the mean squared error of the posterior mean, and show that it attains the minimax risk when the sparsity level is known. This analysis lays the groundwork for future research on theoretical guarantees for the more complex models developed in Chapters 1 and 2.
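Here "nearly black" refers to the standard sparsity class
\[
\ell_0[s] = \bigl\{ \theta \in \mathbb{R}^n : \#\{ i : \theta_i \neq 0 \} \le s \bigr\},
\]
over which the minimax risk under squared-error loss is the classical 2s\log(n/s)\,(1 + o(1)) as n \to \infty with s/n \to 0.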
This dissertation contributes to Bayesian methodology by offering practical, flexible, and computationally efficient tools for sequence estimation. The methods presented herein are broadly applicable in contexts where signals must be estimated from noisy data without imposing strong smoothness constraints.