In this thesis we examine the fundamental limits of detecting and recovering a weak signal hidden in a large amount of noise. We consider two problems. The first pertains to principal component analysis: estimating, and detecting the presence of, a structured low-rank signal buried inside a large noise matrix. Prominent models of this problem include the so-called spiked or deformed ensembles from random matrix theory, which are distributions over matrices of the form "signal + noise". In these models, the top eigenpair of the data matrix becomes indicative of the presence of the signal, or "spike", if and only if its strength exceeds a certain "spectral" threshold. A natural question is then whether it is possible to identify the spike, or even to tell whether it is present at all, below this threshold. In the first part of this thesis, we completely characterize the fundamental limits of detection and estimation of the spike. The analysis leading to this characterization relies crucially on a recently discovered connection between this statistical problem and the mean-field theory of spin glasses. This connection provides the tools needed to obtain precise control on the behavior of the posterior distribution of the spike, as well as on the fluctuations of the associated likelihood-ratio process.
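The spectral threshold phenomenon can be seen numerically in a toy simulation. The sketch below (illustrative sizes and a rank-one spiked Wigner model chosen for the demonstration; the function name `top_eigpair` is ours, not from the thesis) compares a spike strength below and above the threshold, where the bulk edge of the noise spectrum sits near 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # matrix size; illustrative, not a setting from the thesis

def top_eigpair(lam):
    """Build a rank-one spiked Wigner matrix Y = (lam/n) x x^T + W/sqrt(n)
    and return its top eigenvalue together with the overlap of the top
    eigenvector with the planted spike x."""
    x = rng.choice([-1.0, 1.0], size=n)      # spike with +/-1 entries, |x|^2 = n
    W = rng.normal(size=(n, n))
    W = (W + W.T) / np.sqrt(2)               # symmetric Gaussian noise matrix
    Y = (lam / n) * np.outer(x, x) + W / np.sqrt(n)
    vals, vecs = np.linalg.eigh(Y)
    v = vecs[:, -1]                          # unit-norm top eigenvector
    return vals[-1], abs(v @ x) / np.sqrt(n)

# Below the threshold lam = 1, the top eigenvalue sticks to the bulk edge
# (close to 2) and the top eigenvector is essentially uncorrelated with the
# spike; above it, the eigenvalue detaches and the overlap becomes macroscopic.
for lam in (0.5, 2.0):
    ev, overlap = top_eigpair(lam)
    print(f"lambda={lam}: top eigenvalue={ev:.2f}, overlap={overlap:.2f}")
```

This matches the qualitative picture above: the eigenpair carries information about the spike only past the threshold, while the detection and estimation questions below the threshold require the finer likelihood-ratio analysis developed in the thesis.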
The second problem we consider is that of pooling: recovering a discrete signal, i.e., one whose entries can take only a finite number of values, from several combinatorially constrained observations. More precisely, in a manner akin to compressed sensing, observations of this signal are obtained in the form of histograms: pooled measurements counting the occurrences of each symbol across subsets of the variables. We ask how many random measurements of this sort are needed to reconstruct the signal. In the second part of this thesis, we determine sharp upper and lower bounds on the minimal number of measurements for the signal to be essentially unique, and thus in principle recoverable from the observed data. We then provide an efficient algorithm to extract it, and show that this strategy succeeds only when the number of measurements is much larger than this uniqueness threshold.
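The pooled-histogram measurement model can be made concrete with a small sketch. The sizes below are toy values chosen so that a brute-force consistency check is feasible; they say nothing about the actual thresholds studied in the thesis, and the helper `pooled_histogram` is an illustrative name of ours.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
# Toy sizes for illustration only; the thesis studies how m must scale with n.
n, q, m, k = 8, 3, 10, 5   # signal length, alphabet size, #pools, pool size

x = rng.integers(q, size=n)            # hidden discrete signal in {0,...,q-1}^n

def pooled_histogram(signal, pool):
    """Histogram counting occurrences of each symbol over the pooled coordinates."""
    return np.bincount(np.asarray(signal)[pool], minlength=q)

pools = [rng.choice(n, size=k, replace=False) for _ in range(m)]
observations = [pooled_histogram(x, p) for p in pools]   # each histogram sums to k

# Brute-force over all q**n candidate signals to see which are consistent with
# every pooled histogram (feasible only at these toy sizes). The signal is
# "essentially unique" when this set collapses, up to trivial ambiguities.
consistent = [c for c in product(range(q), repeat=n)
              if all(np.array_equal(pooled_histogram(c, p), obs)
                     for p, obs in zip(pools, observations))]
print(f"{len(consistent)} consistent signal(s) out of {q**n} candidates")
```

At these toy sizes the consistent set may or may not reduce to the planted signal alone; the point of the sketch is only the measurement model itself, counting symbol occurrences over random pools.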