An individual's personal information is gathered today by a multitude of data collectors throughout the world. To maintain trust between the individual and the collector, the methods that interact with this collected data must be shown to behave responsibly. One way to maintain that trust is to use methods that come with formal privacy guarantees. A well-established source of such guarantees is the framework of differential privacy, which places stringent constraints on algorithms that operate on private data.
This thesis explores the challenges of releasing samples from distributions while satisfying the requirements of differential privacy or other closely related privacy notions. We present algorithms for releasing samples in a variety of settings that differ in their privacy aims. From one angle, we directly protect data values arising from exponential family distributions, with methods attuned to differential privacy and further methods attuned to Rényi differential privacy. From another angle, we explore protecting the identity of a secret, sensitive distribution while releasing what we can from the gathered data. Additionally, we provide a coupling-based analysis for reasoning about the effect of diffusing the samples from one distribution through another, in order to achieve stronger privacy guarantees from sampling than either distribution offers on its own. We prove that the proposed methods achieve formal privacy guarantees, and we present empirical and theoretical results on their efficacy. These results enable a range of Bayesian privacy-preserving methods and serve as useful primitives for further privacy analyses that move beyond frequentist probabilistic methods.