Algorithmic stability is a measure of how much the output of an algorithm changes in response to small changes to its input. Various notions of stability give rise to desirable algorithmic properties, such as generalization in the case of uniform stability for learning algorithms. Another notion of stability, differential privacy, guarantees that the output distribution of an algorithm changes very little when any one element of its input sample is exchanged for another. This notion ensures that the output of an algorithm cannot ``leak'' too much information about any given element of its input, thereby protecting the privacy of individuals who contribute their data to the algorithm.
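In its standard $(\varepsilon, \delta)$ form (the parameterization below is the usual one from the differential privacy literature, not necessarily the exact variant used in each chapter), the guarantee reads: a randomized algorithm $\mathcal{M}$ is $(\varepsilon, \delta)$-differentially private if, for all input samples $S, S'$ differing in a single element and all sets of outputs $T$,
\[
\Pr[\mathcal{M}(S) \in T] \;\le\; e^{\varepsilon} \, \Pr[\mathcal{M}(S') \in T] + \delta.
\]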
This dissertation develops new stable algorithms for promoting reliable and secure computation. We develop a new framework for generically constructing differentially private learning algorithms via boosting. We show how a variant of the standard notion of differential privacy can be used to achieve stronger security guarantees for approximate fully homomorphic encryption. Finally, we develop a new stability notion for randomized algorithms, called reproducibility, which guarantees that the output of an algorithm remains unchanged with high probability when its input sample is redrawn from the same underlying distribution. We design algorithms for fundamental statistical tasks that satisfy this new notion and explore its connections to related notions of stability.
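A formalization along the following lines captures this guarantee (the parameter $\rho$ and the shared internal randomness $r$ follow the standard reproducibility literature; the dissertation's exact parameterization may differ): a randomized algorithm $\mathcal{A}$ is $\rho$-reproducible with respect to a distribution $\mathcal{D}$ if
\[
\Pr_{S_1, S_2 \sim \mathcal{D}^{n},\, r}\bigl[\mathcal{A}(S_1; r) = \mathcal{A}(S_2; r)\bigr] \;\ge\; 1 - \rho,
\]
where $S_1$ and $S_2$ are independent samples and $r$ denotes the algorithm's internal randomness, shared across both runs.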