Machine learning models have become a ubiquitous part of society, and it has consequently become of paramount importance to understand how to design safe and reliable models. This dissertation takes steps in this direction by considering two specific problems in reliable machine learning: adversarial examples, which are small test-time perturbations to the input designed to cause misclassification, and data-copying, which occurs when a generative model simply memorizes its training data, resulting in poor generalization and serious security risks.
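For concreteness, a standard formalization of an adversarial example (the precise threat model may vary by chapter) is as follows: given a classifier $f$ and an input $x$ with true label $y$, an adversarial example is a perturbed input
\[
x' = x + \delta, \qquad \|\delta\|_p \le \epsilon, \qquad f(x') \ne y,
\]
where $\epsilon > 0$ bounds the size of the perturbation under some $\ell_p$ norm, so that $x'$ remains close to $x$ yet is misclassified.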