Software is deeply intertwined with our daily lives: it manages sensitive personal information, lets us connect with our peers, and controls devices such as planes and cars. Recent years have seen a surge in the adoption of a new kind of software system that, fueled by an abundance of data, relies on machine learning and artificial intelligence. Owing to their ubiquity, the security and reliability of these systems are paramount. However, their complexity makes these systems fragile: they suffer from design and implementation flaws that render them unreliable. Unfortunately, building systems with provable guarantees remains a niche endeavor, largely unapproachable for the software development community, since the requisite techniques demand a significant investment of time and effort. In this dissertation, I present verification techniques that ease the manual burden of building reliable software and machine learning systems. The proposed techniques leverage specifications in the form of input/output examples, together with program synthesis, to ensure system correctness. I present the first approach that reduces the general lemma synthesis problem to a data-driven program synthesis problem, thereby easing the proof burden in a foundational verification setting. I then present an extension to this data-driven synthesis approach that expands the class of lemmas that can be synthesized. I conclude with the first counterexample-guided verification approach that can provably enforce correctness properties in neural networks.