Hybrid-Neural Synthesis of Machine Checkable Software Correctness Proofs
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations



Foundational verification allows programmers to build software that has been empirically shown to achieve high levels of assurance in a variety of important domains. However, the cost of producing foundationally verified software remains prohibitively high for most projects, as it requires significant manual effort by highly trained experts. In my thesis, I developed \name{}, a proof-search system that uses machine learning techniques to produce proofs of software correctness in interactive theorem provers. \name{} is composed of three parts: a supervised predictor model, a reinforcement-learning predictor model, and a search system that can use either model to automatically prove theorems.
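The architecture described above pairs a learned tactic predictor with a search procedure over proof states. As a rough illustration of that interaction (not the thesis's actual implementation), the sketch below runs a best-first search in which a predictor ranks candidate tactics for the current proof state; the toy proof states, tactics, and the `apply_tactic`/`toy_predictor` names are all hypothetical stand-ins for a real interface to an interactive theorem prover such as Coq.

```python
import heapq

def apply_tactic(goals, tactic):
    """Toy stand-in for sending a tactic to a theorem prover.
    A proof state is a list of open goal strings; a tactic either
    closes the first goal, splits a conjunction, or fails (None)."""
    head, rest = goals[0], goals[1:]
    if tactic == "trivial" and head == "True":
        return rest                       # goal closed
    if tactic == "split" and "/\\" in head:
        left, right = head.split("/\\", 1)
        return [left.strip(), right.strip()] + rest
    return None                           # tactic not applicable

def search(goals, predict, budget=100):
    """Best-first proof search: repeatedly expand the most promising
    state using the predictor's ranked tactic suggestions, returning a
    tactic script once no goals remain (or None on exhausting the budget)."""
    counter = 0                           # tie-breaker for the heap
    frontier = [(0.0, counter, goals, [])]
    while frontier and budget > 0:
        score, _, state, script = heapq.heappop(frontier)
        if not state:
            return script                 # all goals closed: a complete proof
        budget -= 1
        for tactic, cost in predict(state):
            new_state = apply_tactic(state, tactic)
            if new_state is not None:
                counter += 1
                heapq.heappush(
                    frontier,
                    (score + cost, counter, new_state, script + [tactic]))
    return None

def toy_predictor(state):
    """A trained model would score tactics by likelihood for this state;
    here we return a fixed ranked list with uniform costs."""
    return [("split", 1.0), ("trivial", 1.0)]

print(search(["True /\\ True"], toy_predictor))  # ['split', 'trivial', 'trivial']
```

The point of the sketch is the division of labor the abstract describes: the search loop is model-agnostic, so either the supervised or the reinforcement-learning predictor can be plugged in as `predict` without changing the search itself.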

I demonstrated \name{}'s supervised model on the proof obligations from a large practical proof project, the CompCert verified C compiler, and showed that it can effectively automate what were previously manual proofs, automatically producing proofs for \PHPercent of the theorem statements in our test dataset when combined with solver-based tooling. Without any additional solvers, it achieves a proof completion rate that is a 4X improvement over prior state-of-the-art machine learning models for generating proofs in Coq. I then demonstrated that, for some classes of proofs, the reinforcement-learning model can build on the supervised model and produce even more proofs with no additional labeled data.
