Security-critical applications constantly face threats from exploits in lower computing layers such as the operating system, virtual machine monitors, or even attacks from malicious administrators. To help protect application secrets from such attacks, there is increasing interest in hardware implementations of primitives for trusted computing, such as Intel's Software Guard Extensions (SGX) instructions. These primitives enable hardware protection of memory regions containing code and data, and provide a root of trust for measurement, remote attestation, and cryptographic sealing. However, vulnerabilities in the application itself, such as the incorrect use of SGX instructions or memory safety errors, can be exploited to divulge secrets. In this paper, we introduce a new approach to formally model these primitives and formally verify properties of so-called enclave programs that use them. More specifically, we create formal models of relevant aspects of SGX, develop several adversary models, and present a sound verification methodology (based on automated theorem proving and information flow analysis) for proving that an enclave program running on SGX does not contain a vulnerability that causes it to reveal secrets to the adversary. We introduce Moat, a tool that formally verifies confidentiality properties of applications running on SGX. We evaluate Moat on several applications, including a one-time password scheme, off-the-record messaging, a notary service, and secure query processing.