Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Towards Theoretical Analysis and Empirical Improvement of Certified Robust Training


Recently, bound propagation based certified robust training methods have been proposed for training neural networks with certifiable robustness guarantees. Although state-of-the-art (SOTA) methods, including interval bound propagation (IBP) and CROWN-IBP, provide certified robustness with efficient per-batch training complexity, these certified robust training methods still face several challenges. First, they usually require a long warmup schedule, spanning hundreds or thousands of epochs, to gradually increase the perturbation radius for SOTA performance, and are thus still costly. Second, the convergence of IBP training remains unknown. In this paper, we identify two important issues underlying the need for slow warmup schedules in IBP training: exploded bounds at initialization and an imbalance in ReLU activation states. These two issues make certified training difficult and unstable, which is why long warmup schedules were needed in prior work. We propose improvements to mitigate these issues and obtain \textbf{65.03\%} verified error on CIFAR-10 ($\epsilon=\frac{8}{255}$) with very short training schedules. For the convergence problem, we show that for a randomly initialized two-layer ReLU neural network with logistic loss, given a sufficiently small perturbation radius and a sufficiently large network width, gradient descent for IBP training converges to zero robust training error at a linear rate with high probability, and at this convergence state the robustness certification by IBP accurately reflects the true robustness of the network.
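To make the bound propagation underlying IBP concrete, the following is a minimal pure-Python sketch of how interval bounds pass through one affine layer followed by a ReLU. It illustrates only the standard IBP bound rules (center/radius propagation for affine maps, monotonicity for ReLU); the function names and list-based representation are illustrative assumptions, not the implementation studied in the thesis.

```python
# Sketch of interval bound propagation (IBP) through affine + ReLU.
# An affine map y = W x + b sends the box [l, u] to bounds computed
# from the interval center c = (l+u)/2 and radius r = (u-l)/2:
#   y in [W c + b - |W| r,  W c + b + |W| r].

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds l <= x <= u through y = W x + b.
    l, u: lists of length n; W: m x n nested list; b: list of length m."""
    lo, hi = [], []
    for Wi, bi in zip(W, b):
        c = sum(w * (li + ui) / 2 for w, li, ui in zip(Wi, l, u)) + bi
        r = sum(abs(w) * (ui - li) / 2 for w, li, ui in zip(Wi, l, u))
        lo.append(c - r)
        hi.append(c + r)
    return lo, hi

def ibp_relu(l, u):
    """ReLU is elementwise monotone, so it maps bounds directly."""
    return [max(x, 0.0) for x in l], [max(x, 0.0) for x in u]
```

For example, with `W = [[1.0, -1.0]]`, `b = [0.0]`, and input box `[0.4, 0.1] <= x <= [0.6, 0.3]` (an $\ell_\infty$ ball of radius 0.1 around `[0.5, 0.2]`), the affine bounds are `[0.1, 0.5]`, and ReLU leaves them unchanged since both are nonnegative. Certified training minimizes a loss computed from such worst-case output bounds instead of the clean forward pass.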
