UCLA Electronic Theses and Dissertations

Is Neuron Coverage a Meaningful Measure for Testing Deep Neural Networks?

Abstract

The safety and reliability of deep learning systems depend on a convincing demonstration of their learned behavior. Unfortunately, testing metrics developed for traditional software are poorly suited to measuring how well this behavior has been captured. Recent work in this area has produced a popular candidate in neuron coverage, inspired by the notion of traditional code coverage: neuron coverage measures the proportion of a network's neurons activated by a test suite, as a proxy for the suite's effectiveness.
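For readers unfamiliar with the metric, the following is a minimal PyTorch sketch of the common thresholded definition of neuron coverage: a neuron counts as covered if its activation exceeds a threshold on at least one test input. The threshold value and the choice of which layers to instrument are illustrative assumptions, not specifics from this dissertation.

```python
import torch
import torch.nn as nn

def neuron_coverage(model, inputs, threshold=0.5):
    """Fraction of neurons activated above `threshold` on at least one input.

    Minimal sketch of the common thresholded definition; the threshold and
    the set of instrumented layers are illustrative assumptions.
    """
    activated = {}  # layer name -> boolean mask of covered neurons
    hooks = []

    def make_hook(name):
        def hook(module, inp, out):
            # Flatten per-example activations to (batch, neurons) and mark
            # any neuron that fires above the threshold on any input.
            fired = (out.detach().flatten(start_dim=1) > threshold).any(dim=0)
            if name in activated:
                activated[name] |= fired
            else:
                activated[name] = fired
        return hook

    # Instrument activation layers; other layer types could be added.
    for name, module in model.named_modules():
        if isinstance(module, (nn.ReLU, nn.Sigmoid, nn.Tanh)):
            hooks.append(module.register_forward_hook(make_hook(name)))

    with torch.no_grad():
        model(inputs)

    for h in hooks:
        h.remove()

    covered = sum(int(mask.sum()) for mask in activated.values())
    total = sum(mask.numel() for mask in activated.values())
    return covered / total if total else 0.0
```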

In this work, we systematically investigate the relationship between neuron coverage and three important test suite properties: defect detection, naturalness, and output diversity. We also propose a novel regularization technique that efficiently induces higher neuron coverage, and we use it to extend two well-known adversarial attack algorithms. Contrary to expectation, our study finds that increasing neuron coverage actually makes it harder to generate effective test suites: in most of our experimental configurations, higher neuron coverage meant fewer defects detected, less natural test inputs, and more biased class preferences that harmed output diversity. These results invite skepticism that neuron coverage is a meaningful measure for testing deep neural networks.
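The abstract does not specify the proposed regularizer, so purely as a hypothetical illustration of the general idea, the sketch below adds a smooth coverage-promoting term to a plain FGSM step in PyTorch. The surrogate term, the `lam` weight, and the `threshold` are all assumptions for illustration, not the dissertation's method.

```python
import torch
import torch.nn.functional as F

def coverage_regularized_fgsm(model, x, y, epsilon=0.03, lam=1.0, threshold=0.5):
    """FGSM-style step with an added term rewarding neuron activation.

    Hypothetical illustration only: ascends the classification loss plus a
    smooth surrogate (mean sigmoid margin above `threshold`) that pushes
    activations upward. Assumes the model contains ReLU layers.
    """
    acts = []
    handles = [m.register_forward_hook(lambda mod, inp, out: acts.append(out))
               for m in model.modules() if isinstance(m, torch.nn.ReLU)]

    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)

    # Smooth proxy for coverage: how strongly neurons exceed the threshold.
    coverage_term = torch.stack(
        [torch.sigmoid(a.flatten(start_dim=1) - threshold).mean() for a in acts]
    ).mean()

    loss = F.cross_entropy(logits, y) + lam * coverage_term
    grad, = torch.autograd.grad(loss, x_adv)

    for h in handles:
        h.remove()

    # One signed-gradient ascent step, as in standard FGSM.
    return (x_adv + epsilon * grad.sign()).detach()
```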
