UC Davis Electronic Theses and Dissertations
On the Adversarial Robustness of Machine Learning Algorithms

Abstract

Machine learning is now ubiquitous in our daily lives. On the one hand, the success of machine learning depends on the availability of large amounts of data. On the other hand, the diversity of data sources makes it harder for a machine learning system to obtain very high quality data. What makes matters worse is that a malicious adversary may deliberately modify the data or inject poisoning data to corrupt the learning system. This poses a serious threat to applications that are safety and security critical, for example, drug discovery, medical image analysis, and self-driving cars. Hence, it is necessary and urgent to investigate the behavior of machine learning under adversarial attacks. In this dissertation, we examine the adversarial robustness of three commonly used machine learning algorithms: linear regression, LASSO based feature selection, and principal component analysis (PCA).

In the first part, we study the adversarial robustness of linear regression. We assume there is an adversary in the linear regression system who tries to suppress or promote one of the regression coefficients. To achieve this goal, the adversary adds poisoning data samples or directly modifies the feature matrix of the original data. In the first scenario, where the adversary manipulates one of the regression coefficients by adding a single carefully designed poisoning data sample, we derive the optimal form of the poisoning data. We also introduce a semidefinite relaxation method to design the poisoning data when the adversary tries to modify one of the regression coefficients while minimizing the changes to the other regression coefficients. Finally, we propose an alternating optimization method to design a rank-one modification of the feature matrix.
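As a rough illustration of the single-point poisoning setting (not the dissertation's closed-form or semidefinite relaxation method), the sketch below uses a generic numerical optimizer to search for one poisoning sample that suppresses a chosen ordinary least squares coefficient. The variable names (target_idx, budget) and the use of scipy.optimize.minimize are illustrative assumptions.

```python
# Illustrative sketch only: numerically searching for one poisoning point
# (x_p, y_p) that suppresses a chosen OLS coefficient. A generic optimizer
# stands in for the dissertation's derived optimal poisoning data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

target_idx = 0          # coefficient the adversary wants to suppress
budget = 5.0            # crude bound on the poisoning point's magnitude

def coef_after_poisoning(z):
    """OLS coefficients after appending one poisoning sample (x_p, y_p)."""
    x_p, y_p = z[:d], z[d]
    Xa = np.vstack([X, x_p])
    ya = np.append(y, y_p)
    return np.linalg.lstsq(Xa, ya, rcond=None)[0]

# Minimize |beta_target| after poisoning, with a soft penalty for the budget.
obj = lambda z: abs(coef_after_poisoning(z)[target_idx]) \
                + 10.0 * max(0.0, np.linalg.norm(z) - budget) ** 2
res = minimize(obj, x0=np.zeros(d + 1), method="Nelder-Mead")

print("clean coefficient:   ", np.linalg.lstsq(X, y, rcond=None)[0][target_idx])
print("poisoned coefficient:", coef_after_poisoning(res.x)[target_idx])
```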

In the second part, we extend the study from linear regression to LASSO based feature selection and investigate the best strategy for modifying the feature matrix or response values so as to mislead the learning system into selecting the wrong features. We formulate this problem as a bi-level optimization problem. As the ℓ1 regularizer is not continuously differentiable, we use a smooth approximation of the ℓ1 norm and employ the interior point method to solve the LASSO problem and obtain the gradient information. Finally, we utilize the projected gradient descent method to design the modification strategy.
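The sketch below illustrates only the smoothing idea: replacing |t| with the differentiable surrogate sqrt(t² + ε) so the LASSO objective can be handled by gradient-based solvers. A generic quasi-Newton solver is used here as an assumption in place of the dissertation's interior point method, and the outer bi-level attack loop is omitted.

```python
# Illustrative sketch of the smooth l1 approximation: |t| ~ sqrt(t^2 + eps)
# makes the LASSO objective differentiable. The bi-level attack that wraps
# this inner solve is not shown.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, d = 40, 10
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:3] = [1.5, -2.0, 1.0]        # only 3 features are relevant
y = X @ beta_true + 0.05 * rng.normal(size=n)

lam, eps = 0.5, 1e-6

def smoothed_lasso(beta):
    """0.5 * ||y - X beta||^2 + lam * sum sqrt(beta_i^2 + eps)."""
    resid = y - X @ beta
    return 0.5 * resid @ resid + lam * np.sum(np.sqrt(beta**2 + eps))

res = minimize(smoothed_lasso, x0=np.zeros(d), method="BFGS")
selected = np.flatnonzero(np.abs(res.x) > 1e-2)
print("selected features:", selected)   # ideally [0 1 2]
```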

In the last part, we consider the adversarial robustness of the subspace learning problem. We examine the optimal modification strategy under an energy constraint for misleading the PCA based subspace learning algorithm. First, we derive the optimal rank-one attack strategy that modifies the original data so as to maximize the subspace distance between the original subspace and the one learned from the modified data. We then remove the rank constraint on the modification and find the optimal modification strategy.
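To make the objective concrete, the sketch below evaluates how a rank-one, energy-constrained perturbation uvᵀ moves the top-k PCA subspace, measuring the shift through principal angles. The random search over (u, v) is a stand-in assumption, not the dissertation's derived optimal attack; the energy budget eta and the projection-metric distance are likewise illustrative choices.

```python
# Illustrative sketch only: a rank-one perturbation u v^T with fixed
# Frobenius energy, scored by the principal-angle distance between the
# top-k PCA subspaces before and after modification.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(2)
n, d, k = 100, 8, 2
X = rng.normal(size=(n, d)) @ np.diag([5, 4, 1, 1, 1, 1, 1, 1])
eta = 10.0                      # energy budget: ||u v^T||_F = eta

def top_k_subspace(A):
    """Top-k right singular vectors (PCA subspace of the rows of A)."""
    return np.linalg.svd(A, full_matrices=False)[2][:k].T

V0 = top_k_subspace(X)
best_dist, best_pert = -np.inf, None
for _ in range(200):            # crude random search over rank-one attacks
    u = rng.normal(size=n)
    v = rng.normal(size=d)
    P = eta * np.outer(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    dist = np.linalg.norm(np.sin(subspace_angles(V0, top_k_subspace(X + P))))
    if dist > best_dist:
        best_dist, best_pert = dist, P

print("largest subspace distance found:", best_dist)
```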
