
UC Santa Barbara Electronic Theses and Dissertations

Machine Learning and Security in Adversarial Settings

Abstract

Recent advancements in Machine Learning (ML) and growing computing power have led to the increased use of ML-based systems in security-critical applications such as face recognition, fingerprint identification, and malware detection, as well as in high-stakes applications like autonomous driving. However, as these systems become more prevalent, it is crucial to consider their risks and limitations carefully and to develop robust and secure systems that can withstand attacks.

In this dissertation, I employ theoretical analysis and empirical evaluation to advance understanding at the intersection of Machine Learning and Computer Security. Specifically, I present novel ML-based approaches to security-related problems such as fake review detection and malware classification, and I analyze the limitations of existing ML-based malware classifiers proposed in academia and industry. Additionally, I investigate the threat of poisoning attacks against ML systems and propose three attacks: (1) Bullseye Polytope, a clean-label poisoning attack against transfer learning; (2) VenoMave, a poisoning attack against Automatic Speech Recognition; and (3) TrojanPuzzle, a poisoning attack against large language models of code.
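To give a concrete sense of what a clean-label poisoning attack against transfer learning involves, the sketch below crafts a poison image that keeps its original (clean) label but is pulled toward a chosen target in the feature space of a frozen backbone, so that a classifier fine-tuned on the poisoned data can misclassify the target. This is a generic, feature-collision-style illustration under assumed components (a toy feature_extractor and random target/base tensors are hypothetical stand-ins), not the dissertation's Bullseye Polytope formulation.

# Hypothetical sketch of clean-label poisoning in feature space.
# Not the dissertation's method; a minimal illustration of the general idea.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Assumed frozen feature extractor standing in for a pretrained backbone.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in feature_extractor.parameters():
    p.requires_grad_(False)

target = torch.rand(1, 3, 32, 32)   # instance the attacker wants misclassified
base = torch.rand(1, 3, 32, 32)     # clean image from the attacker's chosen label
poison = base.clone().requires_grad_(True)

opt = torch.optim.Adam([poison], lr=0.01)
beta = 0.1                          # keeps the poison visually close to the base image

for step in range(200):
    opt.zero_grad()
    # Match the target's features while staying near the base in pixel space,
    # so the poison keeps its clean label yet shifts the fine-tuned decision boundary.
    feat_loss = (feature_extractor(poison) - feature_extractor(target)).pow(2).sum()
    pixel_loss = beta * (poison - base).pow(2).sum()
    (feat_loss + pixel_loss).backward()
    opt.step()
    with torch.no_grad():
        poison.clamp_(0, 1)         # keep a valid image

In a transfer-learning setting the victim typically fine-tunes only a linear head on top of frozen features, which is why matching features alone can be enough to move the decision boundary around the target.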

Overall, this dissertation contributes to a deeper understanding of the challenges and opportunities at the intersection of Machine Learning and Computer Security and offers insights into building more secure and resilient ML-based systems.
