To operate safely, autonomous vehicles (AVs) rely on external sensors such as cameras, light detection and ranging (LiDAR) technology, and radar. These sensors pair with machine learning-based perception modules that interpret the surrounding environment and enable the AV to act accordingly. Perception modules are the “eyes and ears” of the vehicle and are vulnerable to cybersecurity attacks. The most critical and practical threats, however, arise from physical attacks that do not require access to the AV’s internal systems. The real-world risks posed by these attacks remain largely unquantified. To advance the field in this area, we conducted the first quantitative risk assessment of physical adversarial attacks on AVs. First, we identified the relevant attack vectors, or types of cybersecurity attacks, that target AV perception modules. Next, we conducted an in-depth analysis of the stages of each attack. Finally, we used these analyses to define risk metrics and compute risk scores for the different attack vectors. Through this process, we quantitatively ranked the real-life risks posed by the attack vectors identified in existing research. This work provides a framework for comprehensive risk analysis to help ensure the safety of AVs on our roadways.
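To make the scoring-and-ranking step concrete, the sketch below shows one way such a computation could be structured. It is a minimal illustration only: the metric names (feasibility, impact, stealth), the weights, and the numeric scores are hypothetical assumptions and do not reflect the metrics or values used in the study.

```python
# Minimal sketch of a quantitative risk-ranking workflow for physical
# adversarial attack vectors on AV perception. Metric names, weights,
# and scores are hypothetical placeholders, not values from the study.

from dataclasses import dataclass

@dataclass
class AttackVector:
    name: str
    feasibility: float   # ease of mounting the attack in the real world (0-1)
    impact: float        # severity of the resulting perception failure (0-1)
    stealth: float       # difficulty of detecting the attack (0-1)

# Hypothetical weights reflecting the relative importance of each metric.
WEIGHTS = {"feasibility": 0.4, "impact": 0.4, "stealth": 0.2}

def risk_score(v: AttackVector) -> float:
    """Weighted sum of per-metric scores; higher means riskier."""
    return (WEIGHTS["feasibility"] * v.feasibility
            + WEIGHTS["impact"] * v.impact
            + WEIGHTS["stealth"] * v.stealth)

# Example attack vectors of the kind discussed in the literature on
# physical attacks against AV perception; the numbers are illustrative.
vectors = [
    AttackVector("adversarial sticker on road sign (camera)", 0.8, 0.7, 0.6),
    AttackVector("LiDAR spoofing with injected laser pulses", 0.4, 0.9, 0.5),
    AttackVector("radar jamming", 0.3, 0.6, 0.3),
]

# Rank attack vectors from highest to lowest computed risk.
for v in sorted(vectors, key=risk_score, reverse=True):
    print(f"{v.name:45s} risk={risk_score(v):.2f}")
```

A weighted-sum model is only one possible choice; the same ranking workflow works with any scoring function that maps per-stage or per-metric assessments to a single comparable number.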