Real-Time Adaptation of Visual Perception
- Rathi, Avadhesh
- Advisor(s): Kim, Hyoseung
Abstract
Autonomous driving has advanced rapidly in recent years, driven by significant improvements in convolutional neural networks and video processing algorithms. Despite these advancements, the safety-critical nature of the application remains a major concern, since any error can lead to loss of life. Among the stages of an autonomous driving pipeline (perception, planning, and control), perception consumes the most processing time. Understanding the scene accurately and on time is essential, yet it remains challenging because of the computationally heavy algorithms and machine learning models involved. Recent studies have addressed this issue with approaches that rely on multiple sensors and expensive hardware setups for perception. However, these systems do not adapt or scale to fully utilize the underlying computing resources.
In this thesis, we present a simple yet effective approach to reducing the latency of real-time visual perception in autonomous vehicles. We take input from a single camera and perform lane and object detection. By dividing the input frame into critical and non-critical regions and distributing the workloads across both CPU and GPU resources, we obtain results that are both fast and accurate. In addition, we propose an adaptive scaling algorithm that tunes the input image resolution based on the observed processing time to keep the pipeline within its real-time deadline. To evaluate our approach, we build a small car prototype based on an NVIDIA Jetson Nano, equipped with a wide-angle camera and a motor driver controlling a DC motor on each of the four wheels. We conduct a case study on this prototype using a real-world dataset and compare it against conventional approaches. The results suggest that our proposed system recognizes objects in the frame with minimal latency and good accuracy compared to the other approaches.
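The adaptive scaling idea lends itself to a short illustration. Below is a minimal sketch, assuming a simple feedback loop that resizes each frame before detection: the frame budget, scale bounds, step size, hysteresis threshold, and the `detect` stub are all illustrative assumptions, not details taken from the thesis.

```python
import time

import cv2  # used here only for capture and resizing

# Illustrative constants -- assumed values, not from the thesis.
FRAME_DEADLINE_S = 1 / 30          # assume a 30 FPS real-time budget
MIN_SCALE, MAX_SCALE = 0.4, 1.0    # bounds on the resolution scale factor
STEP = 0.05                        # granularity of each scale adjustment


def detect(frame):
    """Stand-in for the lane/object detection pipeline."""
    pass


cap = cv2.VideoCapture(0)          # hypothetical single-camera input
scale = MAX_SCALE
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    start = time.monotonic()
    resized = cv2.resize(frame, None, fx=scale, fy=scale)
    detect(resized)
    elapsed = time.monotonic() - start
    # Feedback control: shrink the input when the deadline is missed,
    # and grow it back toward full resolution when there is slack.
    if elapsed > FRAME_DEADLINE_S:
        scale = max(MIN_SCALE, scale - STEP)
    elif elapsed < 0.8 * FRAME_DEADLINE_S:
        scale = min(MAX_SCALE, scale + STEP)
```

The 0.8 factor is an assumed hysteresis guard so the scale does not oscillate around the deadline; the thesis's actual tuning policy may differ.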