eScholarship
Open Access Publications from the University of California

Learning, Modeling, and Understanding Vehicle Surround Using Multi-Modal Sensing

  • Author(s): Sivaraman, Sayanan
Abstract

This dissertation seeks to enable intelligent vehicles to see, to infer context, and to understand the on-road environment. We provide a review of the literature in on-road vision-based vehicle detection, tracking, and behavior understanding. Placing vision-based vehicle detection in the context of sensor-based on-road surround analysis, we discuss monocular, stereo-vision, and active sensor-vision fusion for on-road vehicle detection. We discuss vehicle tracking in the monocular and stereo-vision domains, analyzing filtering, estimation, and dynamical models. We introduce relevant terminology for the treatment of on-road behavior, and provide perspective on future research directions in the field.

We introduce a general active learning framework for on-road vehicle detection and tracking. Active learning consists of initial training, querying of informative samples, and retraining, yielding improved performance with data efficiency. In this work, active learning reduces false positives by an order of magnitude. The generality of active learning for vehicle detection is demonstrated via learning experiments performed with detectors based on histogram of oriented gradients features with SVM classification (HOG-SVM), and Haar-like features with AdaBoost classification (Haar-AdaBoost). Learning approaches are assessed in terms of annotation time, data required, recall, and precision.

We introduce a synergistic approach to integrated lane and vehicle tracking for driver assistance. Integration improves lane tracking accuracy in dense traffic while reducing vehicle tracking false positives. Further, system integration yields lane-level localization, providing higher-level context. We introduce vehicle detection by independent parts for urban driver assistance, detecting oncoming, preceding, side-view, and partially occluded vehicles in urban driving.
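The active learning loop described above (initial training, querying of informative samples, retraining) can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the synthetic feature vectors stand in for HOG descriptors, the query strategy (distance to the SVM decision boundary) and round/batch sizes are assumptions chosen for clarity.

```python
# Minimal active-learning sketch: train an initial SVM, repeatedly query
# the most uncertain unlabeled samples, add their labels, and retrain.
# Synthetic data stands in for HOG feature vectors (hypothetical setup).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for HOG features: vehicle (1) vs. non-vehicle (0).
X = rng.normal(size=(600, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = np.arange(50)          # small initial training set
unlabeled = np.arange(50, 600)   # pool of unlabeled candidates

clf = LinearSVC().fit(X[labeled], y[labeled])

for _ in range(5):               # query/retrain rounds
    # Informativeness: samples closest to the decision boundary.
    margins = np.abs(clf.decision_function(X[unlabeled]))
    query = unlabeled[np.argsort(margins)[:20]]   # 20 most uncertain
    labeled = np.concatenate([labeled, query])    # oracle supplies labels
    unlabeled = np.setdiff1d(unlabeled, query)
    clf = LinearSVC().fit(X[labeled], y[labeled])

print(clf.score(X, y))
```

By focusing annotation effort on uncertain samples near the decision boundary, each retraining round spends labeling budget where it most changes the classifier, which is the data-efficiency argument the abstract makes.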
The full system is real-time capable and compares favorably with state-of-the-art vehicle detectors while operating 30 times as fast. We present a novel probabilistic compact representation of the on-road environment, the Dynamic Probabilistic Drivability Map (DPDM), and demonstrate its utility for predictive lane change and merge (LCM) driver assistance during highway and urban driving. A general, flexible, probabilistic representation, the DPDM readily integrates data from a variety of sensing modalities, functioning as a platform for sensor-equipped intelligent vehicles. Based on the DPDM, the real-time LCM system recommends the acceleration and timing required to safely merge or change lanes with minimum cost.
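The minimum-cost maneuver recommendation can be illustrated with a toy grid of drivability probabilities. Everything here is hypothetical: the cell layout, the cost function, and the candidate maneuver set are invented for illustration, whereas the actual DPDM integrates multi-modal sensor data and vehicle dynamics.

```python
# Toy sketch of minimum-cost lane-change selection over a drivability-
# map-like grid (hypothetical cells and costs, not the actual DPDM).
import math

# P(drivable) for cells (lane, longitudinal slot) around the ego vehicle.
drivability = {
    ("left", "behind"): 0.95,
    ("left", "beside"): 0.40,
    ("left", "ahead"): 0.90,
    ("right", "beside"): 0.85,
}

def maneuver_cost(p_drivable, accel):
    # Lower drivability and larger required acceleration both raise cost.
    return -math.log(max(p_drivable, 1e-9)) + 0.1 * abs(accel)

# Candidate maneuvers: (target cell, required acceleration in m/s^2).
candidates = [
    (("left", "behind"), -1.5),   # slow down, merge behind
    (("left", "ahead"), 1.2),     # speed up, merge ahead
    (("right", "beside"), 0.0),   # drift into the adjacent gap
]

best = min(candidates, key=lambda c: maneuver_cost(drivability[c[0]], c[1]))
print(best)
```

The sketch captures the trade-off the abstract describes: the recommended maneuver balances the probability that the target region is drivable against the acceleration required to reach it.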
