LIDAR, Camera and Inertial Sensors Based Navigation Techniques for Advanced Intelligent Transportation System Applications

Abstract

During the past decade, a great deal of research has been carried out on in-vehicle navigation and positioning. These approaches all aim to solve two problems: "where am I?" and "where are they?", i.e., automatic vehicle positioning, and the detection and tracking of surrounding vehicles.

Among the variety of sensor-based systems, computer vision-based approaches have been among the most popular and promising techniques; however, they suffer from intensity variations, narrow fields of view, and low-accuracy depth information. Light Detection and Ranging (LIDAR) is another attractive technology due to its high ranging accuracy, wide field of view, and low data-processing requirements. A major challenge for LIDAR-based systems, however, is that their reliability depends on the distance and reflectivity of different objects. Moreover, LIDAR often suffers from noise, making it difficult to distinguish between different kinds of objects. In this dissertation, we address several fundamental problems in integrating LIDAR and camera systems for better navigation and positioning solutions. As part of this research, we present a sensor fusion system that solves the "where are they" problem: the calibration of the sensor fusion system, together with vehicle detection and tracking algorithms, is proposed to determine the states of surrounding vehicles. The "where am I" solution focuses on the integration of LIDAR and inertial sensors for advanced vehicle positioning. In addition, a vehicle tracking approach is presented for freeway traffic surveillance systems.

Sensor fusion techniques have long been used to combine sensory data from disparate sources. In this dissertation, a tightly coupled LIDAR/computer vision (CV) integrated system is introduced. LIDAR-camera calibration is the key component of the sensor fusion system. A unique multi-planar LIDAR and computer vision calibration algorithm has been developed, which requires that the camera and LIDAR observe a planar pattern at different positions and orientations. The geometric constraints between the LIDAR and camera 'views' of the pattern are then solved for the rotation and translation coefficients of the coordinate transformation.
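As an illustration (not drawn from the dissertation itself), a plane-constraint calibration of this kind can be sketched as follows: the camera supplies each calibration plane as a normal/offset pair in the camera frame, and LIDAR points measured on the same plane provide one linear constraint each on the unknown extrinsics. All function and variable names below are assumptions.

```python
# Minimal sketch of multi-planar LIDAR-camera extrinsic calibration:
# each LIDAR point p on a plane (n, d) must satisfy n . (R p + t) = d,
# which is linear in the 12 unknowns vec(R) and t.
import numpy as np

def calibrate_lidar_camera(planes, lidar_points):
    """Estimate rotation R and translation t mapping LIDAR -> camera frame.

    planes:       list of (n, d) pairs; n is the unit plane normal in the
                  camera frame, d the plane offset (n . x = d).
    lidar_points: list of (m_i, 3) arrays of LIDAR hits on each plane.
    """
    rows, rhs = [], []
    for (n, d), pts in zip(planes, lidar_points):
        for p in pts:
            # One row per point: coefficients of vec(R) (row-major) then t.
            rows.append(np.concatenate([n[0] * p, n[1] * p, n[2] * p, n]))
            rhs.append(d)
    x, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    R_raw, t = x[:9].reshape(3, 3), x[9:]
    # Project the unconstrained 3x3 estimate onto SO(3) via SVD.
    U, _, Vt = np.linalg.svd(R_raw)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t
```

With several pattern poses at varied orientations, the stacked constraints determine the transformation uniquely; the SVD step enforces a valid rotation on the least-squares estimate.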

The proposed sensor fusion system is applied to mobile-platform-based vehicle detection and tracking. The LIDAR sensor generates hypotheses of possible vehicle positions, and Regions of Interest (ROIs) in the imagery are defined from these LIDAR object hypotheses. An AdaBoost object classifier is then applied to detect vehicles within the ROIs. Finally, each vehicle's position and dimensions are derived from both the LIDAR and image data. Experimental results are presented to demonstrate the reliability of this LIDAR/CV system.
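A minimal sketch of this LIDAR-seeded pipeline, under assumed names, is given below: each LIDAR cluster is projected through the calibrated extrinsics and camera intrinsics to cut an ROI, and an appearance classifier (standing in for the AdaBoost detector) is run only inside those ROIs.

```python
# Illustrative pipeline: LIDAR object hypotheses -> image ROIs -> classifier.
# `classifier` is a stand-in callable (patch -> bool); sizes are assumptions.
import numpy as np

def lidar_hypotheses_to_rois(clusters, K, R, t, vehicle_size=(1.8, 1.5)):
    """Map 3-D LIDAR clusters to image ROIs (x, y, w, h).

    clusters: list of (m, 3) point arrays in the LIDAR frame.
    K:        3x3 camera intrinsic matrix.
    R, t:     LIDAR -> camera extrinsics from the calibration step.
    """
    rois = []
    for pts in clusters:
        c = R @ pts.mean(axis=0) + t           # centroid in the camera frame
        if c[2] <= 0:                          # behind the camera: skip
            continue
        u, v = (K @ c)[:2] / c[2]              # pinhole projection
        # Approximate ROI size from an assumed vehicle width/height and depth.
        w = K[0, 0] * vehicle_size[0] / c[2]
        h = K[1, 1] * vehicle_size[1] / c[2]
        rois.append((int(u - w / 2), int(v - h / 2), int(w), int(h)))
    return rois

def detect_vehicles(image, rois, classifier):
    """Run the appearance classifier only inside the LIDAR-derived ROIs."""
    detections = []
    for (x, y, w, h) in rois:
        patch = image[max(y, 0):y + h, max(x, 0):x + w]
        if patch.size and classifier(patch):
            detections.append((x, y, w, h))
    return detections
```

Restricting the classifier to LIDAR-derived ROIs is what makes the fused system cheaper and more reliable than exhaustive image search: the LIDAR supplies depth and position, the classifier supplies appearance verification.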

In addition, this dissertation provides an autonomous positioning solution for urban environments. The position estimate is derived by combining measurements from LIDAR and inertial sensors, i.e., gyroscopes and accelerometers. The inertial sensors provide the vehicle's angular velocities and accelerations, while the LIDAR detects landmark structures (posts and surfaces). In our implementation, positioning is performed in a known environment, i.e., the map information is assumed to be available a priori. An Extended Kalman Filter (EKF) is implemented for the position estimation.
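The EKF structure this describes is standard; a minimal planar sketch (not the dissertation's exact formulation) with state [x, y, heading] is shown below. The gyro supplies the yaw rate, the integrated accelerometer a forward speed, and the LIDAR a range/bearing measurement to a mapped landmark; all noise values and names are placeholders.

```python
# Minimal EKF sketch for LIDAR/inertial positioning against known landmarks.
import numpy as np

def ekf_predict(x, P, v, omega, dt, Q):
    """Propagate state [x, y, theta] with inertial inputs (unicycle model)."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],   # Jacobian of the motion model
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R_meas):
    """Correct with a LIDAR range/bearing measurement to a known landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])   # predicted measurement
    H = np.array([[-dx / r, -dy / r,  0],              # measurement Jacobian
                  [ dy / q, -dx / q, -1]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap bearing residual
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

The inertial prediction step runs at high rate between LIDAR scans; each detected post or surface that matches the a priori map then triggers an update, bounding the drift of the dead-reckoned estimate.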

This dissertation also presents a vehicle tracking approach for traffic surveillance systems. One of the key challenges in freeway vehicle tracking is dealing with high-density traffic, where occlusion often leads to foreground splitting and merging errors. We propose a real-time multi-vehicle tracking approach that combines local feature tracking with a global color probability model. In our approach, corner features are tracked to provide position estimates of moving objects, and a color probability is then computed in the occluded area to determine which object each pixel belongs to. This approach has proven scalable to both stationary surveillance video and moving-camera video.
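The color-probability step can be illustrated, under assumed names and bin counts, as follows: each tracked vehicle keeps a normalized color histogram learned before occlusion, and every pixel in a merged foreground blob is assigned to the vehicle whose histogram gives it the highest likelihood.

```python
# Illustrative sketch of pixel-wise object assignment via color histograms.
import numpy as np

def build_histogram(pixels, bins=8):
    """Normalized RGB histogram (bins^3 cells) from an (n, 3) uint8 array."""
    idx = (pixels // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins**3).astype(float)
    return hist / max(hist.sum(), 1.0)

def assign_occluded_pixels(pixels, histograms, bins=8):
    """Label each occluded pixel with the index of the most likely object."""
    idx = (pixels // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    # Likelihood of every pixel under every object's histogram.
    likes = np.stack([h[flat] for h in histograms])    # (n_objects, n_pixels)
    return likes.argmax(axis=0)
```

The corner-feature tracks keep the position estimates alive through the occlusion, while this per-pixel assignment splits the merged blob back into its constituent vehicles once they separate.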
