Robust Vehicle State Estimation for Improved Traffic Sensing and Management
As traffic congestion continues to increase, it is critical to better monitor and manage traffic to maximize throughput while maintaining a high degree of safety. In terms of traffic sensing on our roadways, many areas in the United States now have infrastructure-based sensing systems that collect traffic information from embedded loop sensors (e.g., the Caltrans Performance Monitoring System (PeMS)); however, these sensors are expensive to install and are spatially sparse, limiting their use to estimating macroscopic traffic parameters such as average speed, density, and flow at relatively long time intervals (e.g., 5 minutes). In contrast, many Intelligent Transportation System (ITS) applications can benefit from temporally high-resolution (i.e., second-by-second) individual vehicle state estimation, allowing for better traffic management and vehicle safety applications.
This dissertation focuses on new methods developed to estimate high-resolution vehicle state information from both stationary, infrastructure-based sensors and on-board sensing. In terms of infrastructure-based sensing, an increasing number of roadside cameras are now being utilized by Traffic Management Centers (TMCs) to monitor traffic conditions, and video from these cameras can be used to determine high-resolution vehicle state information. As part of this dissertation, new methods have been developed to detect and track vehicles from stationary monocular video to determine not only velocity trajectory information but also vehicle pose and structure. As a result, it is possible to track segmented vehicles from stationary cameras with low vantage points, where occlusion commonly occurs. A two-stage approach is used. The first stage is a vehicle segmentation step that extracts the information needed to initialize the tracker: a modified meta-heuristic algorithm based on the genetic algorithm not only finds the vehicle's initial pose but also extracts an abstract 3D structure and the metric dimensions of the vehicle, which can provide important information to classify and/or identify vehicles. The second stage tracks the vehicle and reports its state through time: each vehicle is represented as a rectangular cuboid, and a particle filter estimates the vehicle's state based on features extracted from the initial pose and structure.
In addition to this off-board infrastructure approach, estimating real-time vehicle state information on board a vehicle (e.g., vehicle localization) is valuable to a variety of ITS applications. Traditional on-board localization methods that rely only on the Global Navigation Satellite System (GNSS), or on sensor fusion solutions in which satellite measurements aid an Inertial Navigation System (INS), often do not work well in crowded urban environments where buildings and trees block the line-of-sight to satellites along the vehicle's lateral direction (i.e., an "urban canyon"). A new robust localization method is proposed that fuses computer vision measurements, pseudo-range and Doppler measurements from GNSS, and measurements from an Inertial Measurement Unit (IMU). This method uses traffic light location data (i.e., mapped landmarks measured through a priori observations), taking advantage of existing infrastructure that is abundant in suburban and urban environments and easily detected by color vision sensors in both day and night conditions. A tightly coupled estimation process, formulated as an Extended Kalman Filter (EKF), uses observables from satellite signals as well as known feature observables from a camera to correct the INS. A traffic light detection method is also outlined in which the projected feature uncertainty ellipse is used to perform data association between a predicted feature and a set of detected features. This localization method depends on mapped visual features (landmarks) such as traffic lights, traffic signs, and features on large structures, so an offline surveying method is proposed that uses an integrated camera, carrier-phase DGPS, and INS to survey landmarks. The surveying method returns not only the positions of the visual features but also the uncertainty of those positions in the form of a covariance matrix.
The localization method is verified on visual features surveyed using carrier-phase DGPS. Together, these approaches satisfy positioning requirements of high accuracy, availability, and continuity at low cost.
Finally, this dissertation also considers a new imaging sensor that combines lenses and mirrors (catadioptric sensors) along a common optical axis to perceive different information around a vehicle, which can be advantageous for probe vehicles as well as for vehicle safety applications. The imaging sensor consists of a central catadioptric system that returns a panoramic view of the surrounding area using a hyperbolic mirror and a non-central catadioptric system that returns a wide perspective view of the ground plane. Together, these catadioptric sensors provide over 60% of the spherical view with two image views around the vehicle. By configuring the sensors to be co-axial with a displacement along the optical axis, each sensor retains its field of view with minimal occlusion. This new imaging sensor has applications in driver assistance systems and automated vehicles, where the vision information can be used for vehicle classification, lane detection, and depth recovery.