Enhanced Depth Map in Low Light Conditions for RGB Cameras
Most existing depth estimation methods are trained to predict depth from daytime images and perform poorly in low-light conditions due to the lack of clear environmental features, glare, overexposure, and noise. This is problematic for safe autonomous driving, as detecting pedestrians and guardrails at night is challenging and failures can create life-threatening situations. This thesis addresses the problem by preprocessing low-light input images to improve the quality of the resulting disparity maps. We introduce an algorithm that combines a defogging method, which enhances night images and improves their luminance, with generative adversarial or fully convolutional networks to learn accurate disparity predictions. Experiments show that the proposed method outperforms state-of-the-art methods that do not use such preprocessing.
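The abstract does not specify the defogging algorithm. One common way to enhance night images with a defogging method is to invert the low-light image, apply dark-channel-prior dehazing, and invert the result back, since inverted night images statistically resemble hazy daytime images. The sketch below illustrates that general idea only; the function names, patch size, and the `omega` and `t_min` parameters are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over channels, then a minimum filter over a local patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    h, w = mins.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def enhance_low_light(img, omega=0.8, t_min=0.1, patch=15):
    """Brighten a low-light RGB image (float array in [0, 1]) by
    dehazing its inversion with a dark-channel-prior estimate."""
    inv = 1.0 - img                        # inverted night image looks "hazy"
    dc = dark_channel(inv, patch)
    # Atmospheric light: colour of the brightest dark-channel pixels.
    n = max(1, int(dc.size * 0.001))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = inv[idx].mean(axis=0)
    # Transmission estimate from the normalised dark channel.
    t = 1.0 - omega * dark_channel(inv / A, patch)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Recover scene radiance in the inverted domain, then invert back.
    J = (inv - A) / t + A
    return np.clip(1.0 - J, 0.0, 1.0)

# Usage: a dark horizontal ramp becomes visibly brighter after enhancement.
h, w = 20, 20
ramp = 0.02 + 0.28 * np.arange(w) / (w - 1)
img = np.broadcast_to(ramp[None, :, None], (h, w, 3)).copy()
out = enhance_low_light(img)
```

In a pipeline like the one described, `out` (rather than the raw night image) would then be passed to the disparity network; the enhancement is purely a preprocessing step and requires no retraining of the enhancement stage itself.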