Images degraded by light scattering and absorption, such as hazy, sandstorm, and underwater images, often suffer from color distortion and low contrast because light travels through turbid media before reaching the camera. Such degradation can prevent systems that operate outdoors under varying lighting conditions from functioning properly, for example, video surveillance systems, autopilot systems, and intelligent transportation systems, which include tasks such as automatic license plate recognition and automatic traffic counting. Therefore, it is desirable to develop an effective method to restore color and enhance contrast for these images. This thesis presents novel work to advance research on image restoration and enhancement for such images.
To enhance or restore such a degraded image, the image formation model is often used; it describes the degraded image as a ``clear'' image blended with the ambient light according to the scene transmission, which in turn depends on the scene depth from the camera. The transmission describes the portion of the scene radiance that is neither scattered nor absorbed and therefore reaches the camera. By inverting the image formation process, one can recover the scene radiance, i.e., the ``clear'' image, from a degraded image. However, this is an ill-posed and under-constrained problem because both the ambient light and the scene transmission must be estimated from a single degraded image.
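For reference, a commonly used form of this model (the notation here is illustrative and may differ from that used in later chapters) is
\[
I^{c}(x) = J^{c}(x)\, t(x) + A^{c} \bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},
\]
where $I$ is the observed degraded image, $J$ is the scene radiance (the ``clear'' image), $A$ is the ambient light, $t$ is the scene transmission, $\beta$ is the attenuation coefficient of the medium, $d(x)$ is the scene depth at pixel $x$, and $c$ indexes the color channels.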
To tackle this problem, we propose using image blurriness to estimate the ambient light and scene depth of underwater images. We further extend this idea by combining light absorption and blurriness to estimate scene depth for underwater scenes captured under different lighting conditions and with different color tones. For images degraded by light scattering and absorption in general, not limited to underwater ones, we propose a generalization of the common dark channel prior approach for ambient light and transmission estimation. Additionally, adaptive color correction is incorporated into the image formation model to remove color casts while restoring contrast. Experimental results show that our proposed algorithms outperform, both subjectively and objectively, other state-of-the-art algorithms based on the image formation model.
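As background for the dark channel prior generalization mentioned above, the sketch below outlines the classical dark channel prior pipeline (dark channel, ambient light, transmission, and radiance recovery) in Python. It is only a minimal illustration of the baseline being generalized, not the method developed in this thesis; all function names and parameter values are assumptions.
\begin{verbatim}
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_ambient_light(img, dark, top_fraction=0.001):
    """Average the pixels with the largest dark-channel values (haziest)."""
    n = max(1, int(dark.size * top_fraction))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def estimate_transmission(img, A, omega=0.95, patch=15):
    """t(x) = 1 - omega * dark_channel(I / A)."""
    normalized = img / np.maximum(A, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)

def recover_radiance(img, A, t, t0=0.1):
    """Invert I = J*t + A*(1 - t), clamping t to avoid amplifying noise."""
    t = np.clip(t, t0, 1.0)[..., np.newaxis]
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Usage, assuming img is an H x W x 3 float array in [0, 1]:
# dark = dark_channel(img)
# A = estimate_ambient_light(img, dark)
# t = estimate_transmission(img, A)
# J = recover_radiance(img, A, t)
\end{verbatim}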