The objective of this research is to advance the state of the art in image matching, especially for input image pairs with dramatically inconsistent appearance (e.g., different sensor modalities, significant intensity or color changes, or different capture times such as day/night or years apart). We refer to this range of input as disparate input. To handle disparate input, a descriptor should capture underlying structure that is not affected by superficial changes in appearance.
To this end, we present a novel image descriptor based on the distribution of line segments in an image; we call it DUDE (DUality DEscriptor). By exploiting line-point duality, DUDE descriptors are computationally efficient and robust to unstable line segment detection. Our experiments show that DUDE provides more true-positive correspondences on challenging disparate datasets.
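To make the line-point duality idea concrete, the Python sketch below maps each detected line segment to a point in the classic (theta, rho) dual space and aggregates those points into a length-weighted histogram. This is only a minimal, global stand-in under stated assumptions: the detector (Canny + Hough), the binning scheme, and the function names are illustrative choices, not the published DUDE formulation.

```python
import cv2
import numpy as np

def dual_point(x1, y1, x2, y2):
    """Map a line segment to a point in (theta, rho) dual space:
    theta is the orientation of the supporting line, rho its signed
    distance from the origin (the classic Hough/polar dual)."""
    theta = np.arctan2(y2 - y1, x2 - x1) % np.pi   # orientation in [0, pi)
    n = theta + np.pi / 2                          # normal direction
    rho = x1 * np.cos(n) + y1 * np.sin(n)          # both endpoints give same rho
    return theta, rho

def dude_like_descriptor(gray, theta_bins=8, rho_bins=8):
    """Build a 2D histogram over dual-space points of detected segments.
    A simplified, global stand-in for the DUDE descriptor."""
    h, w = gray.shape
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=15, maxLineGap=3)
    hist = np.zeros((theta_bins, rho_bins), dtype=np.float64)
    if segs is None:
        return hist.ravel()
    rho_max = np.hypot(h, w)                       # |rho| is bounded by the diagonal
    for x1, y1, x2, y2 in segs[:, 0]:
        theta, rho = dual_point(x1, y1, x2, y2)
        ti = min(int(theta / np.pi * theta_bins), theta_bins - 1)
        ri = min(int((rho + rho_max) / (2 * rho_max) * rho_bins), rho_bins - 1)
        # Weight by segment length so long, stable segments dominate
        hist[ti, ri] += np.hypot(x2 - x1, y2 - y1)
    norm = np.linalg.norm(hist)
    return (hist / norm).ravel() if norm > 0 else hist.ravel()
```

The intuition behind the robustness claim is visible here: a segment that is fragmented or whose endpoints jitter across detections still lies on (nearly) the same supporting line, so its dual point lands in the same histogram bin.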
Beyond traditional image matching, we design an effective autograding system for multiview engineering drawings that also uses DUDE to improve its performance. The system must compare drawings that may differ in appearance due to students' mistakes, while also differentiating between allowable and erroneous translation and/or scale changes.
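One plausible way to realize the translation/scale check is to estimate a similarity transform from matched feature locations and test its parameters against tolerances. The sketch below does exactly that with OpenCV's RANSAC-based estimator; the tolerance values and the verdict strings are hypothetical placeholders, not the system's actual grading criteria.

```python
import cv2
import numpy as np

def transform_verdict(pts_ref, pts_sub, max_scale_err=0.02, max_shift_px=5.0):
    """Classify the geometric relation between a reference drawing and a
    student submission from matched point pairs (N x 2 float32 arrays).
    Tolerances here are illustrative placeholders."""
    M, _ = cv2.estimateAffinePartial2D(pts_sub, pts_ref,
                                       method=cv2.RANSAC,
                                       ransacReprojThreshold=3.0)
    if M is None:
        return "no consistent transform (likely a wrong or missing view)"
    # For a similarity transform, M = [[s*cos a, -s*sin a, tx],
    #                                  [s*sin a,  s*cos a, ty]]
    scale = np.hypot(M[0, 0], M[1, 0])
    shift = np.hypot(M[0, 2], M[1, 2])
    if abs(scale - 1.0) > max_scale_err:
        return f"erroneous scale change (s = {scale:.3f})"
    if shift > max_shift_px:
        return f"erroneous translation ({shift:.1f} px)"
    return "ok"
```

In practice the allowable/erroneous boundary would come from the assignment's grading rules (e.g., a uniform repositioning of the whole sheet may be acceptable while a shift of one view relative to the others is not); the fixed thresholds above only illustrate the mechanism.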
In addition to hand-crafted descriptors, this research also investigates data-driven descriptors generated by deep-learning-based approaches. Due to the lack of labeled disparate imagery datasets, it remains challenging to target disparate input effectively with deep learning. We therefore introduce an aggressive data augmentation strategy called Artificial Intensity Remapping (AIR). By applying AIR to standard datasets, one can train models that are more effective for registering disparate data. Finally, we compare the DUDE descriptor to a deep-learning-based descriptor trained with AIR.
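As a rough illustration of the idea behind intensity-remapping augmentation, the sketch below pushes a grayscale image through a random piecewise-linear tone curve; geometry is untouched while appearance changes drastically. The knot scheme, parameter choices, and allowance of non-monotonic curves are assumptions for illustration, not the published AIR parameterization.

```python
import numpy as np

def air_like_remap(gray, n_knots=6, rng=None):
    """Apply a random piecewise-linear intensity remapping to a uint8
    grayscale image, simulating sensor/illumination changes while
    leaving geometry untouched. Knot scheme is illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    xs = np.linspace(0, 255, n_knots)            # fixed input knot positions
    ys = rng.uniform(0, 255, size=n_knots)       # random output levels per knot
    lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)
    return lut[gray]                             # apply the lookup table
```

Because the random curve need not be monotonic, it can invert local contrast, which resembles the changes seen between disparate modalities; training on pairs augmented this way encourages the learned descriptor to rely on structure rather than raw intensity.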