Physics-based Learning for Large-scale Computational Imaging
In computational imaging systems (e.g., tomographic imaging, computational optics, magnetic resonance imaging), data acquisition and image reconstruction are co-designed to retrieve information that is not traditionally accessible. The performance of such a system is characterized by how information is encoded into (the forward process) and decoded from (the inverse problem) its measurements. Recently, critical aspects of these systems, such as their signal priors, have been optimized using deep neural networks formed by unrolling the iterations of a physics-based image reconstruction algorithm.
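To make the unrolling idea concrete, the following is a minimal sketch, not the dissertation's actual method: each "layer" of the network is one iteration of gradient descent on the data-fidelity term, followed by a signal-prior step whose per-iteration thresholds `priors` are the hypothetical learnable parameters.

```python
import numpy as np

def unrolled_reconstruction(y, A, priors, step=0.1, n_iters=5):
    """Sketch of an unrolled physics-based reconstruction.

    Each "layer" is one gradient-descent iteration on the data
    fidelity ||Ax - y||^2, followed by a prior step. Here the prior
    is a soft-threshold whose per-iteration thresholds `priors`
    stand in for parameters that would be trained by
    backpropagating through all n_iters layers.
    """
    x = A.T @ y  # adjoint initialization
    for t in range(n_iters):
        grad = A.T @ (A @ x - y)   # gradient of the physics-based forward model
        x = x - step * grad        # data-consistency update
        x = np.sign(x) * np.maximum(np.abs(x) - priors[t], 0.0)  # prior step
    return x
```

With the thresholds fixed at zero and `A` the identity, each layer is a pure data-consistency step and the reconstruction simply returns the measurements; training would instead adjust `priors` (and potentially the experimental design encoded in `A`) end to end.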
In this dissertation, I detail my work on physics-based learned design, which optimizes the performance of the entire computational imaging system by jointly learning aspects of its experimental design and its computational reconstruction. As an application, I show how the LED-array microscope performs super-resolved quantitative phase imaging, and I demonstrate how physics-based learning can select a reduced set of measurements without sacrificing performance, enabling the imaging of live, fast-moving biology.
In the latter half of this dissertation, I discuss how to overcome some of the computational challenges encountered when applying physics-based learning to large-scale computational imaging systems. I describe my work on memory-efficient learning, which makes physics-based learning for large-scale systems feasible on commercially available graphics processing units. I demonstrate the method on two large-scale real-world systems: 3D multi-channel compressed sensing MRI and super-resolution optical microscopy.
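The memory bottleneck arises because standard backpropagation stores every intermediate state of the unrolled network. The following is an illustrative sketch, under the simplifying assumption of a purely linear, invertible layer, of the general recomputation idea: keep only the final state and recover each earlier state on the fly during the backward pass, so memory stays constant in the number of unrolled iterations.

```python
import numpy as np

def forward_layer(x, step, A, y):
    """One data-consistency layer: gradient step on ||Ax - y||^2."""
    return x - step * (A.T @ (A @ x - y))

def memory_efficient_grad(x0, A, y, step, n_iters, loss_grad):
    """Sketch of memory-efficient backpropagation through an unrolled net.

    Rather than storing every intermediate state, only the final state
    is kept; each preceding state is recomputed by inverting the layer
    as the backward pass walks from the last layer to the first. The
    affine form x_{t+1} = M x_t + b (hence the explicit inverse) is an
    assumption made here for illustration only.
    """
    # Forward pass: keep only the final state.
    x = x0
    for _ in range(n_iters):
        x = forward_layer(x, step, A, y)

    # Backward pass: recover each previous state, then pull the
    # loss gradient back through that layer's Jacobian M.
    M = np.eye(A.shape[1]) - step * (A.T @ A)  # Jacobian of one layer
    M_inv = np.linalg.inv(M)
    b = step * (A.T @ y)
    g = loss_grad(x)                 # dL/dx at the network output
    for _ in range(n_iters):
        x = M_inv @ (x - b)          # recompute the previous state
        g = M.T @ g                  # backpropagate one layer
    return g                         # dL/dx0
```

The peak memory of this sketch is one state vector regardless of `n_iters`, which is the property that makes learning through long unrolled reconstructions tractable on a single GPU; practical systems achieve the same effect with per-layer invertibility or checkpointing rather than a global matrix inverse.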