Optics and Algorithms for Designing Miniature Computational Cameras and Microscopes
- Author(s): Yanny, Kyrollos
- Advisor(s): Waller, Laura
Abstract
Traditional cameras and microscopes are typically optimized to produce sharp 2D images of an object. These 2D images miss important information about the world, such as depth and spectrum. Access to this information can have a significant impact on fields such as neuroscience, medicine, and robotics: volumetric neural imaging in freely moving animals requires compact, head-mountable 3D microscopes, and tumor classification in tissue benefits from access to spectral information. Modifications that enable capturing these extra dimensions often result in bulky, expensive, and complex imaging setups. In this dissertation, I focus on designing compact, single-shot computational imaging systems that capture high-dimensional information (depth and spectrum) about the world. This is achieved by using a multiplexing optic as the image-capture hardware and formulating image recovery as a convex optimization problem.

First, I discuss the design of a single-shot compact miniature 3D fluorescence microscope, termed Miniscope3D. By placing an optimized multifocal phase mask at the objective's exit pupil, 3D fluorescence intensity is encoded into a single 2D measurement, and the 3D volume is recovered by solving a sparsity-constrained inverse problem. This enables 2.76 µm lateral and 15 µm axial resolution across a 900 × 700 × 390 µm³ volume at 40 volumes per second, in a device smaller than a U.S. quarter.

Second, I discuss the design of a single-shot hyperspectral camera, termed Spectral DiffuserCam, which combines a diffuser with a tiled spectral filter array. This enables recovery of a hyperspectral volume with higher spatial resolution than the spectral filter array alone provides. The system is compact and flexible, and can be designed with contiguous or non-contiguous spectral filters tailored to a given application.

Finally, the iterative reconstruction methods generally used for compressed sensing take thousands of iterations to converge and rely on hand-tuned priors. I discuss a deep learning architecture, termed MultiWienerNet, that pairs multiple differentiable Wiener filters with a convolutional neural network to account for the system's spatially varying point spread functions. The result is a 625–1600× speedup over iterative methods that use spatially varying models, with better reconstruction quality than deep learning methods that assume shift invariance.
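To make the recovery step concrete, the sketch below solves a sparsity-constrained inverse problem of the kind described above, using FISTA with an L1 prior and a shift-invariant, per-depth convolutional forward model. The forward model, solver choice, and regularization here are illustrative assumptions, not the dissertation's exact pipeline, which handles field-varying PSFs and application-specific priors.

```python
# Minimal sketch: single-shot 3D recovery as a sparsity-constrained inverse
# problem. ASSUMPTIONS (not from the source): each depth plane of the volume
# is 2D-convolved with a calibrated PSF for that depth and summed onto the
# sensor; FISTA with an L1 prior and a nonnegativity constraint is the solver.
import numpy as np

def fft_conv(x, otf):
    """Circular 2D convolution of each depth plane, computed via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x, axes=(-2, -1)) * otf, axes=(-2, -1)))

def forward(v, otf):
    """Forward model A: sum per-depth convolutions into one 2D measurement."""
    return fft_conv(v, otf).sum(axis=0)

def adjoint(y, otf):
    """Adjoint A^T: correlate the measurement with each depth's PSF."""
    return fft_conv(y[None], np.conj(otf))

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(y, psfs, lam=1e-3, n_iter=200):
    """Solve min_v 0.5*||A v - y||^2 + lam*||v||_1 s.t. v >= 0 with FISTA."""
    otf = np.fft.fft2(psfs, axes=(-2, -1))
    # Conservative step size: 1/L with L an upper bound on ||A^T A||,
    # obtained from the summed per-depth OTF magnitudes.
    step = 1.0 / (np.abs(otf).sum(axis=0).max() ** 2)
    v = np.zeros_like(psfs)
    z, t = v.copy(), 1.0
    for _ in range(n_iter):
        grad = adjoint(forward(z, otf) - y, otf)
        v_new = np.maximum(soft_threshold(z - step * grad, step * lam), 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = v_new + ((t - 1) / t_new) * (v_new - v)
        v, t = v_new, t_new
    return v
```

With a calibrated PSF stack `psfs` of shape (depths, H, W) and a sensor image `y` of shape (H, W), `fista(y, psfs)` returns the recovered volume; in practice the per-depth PSFs come from a calibration procedure.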
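The MultiWienerNet idea can likewise be sketched as a bank of learnable Wiener deconvolution filters whose outputs feed a CNN. The PyTorch code below is a hedged illustration: the initialization of the filters from PSFs measured at different field positions, the learnable regularization term, and the omitted downstream CNN are assumptions rather than the dissertation's exact architecture.

```python
# Minimal sketch of a multi-Wiener front end. ASSUMPTIONS (not from the
# source): K Wiener filters initialized from PSFs at K field positions, a
# learnable per-filter noise-to-signal ratio, and a downstream CNN (omitted)
# that merges the K locally deconvolved estimates into one reconstruction.
import torch
import torch.nn as nn

class MultiWiener(nn.Module):
    def __init__(self, psfs):
        """psfs: (K, H, W) tensor of PSFs from K field positions."""
        super().__init__()
        self.psfs = nn.Parameter(psfs.clone())                    # learnable PSFs
        self.log_reg = nn.Parameter(torch.zeros(psfs.shape[0]))   # learnable regularizers

    def forward(self, y):
        """y: (B, H, W) measurement -> (B, K, H, W) deconvolved stack."""
        H = torch.fft.fft2(self.psfs)                  # (K, H, W) OTFs
        reg = torch.exp(self.log_reg).view(1, -1, 1, 1)
        Y = torch.fft.fft2(y).unsqueeze(1)             # (B, 1, H, W)
        W = torch.conj(H) / (H.abs() ** 2 + reg)       # per-position Wiener filters
        return torch.fft.ifft2(W * Y).real
```

Because every operation is differentiable, the Wiener parameters and the downstream CNN can be trained end to end, which is what lets the filters adapt to the spatially varying blur instead of assuming a single shift-invariant PSF.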