Free-moving Omnidirectional 3D Gamma-ray Imaging and Localization
- Author(s): Hellfeld, Daniel
- Advisor(s): Vetter, Kai
The ability to localize and map the distribution of gamma-ray emitting radionuclides in 3D has applications in medical imaging, nuclear contamination remediation, and nuclear security and safeguards. The deployment of freely moving detection systems, such as hand-held instruments or ground/aerial-based vehicles, is critical in overcoming the inverse square law and complex shielding scenarios. Using auxiliary contextually-aware sensors capable of perceiving spatiotemporal characteristics of the environment, these systems can simultaneously generate 3D maps of the surroundings and track the position and orientation of the gamma-ray sensitive detectors in the scene. The fusion of contextual scene data and gamma-ray detector data to facilitate real-time 3D gamma-ray image reconstruction has previously been demonstrated with mobile germanium and CdZnTe-based Compton cameras for gamma-ray energies ranging from a few hundred keV to several MeV. This concept is applied here to lower energy (50-400 keV) gamma-rays using an active coded mask imaging modality. The platform for demonstration is the Portable Radiation Imaging Spectroscopy and Mapping (PRISM) system, a hand-held spherical active coded array of many 1 cm³ coplanar-grid CdZnTe detectors designed for omnidirectional coded mask and Compton imaging with uniform directional sensitivity. This work presents the design, development, and coded mask optimization of PRISM, as well as the methodologies developed for real-time reconstruction using a scene-data-constrained, GPU-accelerated, list-mode maximum likelihood expectation maximization (ML-EM) algorithm. Experimental results from several measurements in the lab and in the field are shown.
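To make the reconstruction approach concrete, the following is a minimal sketch of a list-mode ML-EM update on a voxelized image. It is illustrative only: the system matrix `A`, the per-voxel `sensitivity` accumulated along the detector trajectory, and the iteration count are assumed inputs, not details taken from PRISM itself (which additionally constrains the voxel grid with scene data and runs the update on a GPU).

```python
import numpy as np

def listmode_mlem(A, sensitivity, n_iter=50):
    """List-mode ML-EM reconstruction (illustrative sketch).

    A           : (n_events, n_voxels) system matrix; A[i, j] is the
                  modeled probability that a source in voxel j
                  produced detected event i.
    sensitivity : (n_voxels,) total detection sensitivity per voxel,
                  integrated over the measurement trajectory.
    Returns the estimated per-voxel source intensities.
    """
    n_voxels = A.shape[1]
    lam = np.ones(n_voxels)              # uniform initial image
    for _ in range(n_iter):
        expected = A @ lam               # expected contribution per event
        lam *= (A.T @ (1.0 / expected)) / sensitivity
    return lam
```

Each iteration multiplies the current image by a ratio that increases intensity in voxels that explain the detected events well, preserving non-negativity by construction.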
A novel approach to 3D gamma-ray image reconstruction for scenarios where sparsity in the source distribution may be assumed, for example radiological source search, is also presented. While the generality of ML-EM enables use in a wide variety of scenarios, it is susceptible to overfitting, limited by the discretization of spatial coordinates, and can be computationally expensive. A better-conditioned Point-Source Localization (PSL) approach is formulated as an optimization problem in which both source position and intensity are continuous variables. This formulation is then extended and generalized to an iterative algorithm for sparse parametric 3D image reconstruction called Additive Point-Source Localization (APSL), in which the image is modeled as the sum of multiple point-sources with continuous positions and intensities. APSL mitigates overfitting through its iterative bottom-up construction and statistically founded stopping criterion and, because of the inherent point-source assumption and continuous variables, produces images with improved accuracy and interpretability compared with ML-EM. A set of simulated source search scenarios using a single non-directional detector is considered to demonstrate the concept and compare ML-EM and APSL. Experimental results using a nearly isotropic, contextually-aware, LaBr₃ detector system are then presented, demonstrating improved localization accuracy and computational efficiency with APSL.
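The core of the PSL formulation can be sketched as a Poisson maximum-likelihood fit over continuous source parameters. The sketch below is a simplified 2D illustration under stated assumptions: a single non-directional detector with an inverse-square response, noise-free counts, and a generic Nelder-Mead optimizer; the detector model, helper names (`expected_counts`, `psl_fit`), and starting point are all hypothetical, not the thesis's implementation. APSL would wrap such a fit in a loop that adds one point-source at a time until a statistical stopping criterion is met.

```python
import numpy as np
from scipy.optimize import minimize

def expected_counts(pos, intensity, det_positions):
    """Inverse-square response of a non-directional detector (assumed model)."""
    r2 = np.sum((det_positions - pos) ** 2, axis=1)
    return intensity / np.maximum(r2, 1e-6)

def neg_log_likelihood(params, counts, det_positions):
    """Poisson negative log-likelihood, dropping constant log(k!) terms."""
    pos, intensity = params[:2], params[2]
    mu = expected_counts(pos, intensity, det_positions)
    return np.sum(mu - counts * np.log(np.maximum(mu, 1e-12)))

def psl_fit(counts, det_positions, x0):
    """Fit continuous source position (x, y) and intensity to the counts."""
    res = minimize(neg_log_likelihood, x0, args=(counts, det_positions),
                   method="Nelder-Mead")
    return res.x
```

Because position and intensity are continuous, the fitted source is not snapped to a voxel grid, which is the property the abstract credits for APSL's improved localization accuracy.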