A free-hand scanning approach to medical imaging allows flexible, lightweight probes to image intricate anatomies for modalities such as fluorescence lifetime imaging (FLIm), optical coherence tomography (OCT), and ultrasound. While highly promising, this approach faces several key challenges, including tissue motion during imaging, varying lighting conditions in the surgical field, and sparse sampling of the tissue surface. These challenges limit the coregistration accuracy and interpretability of the acquired imaging data. Here we report FLImBrush, a robust method for the localization and visualization of intraoperative free-hand fiber-optic FLIm. FLImBrush builds upon an existing method, employing deep learning-based image segmentation, block-matching-based motion correction, and interpolation-based visualization to address the aforementioned challenges. Our results demonstrate that FLImBrush accurately localizes FLIm point measurements while producing interpretable, complete visualizations of FLIm data acquired from a tissue surface. Each of the main processing steps runs in real time (> 30 frames per second), highlighting the feasibility of FLImBrush for intraoperative imaging and surgical guidance. These findings support the integration of FLImBrush into a range of surgical applications, including cancer margin assessment during head and neck surgery.
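The abstract does not specify the details of the block-matching motion correction. As an illustration only, a minimal exhaustive-search block matcher, which estimates the displacement of a small image block between two frames by minimizing the sum of squared differences (SSD), might be sketched as follows; the function name, block size, and search radius are assumptions, not the authors' implementation:

```python
import numpy as np

def block_match(ref, cur, center, block=8, search=4):
    """Estimate the (dy, dx) displacement of the block of size `block`
    centered at `center` in `ref`, by exhaustive SSD search over a
    +/- `search` pixel window in `cur`.

    Illustrative sketch only: parameters and search strategy are
    assumptions, not the method described in the paper.
    """
    y, x = center
    h = block // 2
    template = ref[y - h:y + h, x - h:x + h].astype(float)
    best_ssd, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Candidate block at the displaced location in the current frame
            cand = cur[y - h + dy:y + h + dy,
                       x - h + dx:x + h + dx].astype(float)
            ssd = np.sum((template - cand) ** 2)
            if ssd < best_ssd:
                best_ssd, best_off = ssd, (dy, dx)
    return best_off
```

In practice, real-time implementations typically restrict the search window, use early-termination or hierarchical search, and operate on many blocks per frame; the exhaustive search above is kept only for clarity.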