DREEM: A Deep Learning System for Tracking Biological Agents at Any Spatiotemporal Scale
- Prasad, Aaditya
- Advisor(s): Manor, Uri
Abstract
Analyzing the dynamics underlying various neural phenotypes is key to understanding the systems, cellular, and subcellular processes that guide them. However, while many robust methods exist for object detection in biological settings, such as SLEAP and CellPose, there is no universal approach for linking these detections through time. Specifically, given the complex and variable imaging conditions under which most biological videography occurs, current deep learning approaches that exploit only local information may not be suitable for this setting. This is because biological videos contain edge cases that are rarely seen in the applications most multiple object tracking (MOT) approaches are designed for, such as pedestrian and automotive tracking. Here, we introduce DREEM (DREEM Reconstructs Every Entity’s Motion), a deep learning framework that leverages a transformer-based architecture to directly learn associations between objects over a large temporal context. We demonstrate that DREEM enables the training of state-of-the-art models for biological MOT. We then show that DREEM is sample-efficient and transfers seamlessly across diverse settings, enabling use in a wide range of fields. Our code and pretrained models will be released at https://github.com/talmolab/biogtr.
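To make the association idea concrete, below is a minimal sketch, not the authors' implementation: a transformer pools features from all detections within a temporal window and scores pairwise associations directly, rather than matching only between adjacent frames. The module name, feature dimensions, and window handling here are illustrative assumptions.

```python
# Illustrative sketch of transformer-based association scoring over a temporal
# window. All names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class AssociationTransformer(nn.Module):
    """Scores pairwise associations between detections in a temporal window."""

    def __init__(self, d_model: int = 256, n_heads: int = 8, n_layers: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (1, N, d_model) embeddings (e.g., appearance + spatiotemporal
        # cues) for all N detections pooled over the whole window, so each
        # detection attends to every other detection, not just neighbors in time.
        x = self.encoder(feats)
        # Pairwise association logits between every pair of detections.
        return x @ x.transpose(1, 2)  # (1, N, N)


# Example: 12 detections across a window, 256-dim features per detection.
model = AssociationTransformer()
feats = torch.randn(1, 12, 256)
assoc = model(feats)  # (1, 12, 12) association logits
# At inference, detections in the current frame could be linked to existing
# tracks by, e.g., softmax over these scores followed by greedy or Hungarian
# matching.
```

Scoring associations over an entire window, rather than frame-to-frame, is what lets this style of model tolerate occlusions and appearance changes that break purely local trackers.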