Explainable Deep Learning for Biomedical Time Series Classification
- Author(s): Ivaturi, Praharsh
- Advisor(s): Cottrell, Garrison W
Through recent advances in wearable medical devices and the subsequent explosion of biological data, deep learning has emerged as a promising approach for the automatic analysis of biomedical time series signals. However, deep learning models currently operate as black boxes, and most efforts to explain their classification decisions 1) are designed for image classification, 2) produce only local explanations, or 3) trade off accuracy for explainability by learning a symbolic, interpretable model. In this study, we introduce a post hoc explainability framework for deep networks in the clinical domain, which provides model explanations at both global and local levels. Global explanations give a bird's-eye view of how a model behaves and whether its behavior aligns with the expectations of clinical experts. Local explanations convey useful information about the model's behavior on a specific input by highlighting the most important regions or features. We present a comprehensive analysis of this framework on the clinically important problem of detecting atrial fibrillation from single-lead electrocardiography signals.
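To make the idea of a local explanation concrete, the sketch below uses occlusion sensitivity, a common post hoc technique for time series: slide a masking window over the input signal and record how much the model's score drops when each region is hidden. This is an illustrative stand-in, not the framework's actual method; the `model_score` function, window sizes, and the synthetic "ECG" signal are all hypothetical.

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: scores a 1-D signal by the
# energy in one region, mimicking a model that keys on a clinically relevant
# segment of the recording. A real framework would call a trained network here.
def model_score(signal):
    return float(np.sum(signal[40:60] ** 2))

def occlusion_saliency(signal, score_fn, window=10, stride=5, baseline=0.0):
    """Local explanation via occlusion: mask each window of the signal with a
    baseline value and attribute to its samples the resulting drop in score."""
    base = score_fn(signal)
    saliency = np.zeros_like(signal, dtype=float)
    counts = np.zeros_like(signal, dtype=float)
    for start in range(0, len(signal) - window + 1, stride):
        occluded = signal.copy()
        occluded[start:start + window] = baseline
        drop = base - score_fn(occluded)          # how much this region mattered
        saliency[start:start + window] += drop
        counts[start:start + window] += 1
    return saliency / np.maximum(counts, 1)      # average over overlapping windows

rng = np.random.default_rng(0)
ecg = rng.normal(0, 0.1, 100)   # synthetic noise standing in for an ECG trace
ecg[40:60] += 1.0               # inject a segment the toy model attends to
sal = occlusion_saliency(ecg, model_score)
# Most of the attribution falls inside samples 40-60, the region the model uses.
```

The same slide-and-mask loop applies unchanged to a real single-lead ECG classifier: only `score_fn` changes, which is what makes occlusion a model-agnostic local explanation.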