Identifying anomalous human pose data is crucial to many emerging data-driven artificial intelligence systems. For instance, patient monitoring systems can analyze patient behavior from movement and pose predictions. Although pose tracking methods have improved over the years, anomalous pose estimates, even if infrequent, can cause troublesome events, such as erroneous characterizations of patient behavior, which can lead to false diagnoses and require labor-intensive manual review to identify the anomalous poses. This cost could be mitigated by detecting and correcting anomalous pose estimates in an automated fashion. Thus, we present an anomaly analysis framework for clinical human pose estimates to address these concerns.
In this study, we define anomalous human pose estimates by a thresholded Euclidean distance between manually labeled joint locations and computer-vision-based predictions of those locations. We annotated and analyzed a new human pose dataset from a clinical setting to study the subject-wise sensitivity and accuracy of anomaly detection with our proposed variational autoencoder (VAE)-based frameworks. We performed anomaly analysis and detection based on our frameworks with PatientPose, a 2D pose estimator designed for the clinical setting. We demonstrate a strategy to correct anomalous pose estimates to improve pose estimation accuracy, and we quantify and consider design trade-offs for our anomalous pose detection method. We also compare our method with classic anomaly detection methods such as Isolation Forest and One-Class Support Vector Machine (OC-SVM) with time-domain input. The outcome of this study provides an out-of-the-box anomaly detection method for clinical human pose estimation frameworks and empowers follow-up research and systems development with imperfect human pose data.
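To make the anomaly definition and the classic baselines concrete, the sketch below illustrates, under our assumptions, how a frame could be labeled anomalous via a per-joint Euclidean distance threshold and how Isolation Forest and OC-SVM baselines could be fit on flattened time-domain joint coordinates. The threshold value, array shapes, and baseline hyperparameters are placeholders for illustration only, not the exact settings used in our experiments.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

def label_anomalous_poses(pred_joints, gt_joints, threshold_px=20.0):
    """Label a frame anomalous if any predicted joint deviates from its
    manually labeled counterpart by more than threshold_px pixels.

    pred_joints, gt_joints: arrays of shape (n_frames, n_joints, 2).
    Returns a boolean array of shape (n_frames,).
    (threshold_px is an illustrative value, not the study's setting.)
    """
    dists = np.linalg.norm(pred_joints - gt_joints, axis=-1)  # per-joint error
    return (dists > threshold_px).any(axis=-1)                # per-frame label

def fit_time_domain_baselines(train_poses):
    """Fit the classic baselines on time-domain input by flattening each
    frame's joint coordinates into a feature vector.
    (Hyperparameters here are placeholders.)
    """
    X = train_poses.reshape(len(train_poses), -1)
    iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
    ocsvm = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X)
    return iso, ocsvm
```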