UCLA Electronic Theses and Dissertations

Towards Fair and Interpretable AI Healthcare Predictive Models: From Wearable Sensors to Causal Graphs

Abstract

The rapid expansion of data in the healthcare sector has highlighted the need for powerful and user-friendly artificial intelligence (AI) techniques in medicine. Although AI toolkits have transformed areas such as image recognition and natural language processing, their integration into healthcare has been relatively slow. Patient records draw on data from a variety of sources, including electronic health records, medical imaging, wearable and ambient biosensors, lab results, and genomics, with the aim of capturing the intricacies of patient health conditions. However, the complex, diverse, and high-dimensional nature of medical datasets creates unique challenges for data analysis and limits the effectiveness and practicality of existing solutions. In addition, the introduction of medical AI models raises ethical and legal concerns, such as potential model bias against minority groups, the lack of interpretability of some AI algorithms, and data privacy issues. Hence, further research on the development and deployment of medical AI models is necessary. This thesis focuses on using machine learning and causal inference to solve applied research problems on healthcare data, with the goal of building fair and trustworthy medical AI models.

The first part of our work uses machine learning models and statistical toolkits to construct predictive risk models from a patented remote patient monitoring system. The models are based on a comprehensive set of features derived from wearable sensors and Bluetooth beacons; these features provide a clear picture of the daily activities of the frail population in rehabilitation settings. Additionally, we propose a deep transfer learning framework to classify arrhythmia heartbeats. The proposed method fine-tunes a general-purpose image classifier, ResNet-18, on the MIT-BIH arrhythmia dataset. We trained the proposed arrhythmia classifier in accordance with the AAMI EC57 standard to ensure that there was no data leakage during model development. The next aspect of our work in healthcare analytics focuses on imbalanced learning, where classes are not equally represented in the medical dataset. This imbalance is challenging for machine learning classifiers, often leading to biased predictions that favor the majority class and to low accuracy on the minority class. To address this issue, we introduce a new approach that combines a weighted oversampling technique with an ensemble boosting method to improve accuracy on the minority class while maintaining accuracy on the majority class.
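As an illustration of the transfer learning setup described above, the following is a minimal PyTorch sketch of fine-tuning an ImageNet-pretrained ResNet-18 for beat classification. The five-class AAMI beat grouping, the rendering of beats as three-channel images, and patient-disjoint train/test splits (as required by AAMI EC57) are assumptions made for this sketch rather than details taken from the thesis.

```python
# Minimal sketch of fine-tuning ResNet-18 for arrhythmia beat classification.
# Assumes ECG beats have already been rendered as 3-channel images and split
# into patient-disjoint train/test sets so that no record contributes beats
# to both splits. NUM_CLASSES = 5 follows the assumed AAMI beat grouping.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # assumed AAMI classes: N, S, V, F, Q

def build_arrhythmia_classifier(freeze_backbone: bool = False) -> nn.Module:
    # Start from ImageNet-pretrained weights (a general-purpose image classifier).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    # Replace the final fully connected layer for the five beat classes.
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model

model = build_arrhythmia_classifier()
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # One fine-tuning step on a batch of beat images and integer class labels.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```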
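For the imbalanced-learning approach, the sketch below shows the general idea: rebalance the training set by oversampling minority-class points, then fit a boosting ensemble on the rebalanced data. The inverse-frequency balancing and the AdaBoost ensemble used here are stand-ins for the thesis's weighted oversampling and boosting method, not the exact algorithm, and the synthetic dataset is purely illustrative.

```python
# Sketch: balance the training set by resampling minority-class points with
# replacement, then fit a boosting ensemble on the rebalanced data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

def weighted_oversample(X, y, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    X_parts, y_parts = [X], [y]
    for cls, cnt in zip(classes, counts):
        if cnt < n_max:
            idx = np.flatnonzero(y == cls)
            # Draw extra minority samples with replacement; per-sample weights
            # (e.g., emphasizing hard examples) could bias this draw.
            extra = rng.choice(idx, size=n_max - cnt, replace=True)
            X_parts.append(X[extra])
            y_parts.append(y[extra])
    return np.vstack(X_parts), np.concatenate(y_parts)

# Illustrative imbalanced dataset (roughly 95% / 5% class split).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_res, y_res = weighted_oversample(X, y)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
```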

The second part of our work focuses on using causal inference to develop fair and interpretable machine learning models. Incorporating causality can improve both a model's interpretability and its performance. Causal relationships are often represented as directed acyclic graphs (DAGs) known as causal graphs, which allow researchers to identify the causes of the outcome variables and to eliminate irrelevant factors during modeling through visual inspection. In this thesis, we develop a causal discovery algorithm that identifies causal relationships in high-dimensional healthcare datasets. The proposed algorithm treats causal discovery as a continuous constrained optimization problem with a polynomial constraint: the objective function evaluates how well the data fit the estimated causal graph, while the constraint ensures that the estimated graph contains no cycles.
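To make the constrained formulation concrete, here is a small NumPy sketch of the kind of score and acyclicity constraint used in NOTEARS-style continuous causal discovery. The polynomial constraint shown is one common choice and is assumed for illustration; the thesis's exact objective and constraint may differ.

```python
# Sketch of continuous, score-based causal discovery over a weighted adjacency
# matrix W. h(W) = 0 exactly when W encodes a DAG; the polynomial form below
# is one standard choice for that acyclicity constraint.
import numpy as np

def acyclicity_constraint(W: np.ndarray) -> float:
    d = W.shape[0]
    # trace[(I + (1/d) * W*W)^d] equals d iff the graph has no directed cycles.
    M = np.eye(d) + (W * W) / d
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

def least_squares_score(W: np.ndarray, X: np.ndarray) -> float:
    # Goodness of fit of a linear structural equation model X ≈ X @ W
    # (lower is better).
    n = X.shape[0]
    return 0.5 / n * np.linalg.norm(X - X @ W, ord="fro") ** 2

# The full method minimizes  least_squares_score(W, X) + lambda * ||W||_1
# subject to acyclicity_constraint(W) == 0, typically via an augmented
# Lagrangian solved with a gradient-based optimizer.
```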

Another aspect of this thesis involves building a causal model to estimate the conversion rate (CVR) in e-commerce recommender systems. This task is particularly challenging in industrial settings due to two major issues: selection bias arising from user self-selection, and data sparsity resulting from rare click events. Our work addresses these challenges by leveraging inverse propensity weighting to adjust for selection bias in the final estimate. Additionally, our methods build on a multi-task learning framework, which mitigates the impact of data sparsity.
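The sketch below illustrates the general shape of such a model in PyTorch: a shared representation feeding CTR and CVR heads, with the CVR loss restricted to clicked impressions and reweighted by the inverse of the predicted click propensity. The layer sizes, head structure, and loss combination are illustrative assumptions, not the thesis's exact architecture.

```python
# Minimal sketch of IPW-corrected CVR estimation in a multi-task setup.
# The click propensity is the model's own CTR prediction; its inverse
# reweights the CVR loss on clicked impressions to counter selection bias.
import torch
import torch.nn as nn

class MultiTaskCVR(nn.Module):
    def __init__(self, in_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.ctr_head = nn.Linear(hidden, 1)  # click probability
        self.cvr_head = nn.Linear(hidden, 1)  # conversion probability given click

    def forward(self, x):
        h = self.shared(x)
        return torch.sigmoid(self.ctr_head(h)), torch.sigmoid(self.cvr_head(h))

def ipw_multitask_loss(p_ctr, p_cvr, click, convert, eps: float = 1e-6):
    # CTR task: supervised on all impressions.
    ctr_loss = nn.functional.binary_cross_entropy(p_ctr, click)
    # CVR task: supervised only on clicked impressions (click == 1), with each
    # sample reweighted by the inverse of its detached click propensity.
    w = click / p_ctr.detach().clamp(min=eps)
    cvr_loss = nn.functional.binary_cross_entropy(p_cvr, convert, weight=w)
    return ctr_loss + cvr_loss
```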
