This thesis focuses on two major applications of machine learning in healthcare and is divided into two sections.
In the first, we discuss our work extracting information from pathology reports across cancers at UCSF. Personalized healthcare is at the frontier of machine learning and medicine and has the potential to revolutionize patient care. A major resource for personalized healthcare is the large volume of medical text, and our ability to leverage such data depends on how accurately we can extract information from it. As a result, there has been much interest in automated text analytics and information extraction methods for healthcare text, with applications in health informatics, precision medicine, and clinical research. Implementing such extraction systems in practice is challenging, since many automated extraction systems rely on large amounts of annotated text to perform adequately. However, annotating healthcare text is a largely manual effort: a time-consuming and expensive process that requires training and medical knowledge. It is thus difficult to obtain sufficient annotated data across a variety of clinical domains. Consequently, while deep learning has proven extremely powerful in natural language processing, it can underperform in biomedical applications because of the smaller number of labeled examples. It is therefore of considerable practical importance to develop biomedical natural language processing methods that perform well in the absence of large amounts of labeled data. In this work, we build natural language processing systems for extracting information from pathology reports across cancers at UCSF, investigating practical problems in the deployment of such systems and developing methods that need fewer labeled examples; the resulting systems perform as well as the state of the art while requiring only 40\% of the labeled data. Beyond natural language processing in healthcare, we also develop machine learning methods for county-level COVID-19 death predictions.
For the second section, we discuss models for short-term forecasting of COVID-19 deaths. As the COVID-19 outbreak evolves, accurate forecasting continues to play an extremely important role in informing policy decisions. In this section, we present our continuous curation of a large data repository containing COVID-19 information from a range of sources. We use these data to develop predictions, and corresponding prediction intervals, for the short-term trajectory of cumulative COVID-19 death counts at the county level in the United States, up to two weeks ahead. Using data from January 22 to June 20, 2020, we develop and combine multiple forecasts using ensembling techniques, resulting in an ensemble we refer to as the Combined Linear and Exponential Predictors (CLEP).
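The ensembling idea can be illustrated with a minimal sketch in Python. The function name `clep_combine`, the weighting parameter `mu`, and the specific weighting rule below are illustrative assumptions, not the exact scheme used in the full method; they capture only the general principle of down-weighting predictors that have performed poorly recently.

```python
import numpy as np

def clep_combine(recent_errors, new_preds, mu=0.5):
    """Combine several point forecasts into one by giving smaller weight
    to predictors with larger recent error: weight_i is proportional to
    mu ** recent_errors[i], and weights are normalized to sum to one."""
    weights = mu ** np.asarray(recent_errors, dtype=float)
    weights /= weights.sum()
    return float(np.dot(weights, np.asarray(new_preds, dtype=float)))
```

With equal recent errors this reduces to a simple average of the individual forecasts; as one predictor's recent error grows, its influence on the ensemble shrinks geometrically.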
Our individual predictors include county-specific exponential and linear predictors, a shared exponential predictor that pools data together across counties, an expanded shared exponential predictor that uses data from neighboring counties, and a demographics-based shared exponential predictor. We use prediction errors from the past five days to assess the uncertainty of our death predictions, resulting in generally applicable prediction intervals, Maximum (absolute) Error Prediction Intervals (MEPI). Averaged across counties, MEPI achieves a coverage rate of more than 94\% when predicting cumulative recorded death counts two weeks in the future.
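The MEPI construction can be sketched as follows. The function `mepi` and its normalized-error convention are simplifying assumptions for illustration: the interval stretches the new point forecast by the largest normalized absolute error observed over a recent window (the past five days in our setting).

```python
import numpy as np

def mepi(past_preds, past_actuals, new_pred):
    """Maximum (absolute) Error Prediction Interval: widen the new point
    forecast by the largest normalized absolute error of the recent
    forecasts, yielding a data-driven interval around new_pred."""
    past_preds = np.asarray(past_preds, dtype=float)
    past_actuals = np.asarray(past_actuals, dtype=float)
    # Largest normalized absolute error over the recent window.
    delta = np.max(np.abs(past_preds - past_actuals) / past_actuals)
    return new_pred * (1.0 - delta), new_pred * (1.0 + delta)
```

Because the interval width is driven entirely by recently observed errors, it requires no distributional assumptions on the forecast errors, which is what makes the construction broadly applicable.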
Our forecasts are currently being used by the non-profit organization Response4Life to determine the medical supply needs of individual hospitals and have directly contributed to the distribution of medical supplies across the country. We hope that our forecasts and data repository can help guide necessary county-specific decision-making and help counties prepare for their continued fight against COVID-19.