Using knowledge encoded in graphical disease models to support context-sensitive visualization of medical data
Given the large quantity of diverse, heterogeneous data in a typical patient record, users spend much of their time and effort finding the information relevant to their tasks. One of the greatest problems in today's healthcare environment is matching the increased capability to gather patient data with a comparable ability to understand, analyze, and act rationally upon this information. This dissertation attempts to bridge this gap by presenting methods for creating context-sensitive visualizations using graphical disease models. Building upon past efforts in Bayesian belief networks and explanation generation, this work explores how a model's variables, defined relationships, and probabilities can be used to identify which data elements in the patient record are important and how that information should be presented in a given context. A data model, called the visual dictionary, integrates contextual information from graphical disease models and other knowledge sources (e.g., ontologies and user/task models) to generate instructions for laying out patient data in a graphical user interface. These concepts are implemented in two separate applications that demonstrate how context-sensitive visualizations can: 1) help users query large biomedical repositories; and 2) generate an integrated, longitudinal view of a multimedia patient record. The applications were used to evaluate the feasibility of using graphical disease models to retrieve relevant documents and to obtain feedback on the adaptive interfaces through pilot usability studies; initial results were positive overall. Developing context-sensitive visualizations that help users query these models and understand the results is a significant step toward using collected data to improve patient care at the bedside.