Open Access Publications from the University of California

A Quantitative Investigation into the Design Trade-offs in Decision Support Systems

  • Author(s): Schaffer, James Austin
  • Advisor(s): Höllerer, Tobias

Users frequently make decisions about which information systems to incorporate into their information analysis, and they abandon tools that they perceive as untrustworthy or ineffective. Decision support systems - automated agents powered by complex algorithms - are often effective but simultaneously opaque; meanwhile, simple tools are transparent and predictable but limited in their usefulness. Tool creators have responded by increasing the transparency (via explanation) and customizability (via control parameters) of complex algorithms, or by improving the effectiveness of simple algorithms (for example, adding personalization to keyword search). Unfortunately, requiring user input or attention demands cognitive bandwidth, which can hurt performance in time-sensitive operations. Simultaneously, improving the performance of algorithms typically makes the underlying computations more complex, reducing predictability, increasing potential mistrust, and sometimes degrading user performance. Ideally, software engineers could create systems that accommodate human cognition; however, not all of the factors that affect decision making in human-agent interaction (HAI) are known.

In this work, we conduct a quantitative investigation into the role of human insight, awareness of system operations, cognitive load, and trust in the context of decision support systems. We conduct several experiments with different task parameters that shed light on the relationship between human cognition and the availability of system explanation and control under varying degrees of algorithm error. Human decision-making behavior is quantified in terms of which information tools are used, which information is incorporated, and domain decision success. Measuring intermediate cognitive variables allows for the testing of mediation effects, which facilitates the explanation of effects related to system explanation, control, and error. Key findings are that (1) a simple, reliable, domain-independent profiling test can predict human decision behavior in the HAI context, (2) correct user beliefs about information systems mediate the effects of system explanations to predict adherence to advice, and (3) explanations from and control over complex algorithms increase trust, satisfaction, interaction, and adherence, but they also cause humans to form incorrect beliefs about data.
