In an era where technological progress has propelled research on human-machine intelligent systems, it is becoming increasingly important to study the fundamental principles behind human behavior from a computational point of view. This thesis aims to use advanced technologies, combined with advanced modeling methodologies and modern control algorithms, to study the principles behind modeling human decision making, with two purposes. First, to use computational modeling frameworks to better understand the mechanisms and factors that affect decision making in different problem contexts, from both the control-design and psychology perspectives. Second, to introduce a unifying framework for integrating human policies into controller design in order to improve the performance of human-machine intelligent systems. Models are grouped into two classes: direct methods and optimization-based methods. Direct methods map observations to decisions through stored function maps and are associated with lower-level, reflexive, and repetitive behaviors. Optimization-based methods, associated with higher-level planning behaviors, require extra cognitive effort to generate state predictions in search of a solution that satisfies a set of criteria.
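As an illustrative contrast, with generic symbols rather than those used in any specific chapter, a direct method evaluates a stored policy map, while an optimization-based method searches over predicted state trajectories generated by an internal forward model:
\[
\text{direct:}\;\; u_k = \pi(x_k),
\qquad
\text{optimization-based:}\;\; u^{*}_{k:k+N-1} = \arg\min_{u}\; \sum_{i=0}^{N-1} \ell\big(\hat{x}_{k+i}, u_{k+i}\big)
\;\;\text{s.t.}\;\; \hat{x}_{k+i+1} = f\big(\hat{x}_{k+i}, u_{k+i}\big),
\]
where \(\pi\) is the stored observation-to-decision map, \(\ell\) encodes the task criteria, and \(f\) is the forward model used to generate the state predictions.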
To support the proposed modeling frameworks, both game-based and real-world experiments were conducted with the aid of advanced test apparatus and sensor technologies. Driving experiments on real roads and in simulators explored driver behavior in everyday highway driving, extreme driving on slippery surfaces, and distracted driving with obstacles. Game-based experiments involving a projectile game and a dual-task game were performed to collect consistent data in a controlled setting, and were designed to parallel the real-world driving contexts.
Results from the experiments showed that, in extreme driving, driver behavior was best captured by a piecewise-affine switched model with two modes, which differentiates behavior in the linear and saturation regions of the tire. Simulations with a model predictive approach also showed that drivers must account for the nonlinearity in the tire dynamics in order to follow a learned reference trajectory. Similarly, results from the projectile game revealed that subjects adopted switched strategies due to the nonlinearity and uncertainty involved in the problem. In particular, the strategy depended on whether the scenario was old or new. In old scenarios, subjects likely used a linear feedback strategy mapping errors directly to changes in the control input. In new scenarios, subjects used an optimization-based strategy that minimized a combination of the time to hit the target and the change in control inputs, thereby reducing the effect of uncertainty on the state trajectories.
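A minimal sketch of these two findings, with illustrative symbols that are not the exact formulation in the thesis: the extreme-driving model switches its affine feedback law according to the tire regime, and the projectile-game strategy trades off time-to-target against changes in the control input:
\[
u_k =
\begin{cases}
K_1 x_k + c_1, & |\alpha_k| \le \alpha_{\mathrm{sat}} \quad \text{(tire in linear region)}\\[2pt]
K_2 x_k + c_2, & |\alpha_k| > \alpha_{\mathrm{sat}} \quad \text{(tire saturated)}
\end{cases}
\qquad
\min_{\Delta u}\; w_t\, t_{\mathrm{hit}} + w_u \sum_k \|\Delta u_k\|^2,
\]
where \(\alpha_k\) denotes the tire slip angle, \((K_i, c_i)\) are the affine gains and offsets of the two modes, and \(w_t, w_u\) weight the time to hit the target against control changes.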
To validate the conjecture that humans perform mental simulations of states when using optimization-based algorithms, eye-tracking glasses were used to better estimate the cognitive states of the subjects. Eye-tracking of the driver during curve negotiation showed switching between a far point and a near point, and eye-tracking of the juggler revealed an early shift in gaze to the predicted apex of the ball. Both results support the conjecture that humans perform mental simulations using some form of forward model.
In addition to the continuous decisions in the previously discussed applications, the discrete decision making of attention allocation in a dual-task problem was also investigated and modeled as a Markov decision process (MDP). Simulation and inverse reinforcement learning showed that subjects first adopted a conservative approach and later converged to a riskier strategy as they gained certainty through interaction with the game. A similar framework was applied to a real-world texting-while-driving context on the highway. Results from eye-tracking showed that attention duration on the phone decreased as vehicle speed increased, which agrees with the predictions of the MDP model.
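As a generic formulation of such an attention-allocation MDP (the state and reward definitions here are placeholders, not the exact ones used in the thesis), the optimal glance policy satisfies the Bellman equation
\[
V^{*}(s) = \max_{a \in \{\text{road},\,\text{phone}\}} \Big[ R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big],
\]
where the state \(s\) could encode, for example, vehicle speed and lane-keeping error; inverse reinforcement learning then recovers the reward \(R\) that best explains the observed glance sequences.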
Lastly, to integrate the advanced modeling methodologies into intelligent systems, the framework of model predictive control was modified to include driver models in the predictions. Controller intervention was minimized so that the semi-autonomous vehicle behaves more like the driver, leading to shared-control algorithms with tighter integration of human models.
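One common way to express such a minimal-intervention objective, written here as a sketch rather than the exact cost used in the thesis, is
\[
\min_{u_{0:N-1}} \; \sum_{k=0}^{N-1} \big\| u_k - \pi_{\mathrm{driver}}(x_k) \big\|^2
\quad \text{s.t.} \quad
x_{k+1} = f(x_k, u_k), \qquad x_k \in \mathcal{X}_{\mathrm{safe}},
\]
so that the controller deviates from the input predicted by the driver model \(\pi_{\mathrm{driver}}\) only as far as the safety constraints \(\mathcal{X}_{\mathrm{safe}}\) require.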