In schizophrenia (SZ) and other psychotic illnesses, negative symptoms (e.g., avolition, anhedonia) contribute to profound social and role impairment and are largely unresponsive to existing pharmacological and psychotherapeutic interventions. One specific process, reinforcement learning (RL), defined as using feedback to map outcomes onto actions and thereby guide decision-making and behavior, has been repeatedly implicated in the etiology of negative symptoms in psychotic illness. Some evidence suggests that SZ is characterized by difficulty learning from positive but not negative feedback, deficits in learning initial associations between stimuli and outcomes, and deficits in making decisions under ambiguity (i.e., when the probabilities of adverse outcomes are unknown). However, existing work is limited by contradictory findings about whether initial learning of associations is impaired, by modeling methods that do not fully account for asymmetries in learning, and by inconsistent evidence linking these deficits to clinical symptomatology.
The goal of this dissertation was to address these limitations and to rigorously investigate negative symptoms, reward-guided decision-making, and RL deficits in psychotic illness. I endeavored to characterize moderators of deficit severity and symptom severity across the full spectrum of psychotic presentations. To this end, I adopted a dimensional approach that ensured variability in patient samples: a psychosis spectrum sample in Study 1, an investigation of possible shared and distinct RL deficits in SZ and bipolar disorder in Study 2, and an exploration of white matter integrity as a meaningful predictor of variability in RL in Study 3. In Study 1, I demonstrated that when making decisions under ambiguity, individuals with psychosis can learn to differentiate high risk/low reward from low risk/high reward contexts; however, greater negative symptom severity is associated with a failure to maximize rewards in low-risk situations. In Study 2, I employed a computational RL model that accounts for asymmetries in integrating positive and negative feedback, as well as for retention of the values of specific choices over time. Although individuals with psychosis appear to acquire initial associations, there are differences in how they use feedback to modify future behavior, and negative symptoms moderate this difference: greater severity is associated with heavier weighting of negative feedback and lighter weighting of positive feedback. In Study 3, I explored the relationship between the computational RL parameters from Study 2 and white matter connectivity in frontoparietal and corticostriatal circuits, two circuits implicated in RL; although I found no associations between RL parameters and structural brain connectivity, I highlight the need to examine other relevant circuits and propose avenues for future investigation of neural contributions to RL. I also discuss how the work presented in this dissertation bears on broader etiological frameworks for psychotic illness.
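To make the Study 2 modeling approach concrete without reproducing its exact specification, the sketch below illustrates the general class of model described above: a simple Q-learning rule with separate learning rates for positive and negative prediction errors and a decay term standing in for retention of learned values. All names and parameter values (alpha_pos, alpha_neg, decay, beta) are hypothetical choices for illustration, not the dissertation's fitted model.

```python
import numpy as np

def simulate_asymmetric_q_learning(rewards, alpha_pos=0.3, alpha_neg=0.1,
                                   decay=0.05, beta=3.0, rng=None):
    """Illustrative Q-learning with asymmetric learning rates and decay.

    rewards: array of shape (n_trials, n_actions); rewards[t, a] is the
             feedback the agent would receive for choosing action a on
             trial t.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_trials, n_actions = rewards.shape
    q = np.zeros(n_actions)                 # learned action values
    choices = np.zeros(n_trials, dtype=int)

    for t in range(n_trials):
        # Softmax choice rule: larger beta -> more deterministic choices.
        logits = beta * q
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(n_actions, p=p)
        choices[t] = a

        # Prediction error for the chosen action.
        delta = rewards[t, a] - q[a]

        # Asymmetric update: positive and negative feedback are weighted
        # by different learning rates.
        lr = alpha_pos if delta > 0 else alpha_neg
        q[a] += lr * delta

        # Retention/decay: values drift toward zero each trial, capturing
        # forgetting of learned values over time. (Some variants decay
        # only the unchosen options.)
        q *= (1.0 - decay)

    return choices, q
```

In a model of this form, heavier weighting of negative feedback corresponds to alpha_neg exceeding alpha_pos, and poorer retention of learned values corresponds to a larger decay parameter; this is the sense in which such parameters can index individual differences that symptom severity might moderate.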