eScholarship
Open Access Publications from the University of California

UC Berkeley

UC Berkeley Electronic Theses and Dissertations

Digital Twins as Testbeds for Iterative Simulated Neutronics Feedback Controller Development

(2024)

Before a new nuclear reactor design can be constructed and operated, its safety must be demonstrated using models that are validated with integral effects test (IET) data. However, because scaled integral effects tests are electrically heated, they do not exhibit nuclear reactor feedback phenomena. To replicate the nuclear transient response in electrically heated IETs, we require simulated neutronics feedback (SNF) controllers. Such SNF controllers can then provide SNF capabilities for IET facilities such as the Compact Integral Effects Test (CIET) at the University of California, Berkeley (UC Berkeley). However, developing SNF controllers for IET facilities is non-trivial. To expedite development, we present the use of Digital Twins as testbeds for iterative SNF controller development. In particular, we use a Digital Twin of the Heater within CIET as a testbed for SNF controller development. This Digital Twin with SNF capability is run as an OPC-UA server and client written almost entirely in Rust using Free and Open Source Software (FOSS) code. We then validate the Digital Twin against experimental data from the literature. We also verify the transfer function simulation and the Proportional, Integral and Derivative (PID) controllers written in Rust using Scilab. Moreover, we demonstrate the use of data-driven surrogate models (transfer functions) to construct SNF controllers, in contrast to the traditional Point Reactor Kinetics Equations (PRKE) models, with the hope that these surrogate models can account for the effect of spatially dependent neutron flux on reactor feedback. To construct the first surrogate models in this work, we use transient data from a representative arbitrary Fluoride-Salt-Cooled High-Temperature Reactor (FHR) model constructed using OpenMC and GeN-Foam. Using the Digital Twin as a testbed, two design iterations of the SNF controller were developed using the data-driven surrogate model.
Compared with the development time that physical experiments would require, using the digital twin testbed for SNF controller development resulted in significant time savings. We hope that the approaches used in this dissertation can expedite testing and reduce expenditure for licensing novel Gen IV nuclear reactor designs.
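As a rough illustration of the feedback-control machinery this abstract mentions, the sketch below runs a textbook discrete-time PID controller against a first-order thermal plant. Python is used here for brevity rather than the dissertation's Rust, and the gains, time constant, and setpoint are arbitrary illustrative values, not parameters of the CIET Heater Digital Twin.

```python
# Textbook discrete-time PID controller (illustrative sketch only; all
# numerical values below are arbitrary, not taken from the dissertation).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # No derivative term on the very first sample.
        derivative = 0.0 if self.prev_error is None else (
            (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(steps=2000, dt=0.01, tau=1.0, setpoint=80.0):
    """Drive a first-order plant dT/dt = (-T + u) / tau toward the setpoint."""
    pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=dt)
    temp = 20.0
    for _ in range(steps):
        u = pid.update(setpoint, temp)
        temp += dt * (-temp + u) / tau
    return temp


print(round(simulate(), 1))
```

The integral term is what drives the steady-state error to zero here; a proportional-only controller on this plant would settle below the setpoint.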

Adaptive cellular strategies to improve commodity chemical production in Escherichia coli

(2023)

Biology holds an amazing propensity for chemistry. Living systems continuously carry out a vast array of chemical reactions within a complex network, known as metabolism, to sustain growth and improve evolutionary fitness. Metabolic engineers seek to utilize this aptitude for chemistry by creating biological catalysts for chemical production from renewable feedstocks. Biological catalysts offer an eco-friendly and, in some cases, superior alternative to petrochemical-based chemical production. In this work, we examine biological catalysts designed for the production of two C4 commodity chemicals, n-butanol and (R)-1,3-butanediol. These catalysts are strains of Escherichia coli containing constructed biosynthetic pathways. Leveraging an anaerobic growth selection, adaptive laboratory evolution identified several mutant strains with improved phenotypes. We set out to understand the mechanism by which these adaptive mutations confer improved production. Through detailed analysis of n-butanol fermentation, we discovered that the parent strain for our evolution was unable to support sustained anaerobic growth via n-butanol fermentation, potentially due to metabolic burden associated with overexpression of the pathway enzymes. Further experimentation suggested that the mutations arose as a strategy to relieve metabolic burden through decreased expression of our biosynthetic pathway. The results of this study highlight the importance of balanced pathway expression when designing biological catalysts. We then shifted our focus to design a microbial catalyst for production of polyhydroxyalkanoates (PHAs) containing unsaturated monomers. Sites of unsaturation provide functional handles for downstream chemical modification. We devised a metabolic strategy to convert two non-canonical amino acids with unsaturated functional groups to their respective 2-hydroxy acids and activate these acids as coenzyme A thioesters for polymerization within E. coli.
We identified and tested candidate enzymes for the appropriate activities in vitro and successfully showed that our identified enzymes can form a functional biosynthetic pathway. These experiments lay the groundwork for creation of a microbial catalyst capable of generating PHAs with unsaturated functional groups using glucose as a carbon source.

Understanding implicit sensorimotor adaptation as a process of kinesthetic re-alignment

(2023)

From elementary skills such as walking and talking, to complex ones such as playing tennis or music, humans are remarkably adept at learning to use their bodies in a coordinated manner. However, these abilities can be fragile: Many neurological conditions can compromise motor performance and learning. Understanding how the brain produces skilled movement will not only elucidate principles of learning but can also optimize rehabilitation interventions for individuals with movement disorders.

Motor learning is not a unitary operation but relies on multiple learning processes (Kim, Avraham, and Ivry 2020; Krakauer et al. 2019). For example, reinforcement learning helps us select rewarding actions (Dayan and Daw 2008), use-dependent learning helps us rapidly execute well-practiced actions (Verstynen and Sabes 2011; Classen et al. 1998), and sensorimotor adaptation keeps our movements well-calibrated in response to changes in the body and environment (Helmholtz 1924; Stratton 1896). In addition, recent work has highlighted how these implicit processes may be complemented by explicit processes (Codol, Holland, and Galea 2018; Collins and Frank 2012; Marinovic et al. 2017; Jonathan S. Tsay, Kim, Saxena, et al. 2022). For example, when asked to move in a novel environment in which the visual feedback is altered (e.g., prism glasses), participants may adopt a re-aiming strategy to nullify the perturbation. Unlike implicit forms of learning, explicit processes allow for rapid changes in performance (Kim, Avraham, and Ivry 2020; Krakauer et al. 2019; Inoue et al. 2015; Smith, Ghazizadeh, and Shadmehr 2006; Schween et al. 2020; Daniel M. Wolpert and Flanagan 2016; Facchin et al. 2019). The joint operation of multiple learning processes has made it difficult to characterize features inherent to each process. To address this, new analytical methods have been recently developed to isolate individual components (Brudner et al. 2016; Jonathan S. Tsay, Haith, Ivry, et al. 2022; Marinovic et al. 2017; Yang, Cowan, and Haith 2021), providing new opportunities to revisit classic problems in sensorimotor learning: What is the critical signal driving learning for different processes? Are there limits to plasticity, and does this vary between processes? How does the quality of sensory feedback impact different components of motor learning?

I exploit these methods in this dissertation to revisit the mechanisms at play in sensorimotor adaptation. Implicit adaptation has been framed as an iterative process designed to minimize sensory prediction error, the mismatch between a desired and experienced sensory outcome (Donchin, Francis, and Shadmehr 2003; R. Morehead and Smith 2017; Albert et al. 2022, 2021; Herzfeld et al. 2014; Kim et al. 2018; Thoroughman and Shadmehr 2000). Traditionally, the focus has been on how visual sensory prediction errors are used to modify a visuomotor map, ensuring that future movements are more accurate. According to this visuo-centric view, the upper bound of implicit adaptation represents a point of equilibrium, one at which the trial-by-trial change in hand position in response to a visual error is counterbalanced by a trial-by-trial decay (‘forgetting’) of this modified visuomotor map back to its baseline, default state.
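The equilibrium described above (trial-by-trial learning from error counterbalanced by trial-by-trial decay) is commonly formalized as a single-rate state-space model, as in the work of Smith, Ghazizadeh, and Shadmehr (2006) cited here. A minimal sketch, with arbitrary illustrative values for the retention factor and learning rate:

```python
# Standard single-rate state-space model of adaptation:
#   x[n+1] = a * x[n] + b * e[n],  e[n] = r - x[n]
# where x is the adapted hand angle, r the imposed rotation, a the retention
# factor ("forgetting"), and b the learning rate. All values are illustrative.

def simulate_adaptation(rotation=30.0, a=0.95, b=0.1, trials=300):
    x = 0.0
    history = []
    for _ in range(trials):
        error = rotation - x          # residual error on this trial
        x = a * x + b * error         # retention plus error-driven update
        history.append(x)
    return history


final = simulate_adaptation()[-1]
# Analytic asymptote where learning and decay balance: x* = b*r / (1 - a + b)
asymptote = 0.1 * 30.0 / (1 - 0.95 + 0.1)
print(round(final, 2), round(asymptote, 2))
```

Note that the asymptote (20 degrees here) falls short of the full 30-degree rotation: this is exactly the equilibrium interpretation of the upper bound of implicit adaptation described in the text.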

Despite its appeal, the visuo-centric view is an oversimplification. The brain exploits information from all of our senses, not only from vision (Ernst and Banks 2002; Van Beers, Sittig, and Gon 1999; Chancel, Ehrsson, and Ma 2022; Sober and Sabes 2005, 2003). This insight, paired with the empirical data outlined in this dissertation, has inspired a new, ‘kinesthetic re-alignment’ model of implicit adaptation (Jonathan S. Tsay, Kim, Haith, et al. 2022). By this view, implicit adaptation is an iterative process designed to minimize a ‘kinesthetic’ sensory prediction error, the misalignment between the perceived heading angle and the movement goal. The perceived hand position is a composite signal, reflecting the seen hand position (via visual afferents), the felt hand position (via peripheral proprioceptive afferents based on mechanoreceptors from muscles, joints, and skin), the predicted hand position (via the efferent motor command), and the movement goal (via a prior belief that the movement will be successful). Implicit adaptation will cease when the kinesthetic error is nullified, that is, when the perceived hand position and the movement goal are re-aligned. (Footnote: Whereas we had used ‘proprioception’ in our published work featured in this dissertation, we adopt the term ‘kinesthesia’ here in the Abstract, given that the perceived hand is a composite kinesthetic representation that encompasses both central beliefs and peripheral senses (Proske and Gandevia 2012).)

In Chapter 1, I tested a core assumption held by studies of implicit sensorimotor adaptation, namely that the perceived hand position is at the target (subject to random noise). Specifically, we used a novel visuomotor task that isolated implicit adaptation and probed kinesthesia in a fine-grained manner (i.e., the participant’s perceived heading position on each trial). Whereas participants exhibited robust implicit adaptation (i.e., changes in hand position away from the target in the opposite direction of the visual error), their perceived hand position remained near the target. However, to our surprise, the position reports exhibited a non-monotonic function over the course of adaptation: The participants initially perceived their hand to be biased towards the perturbed visual feedback, mis-aligned with the movement goal. Over time, the reports shifted away from the perturbed visual feedback, re-aligning back to the target. Together, these data not only revealed unappreciated kinesthetic changes that arise during learning but also seeded the idea for a kinesthetic re-alignment perspective of implicit adaptation.

In Chapter 2, I evaluate whether there is a relationship between kinesthetic perception and implicit adaptation, one that would not be predicted by visuo-centric models. By using two visuomotor tasks that isolated implicit adaptation and probed kinesthesia, we discovered that participants with greater kinesthetic biases towards the perturbed visual feedback and greater baseline kinesthetic uncertainty exhibited greater implicit adaptation. As such, these data provided evidence for new, unexplained kinesthetic constraints on the extent of implicit adaptation, supporting the notion that kinesthetic perception plays a critical role in implicit adaptation. The empirical results from the previous chapters led us to develop a new, kinesthetic re-alignment model of implicit adaptation. I will formalize this model in Chapter 3, demonstrating how it readily explains the non-monotonic time course of perceived hand position during implicit adaptation (Chapter 1) and the relationship of kinesthetic biases and uncertainty with the extent of implicit adaptation (Chapter 2). Moreover, I will demonstrate how the kinesthetic re-alignment model is also able to capture a myriad of observations not accounted for by a visuo-centric view of adaptation. Taken together, the kinesthetic re-alignment model brings us one step closer to a more holistic view of motor adaptation, a perspective that formalizes how our high-level beliefs and low-level senses inform where we are positioned and how we are to adapt.

Photorealistic Reconstruction from First Principles

(2023)

In computational imaging, inverse problems describe the general process of turning measurements into images using algorithms: images from sound waves in sonar, spin orientations in magnetic resonance imaging, or X-ray absorption in computed tomography. Today, the two dominant algorithmic approaches for solving inverse problems are compressed sensing and deep learning. Compressed sensing leverages convex optimization and comes with strong theoretical guarantees of correct reconstruction, but requires linear measurements and substantial processor memory, both of which limit its applicability to many imaging modalities. In contrast, deep learning methods leverage nonconvex optimization and neural networks, allowing them to use nonlinear measurements and limited memory. However, they can be unreliable, and it is difficult to inspect, analyze, and predict when they will produce correct reconstructions.
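As a concrete illustration of the convex-optimization approach described above, the sketch below recovers a sparse signal from underdetermined linear measurements using ISTA (iterative soft-thresholding), a standard algorithm for the lasso problem that underlies much of compressed sensing. The matrix, sparsity pattern, step size, and regularization weight are arbitrary toy choices, unrelated to the dissertation's imaging problems.

```python
# Toy compressed-sensing recovery: solve min_x 0.5*||y - A x||^2 + lam*||x||_1
# via ISTA, where y = A x_true for a sparse x_true. Pure Python, illustrative.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def soft_threshold(v, t):
    # Proximal operator of the L1 norm, applied elementwise.
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, y, lam=0.01, step=0.1, iters=5000):
    n = len(A[0])
    x = [0.0] * n
    At = list(map(list, zip(*A)))            # transpose of A
    for _ in range(iters):
        residual = [yi - ri for yi, ri in zip(y, matvec(A, x))]
        grad = matvec(At, residual)          # A^T (y - A x)
        x = soft_threshold(
            [xi + step * gi for xi, gi in zip(x, grad)], step * lam)
    return x

# Three measurements of a five-dimensional signal with one nonzero entry.
A = [[1.0, 0.2, 0.3, 0.1, 0.4],
     [0.2, 1.0, 0.1, 0.3, 0.2],
     [0.3, 0.1, 1.0, 0.2, 0.1]]
x_true = [0.0, 0.0, 2.0, 0.0, 0.0]
y = matvec(A, x_true)
x_hat = ista(A, y)
print([round(v, 2) for v in x_hat])
```

Even though the system is underdetermined (three equations, five unknowns), the L1 penalty picks out the sparse solution, up to a small shrinkage bias controlled by the regularization weight.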

In this dissertation, we focus on an inverse problem central to computer vision and graphics: given calibrated photographs of a scene, recover the optical density and view-dependent color of every point in the scene. For this problem, we take steps to bridge the best aspects of compressed sensing and deep learning: (i) combining an explicit, non-neural scene representation with optimization through a nonlinear forward model, (ii) reducing memory requirements through a compressed representation that retains aspects of interpretability, and extends to dynamic scenes, and (iii) presenting a preliminary convergence analysis that suggests faithful reconstruction under our modeling.

Solar Flux: Remaking landscapes, labor, and environmental politics in California

(2023)

From 2015 to 2020, massive booms in solar power and high speed rail reconstructed landscapes across California's San Joaquin Valley. Globally rare alliances of construction unions with environmental justice and immigrant movements won breakthroughs in regional politics. How did construction workers reshape their power in response to the booms, and what formed their politics in this extraordinary direction? This dissertation argues that construction worker power hinged on unions' capacity to reproduce the workforce for urgent landscape transformations, while labor alliances were driven by shared political exclusion and common household struggles over social reproduction of the region's working class, Mexican-American majority. Drawing on five years of ethnographic and archival research, I compare the Fresno-Madera region, where these construction labor-immigrant-environmental justice alliances prevailed at crucial moments, to the Bakersfield region just to the south, where limited household ties, unstable overall employment, and conflicts over oil fractured potential coalitions. In conversation with environmental justice, Marxist feminist, and Marxist geography approaches, including Gramscian interpretations of Clyde Woods, Ruth Wilson Gilmore, and Matthew Huber, I develop a theory of environmental leverage, explaining how landscape transformation and the labor involved can challenge or entrench hegemony. The breakthroughs made by San Joaquin alliances in winning municipal office, jobsite power, and infrastructure redistribution help show how working and oppressed people can build pressing climate transitions by their own blueprints.

Inferring species distributions from semi-structured biodiversity observations

(2023)

Estimating the spatiotemporal distributions of species and understanding how variation in those distributions is explained by the environment are central goals in ecology. Observations of animals generated by participatory science (or "citizen science") are an increasingly important resource for ecologists interested in estimating species distributions because they are high-volume and high-resolution. However, statistical inference with these data is more challenging than inference with data collected under standardized sampling, because participatory science observations contain substantial unmeasured variation in sampling effort and observer behavior. Ecologists need tools and methodological guidance that support the estimation of computationally efficient, flexible statistical models useful for robust inference with participatory science data. In this dissertation, I advance the field of species distribution modeling with participatory science data via contributions across three chapters. First, I present a new software tool, nimbleEcology, that supports the efficient and flexible estimation of hierarchical ecological models, alongside a brief review of the use of such models in ecology and three worked examples of model estimation. Second, I undertake a comparison of two modeling approaches useful for estimating relative abundance from participatory science data, making practical recommendations for model selection. Finally, I apply these methodological developments to data obtained from an important participatory science dataset, eBird, to investigate how common birds respond to drought in California's Central Valley ecoregion. This project demonstrates the application of modeling principles to an important ecological case study and produces new evidence to characterize critical dimensions of birds' drought responses.

Chemistry and Physics of Graphite in Fluoride Salt Reactors

(2023)

Graphite is a ubiquitous material in nuclear engineering. Within Generation IV designs, graphite serves as a reflector or fuel element material in Fluoride-Salt-Cooled High-Temperature Reactors (FHRs), Molten Salt Reactors (MSRs), and High-Temperature Gas Reactors (HTGRs). Graphite's versatility in nuclear systems stems from its unique combination of mechanical, thermal, chemical, and neutronic properties. These properties are influenced by operational parameters like temperature, radiation, and chemical environment. In FHRs and MSRs, graphite can interact with the salt through multiple mechanisms, including salt-infiltration in graphite pores, chemical reactions with salt constituents, and tribo-chemical wear. The goal of this Ph.D. dissertation is to investigate mechanisms of interaction of fluoride salts with graphite in FHRs and assess their impact on salt reactor engineering. Chemical interactions between the salt and graphite are studied by exposing a graphite sample to 2LiF-BeF2 (FLiBe) salt and to the cover gas above the salt at 700°C for 240 hours. Chemical and microstructural characterization of the samples highlights formation of two types of C-F bonds upon exposure, with different degrees and mechanisms of fluorination upon salt and gas exposure. Infiltration of salt in graphite pores is examined by reviewing literature on infiltration and its effects and by studying salt wetting on graphite. Contact angles for salt on graphite are measured under variable conditions of graphite surface finish and salt chemistry, and used to predict salt infiltration. Wear and friction of graphite-graphite contacts at conditions relevant to pebble-bed FHR operation are studied through tribology experiments in argon and in FLiBe. Characterization via SEM/EDS, polarized light microscopy, and Raman spectroscopy is employed to seek a mechanistic understanding.
Different mechanisms of lubrication are observed in the tests: in argon, graphite is observed to self-lubricate by forming a tribo-film that remains stable at high temperature; in FLiBe, boundary lubrication is observed and postulated to be associated with C-F bond formation at graphite crystallite edges.

The interactions between graphite and tritium are studied. Tritium production rates in FHRs are quantified to be three orders of magnitude larger than in light water reactors. A literature review is performed to investigate the thermodynamics and kinetics of the hydrogen-graphite interaction; the findings are employed to develop an improved model for hydrogen uptake and transport in graphite, which is used to extract tritium transport parameters from experimental studies.

The experiments conducted in this dissertation indicate that the presence of the salt impacts graphite engineering performance in the reactor and after discharge in multiple ways, from providing increased lubrication to impacting graphite surface chemistry. As a further development, exploration of other areas where the salt could have an effect, including the evolution of oxidation behavior and graphite reactive sites upon neutron irradiation in the presence of salt exposure, is recommended.

Statistical Machine Learning for Reliable Hypothesis Generation in Biomedical Problems

(2023)

Given the ever-growing volume and variety of biomedical data, principled analyses of these rich datasets offer an exciting opportunity to accelerate the scientific discovery process. Here, we advance our goal of extracting reliable scientific hypotheses from such data through (I) the in-context development of interpretable statistical machine learning methods, (II) the demonstration of responsible data science in practice, and (III) the dissemination of open-source software and data for reliable data science.

Throughout this dissertation, we build heavily upon the Predictability, Computability, and Stability (PCS) framework and documentation for veridical (trustworthy) data science (Yu and Kumbier, 2020) to improve the reliability of our scientific conclusions. This framework advocates for the use of predictability as a reality check, computability as an important consideration in algorithmic design and data collection, and stability as a minimum requirement for reproducibility and interpretability in knowledge-seeking and decision-making. Moreover, it calls on the need for transparent documentation of decisions made throughout the data science pipeline.

In Part I, we highlight two statistical machine learning methods, developed within the context of grounded biomedical problems and guided by the PCS framework. First, in Chapter 2, we investigate genetic and epistatic drivers of cardiac hypertrophy in the hope of obtaining a more complete understanding of the disease architecture. To this end, we develop a data-driven recommendation system, named the low-signal signed iterative random forest (lo-siRF), to identify candidate genes and gene-gene interactions that are both predictive and stable across various model and data perturbations. We then phenotypically validate these genes and gene-gene interactions via gene-silencing experiments and investigate potential mechanistic explanations for the demonstrated epistases. This leads to a hypothesis in which the identified genes interact through mediating the variable binding of transcription factors that are essential for cardiac contractile function and metabolism. Second, the practical utility of random forests and interpretability tools, not only in the search for epistasis but in a wide range of scientific problems, motivates the need for reliable tree-based feature importance measures. In Chapter 3, we demonstrate that the mean decrease in impurity (MDI), arguably the most popular random forest feature importance measure, suffers from well-known biases, including biases against highly correlated and low-entropy features. To overcome these drawbacks, we develop a novel feature importance framework, MDI+, which leverages a connection between MDI and the R-squared value from linear regression. We show that MDI+ improves the reliability and stability of feature importance rankings across an extensive range of data-inspired simulations and two real-data case studies on drug response prediction and breast cancer subtype prediction.
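For readers unfamiliar with MDI, the quantity a random forest sums to compute it is the weighted impurity decrease achieved by each split. The sketch below computes that classical ingredient on toy labels; it illustrates what MDI aggregates, not the MDI+ method itself (which additionally connects these quantities to R-squared values from linear regression).

```python
# Weighted Gini impurity decrease of a single split, the building block of
# MDI (mean decrease in impurity). Toy binary labels; purely illustrative.

def gini(labels):
    """Gini impurity of a set of 0/1 labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n               # fraction of class 1
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

def impurity_decrease(parent, left, right):
    """Parent impurity minus the child-size-weighted child impurities."""
    n = len(parent)
    return (gini(parent)
            - (len(left) / n) * gini(left)
            - (len(right) / n) * gini(right))

parent = [0, 0, 0, 1, 1, 1]
# A perfectly separating split removes all of the parent's impurity ...
print(round(impurity_decrease(parent, [0, 0, 0], [1, 1, 1]), 10))  # → 0.5
# ... while a split that leaves the class proportions unchanged removes none.
print(round(impurity_decrease(parent, [0, 1], [0, 0, 1, 1]), 10))  # → 0.0
```

A forest's MDI score for a feature is the average of such decreases over all splits on that feature, which is precisely why features offering many candidate split points can accumulate inflated scores.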

In Part II, we further expand on the theme of reliable data science and demonstrate it in practice through two collaborative projects in cancer omics. In Chapters 4 and 5, we incorporate principles from the PCS framework while working in close collaboration with scientists and clinicians to identify stable and predictive biomarkers in drug response prediction and the early detection of pancreatic cancer, respectively.

Finally, in Part III, we introduce open-source software and data to promote and facilitate the broader adoption of reliable, transparent data science for statisticians and substantive researchers. In particular, we highlight three tools that support our goals: (1) simChef, an R package to simplify the creation of tidy, high-quality simulation studies (Chapter 6); (2) vdocs, an interactive virtual lab notebook in R to seamlessly implement, document, and justify human judgment calls throughout the data science pipeline in accordance with the PCS framework (Chapter 7); and (3) a COVID-19 data repository that aided community-wide data science efforts during the height of the pandemic (Chapter 8).

Politics of Belonging: Families and Communities Building Power to Transform Schools

(2023)

The past decade of California’s education policy landscape has been shaped by two significant events and the interaction between them. First, the Local Control Funding Formula (LCFF), signed into law in 2013, shifted the way that the state distributes money to local school districts and implemented mandatory stakeholder engagement in allocating the funds. Second, the COVID-19 pandemic was a shock to the public school system, and though there is ample research underscoring how difficult it is to change institutions, this crisis may have created the necessary conditions for enacting consequential change. The LCFF created the potential to restructure relationships between multiple stakeholders—district leadership, school site-level administrators, families, and communities—and we can examine how groups navigated a landscape of nascent education finance reform and implemented a new process of state-mandated stakeholder engagement. The COVID-19 crisis represented a much deeper and more destabilizing shock to the relationships between families and schools. During the initial onset of emergency stay-at-home orders, the global pandemic blurred the division between home and school—living rooms became classrooms with many parents and caregivers acting as de facto teaching assistants and school coordinators. These two events offer the opportunity to study ways that the relationships between families and educators may have changed and to evaluate the extent to which these changes have allowed families to influence local education policy discussions and share in decision-making.

This dissertation project consists of three substantive chapters and uses qualitative methods to examine how, if at all, a process of state-mandated stakeholder engagement in district-wide decision-making builds power for families to influence local education policy. Additionally, I illustrate how engaging in this process impacts both the micro-level experience of the individual and the organizational level of the district. Drawing on theories that examine institutional stability and change, collective action, and the role of families in schools, the overarching questions guiding my research are the following: (a) In what ways, if any, have school finance and accountability reform changed the balance of power between families and district administrators?; (b) In what ways, if any, has state-mandated stakeholder engagement expanded participation in decision-making and the process by which decisions are made?; and (c) How has participating in mandatory stakeholder engagement during various crises and shocks shifted the role and influence of families and communities in district-wide planning and decision-making?

In Chapter 1, I conduct a research synthesis that analyzes the literature on family and community engagement, with a focus on the policies and practices that empower diverse stakeholders to participate in discussions and decision-making related to education policy. The synthesis is guided by a framework used to map school-community literature along two dimensions: social stance, and power and control. These dimensions help identify the extent to which families claim ownership of physical or symbolic spaces of engagement, author and control the agenda for engagement, and co-construct or shift the norms and beliefs of the education system. Based on a review of the literature, I conclude that conflict, not collaboration, is the status quo and that rather than mitigating conflict, family engagement may create structures and support venues for open negotiation of power. Additionally, although families that own engagement spaces and author agendas build political power to challenge status quo policies, there is minimal evidence to suggest that they shift the norms and values of the existing education system.

The case study in Chapter 2 is a micro-level analysis of the parents who participated in a district-wide advisory committee; the chapter presents the motivations that drove parents to act collectively as they sought to impact the planning process. Drawing from interview data and parents’ reports, I investigate how parents conceptualized and framed what it means to build power to influence change and to engage in the process and how this framing contributed to the collective identity and shared understandings of the parent members of the advisory committee. Based on participant observation and semi-structured interviews conducted in a diverse urban school district in California, this study shows how families engage in local-level decision-making and build power to influence the policies and institutions that structure their lives; the findings speak to the limitations and affordances of state-mandated stakeholder engagement.

Finally, in Chapter 3, I conduct a field-level analysis of a diverse urban school district in California to explore the implementation of school finance and accountability reform and the influence of democratic participation in expanding inclusion within policy discussions, and to identify potential shifts in the balance of power between stakeholder groups seeking to impact district-wide planning. Based on participant observations, semi-structured interviews, and document analysis, my findings describe how school reform created and protected a relatively vague structure and process of mandated stakeholder engagement. Because this engagement was codified into law, the community could push and exert force when conditions were ripe. Therefore, while the law did not guarantee community power, it codified a process and created potential for collective action to push back against the status quo.

Understanding the Relationship between Correctional Officer Job Demands, Job Resources, & Decision-Making: Embracing Public Management Perspectives to Improve the Administration of Justice

(2023)

This dissertation includes four essays, each of which speaks to the importance of embracing a public management perspective in understanding the ways in which correctional officers play a critical role in the administration of justice.

Chapter 1 includes a systematic review of the literature on factors associated with violence in carceral settings, calling for greater inclusion of public management perspectives. While there are several prominent theories on what is associated with, or causes, violence in carceral settings, much of this work is situated within importation theory and has been driven by analyses of limited sets of data, in specific geographic contexts, and with mainly individual-level factors. This paper focuses especially on the lack of management perspectives in the study of carceral violence. Through scraping Google Scholar results, I find that much of the literature relies on individual-level data alone, which cannot fully account for the context in which individuals are incarcerated; draws primarily on studies from the United States; is largely published in criminal justice journals; and seldom controls for staff-specific factors (i.e., it disregards many crucial factors related to institutional management). Implications for the future study of carceral violence, the limitations of the current body of evidence, and our ability to develop effective solutions to carceral violence are discussed.

Chapter 2 includes co-authored work analyzing survey data from correctional officers, focusing on the coping mechanisms correctional officers employ to manage work-related stress and on how those coping mechanisms affect workplace outcomes. To address these questions, we utilize original survey data on California correctional officers. We draw on the Stress Process Paradigm to model the relationship between exposure to violence and mental health, the impact of occupational stress on the development of coping mechanisms, and whether differential utilization of coping mechanisms impacts officers' levels of cynicism and desire to leave corrections. Our findings suggest that emotion-focused coping (e.g., having someone to talk to) is associated with lower intentions to leave correctional employment, while the opposite is true for avoidant coping (e.g., alcohol abuse). These insights shed light on the problem of officer turnover and retention and provide potential direction to policymakers and practitioners seeking to create an effective, healthy workforce.

Chapter 3 includes co-authored work focusing on the role of hierarchy in correctional officer decision-making. Hierarchy exists within bureaucratic agencies for several reasons, including to foster employee accountability. However, hierarchy brings rigidity, and in times of emergency this can stymie an effective, expedient organizational response. Existing literature has examined the implications of hierarchy in emergency management, but limited work exists on hierarchy's impacts on frontline worker decision-making during crises. In this paper, we contribute to this literature through an exploratory examination of the role of hierarchy in officer decision-making in a state prison system during the COVID-19 pandemic. As the bureaucrats with the most direct interaction with incarcerated individuals, officers make decisions with profound consequences for the well-being of incarcerated people. Drawing on 50 interviews conducted with prison staff and incarcerated people, we utilize an expanded definition of hierarchy, one that reflects the ways in which power is granted and imposed both formally and informally. We find that correctional hierarchy is pervasive and complex, influencing officer decision-making by varying officers' perceived level of autonomy, despite the reality that, as street-level bureaucrats, they are themselves policymakers. Our results suggest that, in contexts where the imposition of hierarchy is reduced, officers' autonomy may be bolstered, which may improve their decision-making, particularly in ways that leave incarcerated individuals under their care better off.

Finally, Chapter 4, also co-authored work, focuses on burnout among officers. Though correlational evidence links predictors of burnout to service delivery, limited causal evidence exists on how to improve officer well-being and how doing so impacts interactions with incarcerated individuals. In collaboration with a mid-sized U.S. Sheriff Department, we report results from a large-scale field experiment aimed at reducing burnout (n = 712). In an eight-week intervention, the treatment group was nudged to anonymously share experiences with others on a common platform (peer support), whereas the control group was nudged to reflect on their experiences individually on a solo-access platform. Our findings suggest that peer support not only improved well-being and belonging amongst correctional officers but also significantly improved their perceptions of incarcerated individuals. We fail to find significant differences in turnover or incident involvement, the latter measured as both direct and indirect involvement in incidents within the jail or detention center. Thus, this study contributes to a burgeoning literature on how investments in public servants can causally improve well-being and perceptions of those they serve.