UC Irvine Electronic Theses and Dissertations
eScholarship - Open Access Publications from the University of California

Molecular Simulation Guided Protein Engineering and Drug Discovery

(2022)

Targeted protein-ligand binding interactions drive the metabolic processes essential for life and biochemical manufacturing. Binding interactions between enzymes and small molecules are mediated by the sum of weak, non-covalent interactions, including hydrophobic packing, steric effects, electrostatics, and hydrogen bonding. Characterization of these interactions is limited by the difficulty of obtaining high-resolution structural data on the active binding poses. Furthermore, static models from crystallography cannot capture the dynamic conformational changes that occur during the transition from the unbound to the bound protein state. By resolving how these transitory contacts affect protein function, we accelerate the design of enzymes with target activities and the discovery of small-molecule inhibitors.

We investigate protein-ligand interactions from two directions: 1) from the perspective of protein engineering, what mutations should be made in a protein's amino acid sequence to enhance its binding affinity toward a target ligand? 2) From the field of drug design, how can we accurately predict the absolute binding free energies of small molecules? This work demonstrates how computational methods utilizing physical modeling can be applied in combination with high-throughput, directed-evolution experiments to advance biomolecular design.

Molecular dynamics (MD) simulations account for the effects of atomic flexibility and explicit solvent that are key to biomolecular interactions. In Chapter 1, we review free energy calculations in drug development based on the Molecular Mechanics Poisson-Boltzmann Surface Area (MM-PBSA), Linear Interaction Energy (LIE), and alchemical simulation approaches. In Chapter 2, we perform absolute alchemical simulations with inhibitors targeting the Urokinase Plasminogen Activator (UPA) system and analyze how simulation parameters such as counter-ion concentration and alternative binding-pocket protonation states affect the binding free energy predictions. We improve predictive accuracy by adapting the protocol to use the continuum PBSA solvent model with charge-polarization corrections through scaling of the solute dielectric.
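For orientation, the MM-PBSA end-point estimate reviewed in Chapter 1 follows a standard textbook decomposition (this is the generic form, not a formula specific to this work):

\[
\Delta G_{\text{bind}} \approx \langle \Delta E_{\text{MM}} \rangle + \langle \Delta G_{\text{PB}} \rangle + \langle \Delta G_{\text{SA}} \rangle - T\Delta S
\]

Here \(\Delta E_{\text{MM}}\) is the gas-phase molecular-mechanics energy, \(\Delta G_{\text{PB}}\) the polar solvation free energy from the Poisson-Boltzmann equation, \(\Delta G_{\text{SA}}\) the nonpolar term estimated from solvent-accessible surface area, and \(T\Delta S\) the solute entropy contribution; the angle brackets denote averages over MD snapshots. The solute-dielectric scaling described above enters through the \(\Delta G_{\text{PB}}\) term.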

In Chapter 3, we describe current approaches to engineering proteins for altered redox cofactor specificity, which has industrial value for the specific delivery of electron energy and the reduction of feedstock costs in biomanufacturing. In Chapter 4, we integrate molecular modeling with site-saturation mutagenesis to efficiently navigate protein sequence space and enable Escherichia coli glyceraldehyde 3-phosphate dehydrogenase (Ec gapA) to utilize the artificial redox cofactor nicotinamide mononucleotide (NMN+). Lastly, in Chapter 5, we investigate how mutations fine-tune oxygenase conformational dynamics to modify substrate specificity and turnover.

Metabolic pathway engineering with enzymes specific for NMN/H provides direct control over electron flow in living organisms. Application of our developed molecular modeling tools will improve the accuracy and speed of MD simulations, facilitating routine usage to reduce the costs required to construct and screen protein variants, expedite identification of potential pharmaceuticals, and allow study of dynamic biomolecular interactions that are inaccessible through experiment.

Diffusion to Densities: Using Diffusion-Weighted Imaging to Study Gray Matter Microstructure.

(2022)

Hamsanandini Radhakrishnan, Doctor of Philosophy, University of California, Irvine, 2022. Committee Chair: Dr. Craig Stark.

The brain goes through a large set of structural changes at the onset of aging, resulting in sometimes devastating cognitive and behavioral consequences. Targeting these changes at an early stage is key to protecting against later cognitive decline or even pathology. However, studying tissue microstructure in the brain non-invasively is not trivial, especially in humans. Most non-invasive metrics derived from neuroimaging can detect only large-scale changes like gross atrophy or cortical thinning, which are usually observable only when it is too late to intervene. Diffusion imaging, popularized for studying white matter microstructure, has recently advanced to the stage that it might be sensitive to gray matter cytoarchitectural properties as well. However, these diffusion metrics, especially the newer ones derived from biophysical modeling techniques like Neurite Orientation Dispersion and Density Imaging (NODDI), have not been adequately evaluated, especially in the context of cognitive aging.

In this thesis, with a series of both human and animal studies, we aim to fill some of these gaps in knowledge, focusing mainly on cognitive aging in the hippocampus. We first identify a novel aging biomarker in the dentate gyrus that may partially mediate aging-related cognitive decline. We then show that a combination of diffusion metrics is far better than traditional MRI metrics at predicting age- or cognition-associated properties. We also demonstrate that these metrics can be used as non-invasive probes to measure the efficacy of intervention studies designed to protect against aging-related structural changes. Finally, we establish a pipeline to estimate cellular properties non-invasively through the diffusion metrics alone. Together, these results not only shed light on the power of diffusion MRI to study gray matter changes in aging, but also present a framework to extend this method to other domains.

The Effects of Global Economic and Cultural Integration on the Environment

(2022)

Perhaps the greatest challenge facing the international community is the environmental problem. The three chapters of this dissertation investigate the global cultural and economic processes shaping ambient air pollution and the emission of various greenhouse gases. The first chapter investigates ambient air pollution, a form of emission associated with increased preventable deaths, mortality, and asthma complications. The second and third chapters investigate two types of greenhouse gas emissions. The second chapter analyzes nitrous oxide, an extremely potent greenhouse gas that also contributes to stratospheric ozone depletion. The third chapter analyzes carbon dioxide emissions. Each chapter investigates the extent to which various aspects of globalization help explain cross-national and longitudinal variation in these emissions. The first chapter analyzes the effect of global cultural processes across countries' positions in the stratified world economy. The second and third chapters analyze the effect of the world economy, with a particular focus on foreign direct investment. Using fixed-effects panel regression models in all three analyses, I find in the first chapter that the effect of world culture on ambient air pollution is contingent on a country's position in the world-system. In the second and third chapters, I find that foreign capital penetration is positively associated with nitrous oxide and carbon dioxide emissions, respectively. Each of these analyses engages longstanding scholarly dialogues regarding the effects of globalization on environmental change.
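To make the estimation strategy concrete, here is a minimal sketch of a two-way fixed-effects panel regression of the kind described above. The variable names, data file, and use of the linearmodels library are illustrative assumptions, not the dissertation's actual specification:

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# One row per country-year; PanelOLS expects an (entity, time) MultiIndex.
df = pd.read_csv("panel.csv").set_index(["country", "year"])

# Country and year fixed effects absorb time-invariant national traits and
# common global shocks; fdi_penetration is the hypothetical key regressor.
model = PanelOLS.from_formula(
    "emissions ~ 1 + fdi_penetration + gdp_pc + EntityEffects + TimeEffects",
    data=df,
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```

A positive, significant coefficient on the foreign-investment term would correspond to the foreign-capital-penetration finding reported above.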

Toward dark energy: DESI & LSST

(2022)

One of the most compelling problems in physics today is understanding the nature of dark energy, a mysterious component driving the current accelerated cosmic expansion. The Dark Energy Spectroscopic Instrument (DESI) and the Rubin Observatory Legacy Survey of Space and Time (LSST) are Stage-IV Department of Energy (DOE) projects aimed at better understanding the nature of dark energy and its influence on the evolution of the universe. While DESI is a spectroscopic survey, and LSST provides multi-band photometry, their observations are complementary and can be combined to improve measurements of cosmological parameters.

One area of synergy lies in estimating the redshifts of extragalactic sources. The overlap between the DESI and LSST footprints is approximately 4,000 square degrees. While DESI will have a lower density of galaxies per square degree, having spectra for these targets will help to improve constraints on measured redshift distributions. The first part of the thesis will focus on developing and testing a state-of-the-art photometric redshift estimation algorithm on simulated LSST data. The algorithm employs a hierarchical Bayesian framework to simultaneously incorporate photometric, spectroscopic, and clustering information to constrain redshift probability distributions of populations of galaxies, as well as provide redshift estimates of their individual members. Once data from LSST arrives, this method can be tested and refined through training on real LSST targets whose counterparts lie within the DESI footprint. This will ultimately improve redshift estimates for other targets in LSST by providing spectroscopic prior information, and will be especially useful in the context of tomographic weak gravitational lensing, which derives a significant amount of uncertainty from imprecise redshift estimates.
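As a toy illustration of the Bayesian logic described above (a photometric likelihood combined with a spectroscopically informed prior over a redshift grid), consider the following sketch. The three-band template, flat prior, and all numbers are made-up placeholders, not the thesis's hierarchical algorithm:

```python
import numpy as np

z_grid = np.linspace(0.0, 3.0, 301)

def template_flux(z):
    """Hypothetical model fluxes in 3 bands as a function of redshift."""
    return np.array([1.0 / (1.0 + z), 0.8 * np.exp(-z), 0.5 + 0.2 * z])

def posterior_pz(obs_flux, obs_err, prior):
    """p(z | photometry) on the grid: Gaussian likelihood x prior, normalized."""
    log_like = np.array([
        -0.5 * np.sum(((obs_flux - template_flux(z)) / obs_err) ** 2)
        for z in z_grid
    ])
    post = np.exp(log_like - log_like.max()) * prior
    return post / np.trapz(post, z_grid)

prior = np.ones_like(z_grid)            # stand-in for a DESI-informed prior
obs_flux = np.array([0.55, 0.35, 0.70])
obs_err = np.array([0.05, 0.05, 0.05])
pz = posterior_pz(obs_flux, obs_err, prior)
print("MAP redshift:", z_grid[np.argmax(pz)])
```

In the hierarchical setting, the prior itself is inferred jointly from the population (and from clustering information) rather than fixed, which is how spectroscopy of DESI counterparts can sharpen the redshift estimates of other LSST targets.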

Another crucial step in limiting weak lensing systematics involves understanding and mitigating image artifacts in the camera. This is important for identifying blended objects, as well as pinpointing biases in shear measurements. The third chapter of the thesis focuses on studying the systematics of the LSST instrument response by investigating anomalies in calibration sequences and developing testing software to analyze irregularities in bias frames. Making reliable, quantitative measurements that can be compared to requirements at the 1% level is necessary to avoid systematic biases in weak lensing shape measurements, which are often of the same order as the sensor distortions.

The second half of the thesis is devoted to developing software pipelines in preparation for the DESI survey. The fourth chapter discusses using a Gaussian mixture model (GMM) to characterize galaxy magnitudes and colors from DESI targeting data for the purpose of generating mock spectra. Results from the GMM are compared to density estimates for these features using extreme deconvolution, which simultaneously models the data and the noise to provide error-deconvolved distribution functions.
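The GMM step above can be sketched in a few lines; the column choices, component count, and use of scikit-learn are assumptions for illustration only:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for DESI targeting data: columns = (r magnitude, g-r color, r-z color)
X = rng.normal(loc=[20.0, 0.7, 0.4], scale=[1.0, 0.2, 0.2], size=(5000, 3))

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(X)

# Draw synthetic magnitude/color vectors to seed mock spectra
mock, _ = gmm.sample(1000)
print(mock[:3])
```

Extreme deconvolution generalizes this fit by modeling each point's measurement noise explicitly, which is why it serves as the error-deconvolved benchmark mentioned above.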

One of the final stages in the processing of mock spectra involves accounting for the noise contributions due to the atmosphere and the spectrograph response. The last chapter is devoted to reconfiguring a DESI software package that simulates this response to produce synthetic spectra for Lyman-alpha studies in the Extended Baryon Oscillation Survey (eBOSS). The original configuration is then used to validate the DESI sky model by comparing real sky brightnesses with simulated brightnesses generated under similar observing conditions.

Deep Learning Algorithms for Accelerating Fluid Simulations

(2022)

Computational fluid dynamics (CFD) is the de facto method for solving the Navier-Stokes equations, the set of partial differential equations that describe most laminar and turbulent flow problems. Solving this system of equations requires extensive computational resources; hence, significant progress in scaling CFD simulations has come with advances in high-performance computing. However, the CFD community has mainly focused on developing high-order accurate methods rather than designing algorithms that harness the full potential of the new hardware. Moreover, current CFD solvers do not effectively utilize heterogeneous systems, where graphics processing units (GPUs) accelerate multi-core central processing units. At the same time, deep learning (DL) algorithms, whose training and inference stages map well to GPUs, have revolutionized the fields of computer vision and natural language processing. In this dissertation, we explore and propose novel algorithms to improve the performance and productivity of CFD solvers using DL.

First, we present CFDNet, a new convolutional neural network-based framework that accelerates laminar and turbulent flow simulations. Early work on DL+CFD approaches proposed surrogates that predict the flow field without any guarantee of satisfying the physical laws. Instead, we design CFDNet as an accelerator that reaches the same convergence guarantees as traditional first-principles-based methods in fewer iterations. As a result, CFDNet achieves 1.9-7.4× speedups without compromising the quality of the physical solver's solution in both laminar and turbulent flow problems across different configurations (such as channel flow and flow around an airfoil). CFDNet is the first DL-based accelerator for fluid simulations and presents three advantages: (a) it can be used in tandem with other acceleration techniques, such as multigrid solvers and parallelization, (b) it is amenable to any time-marching scheme, and (c) it is a DL module that can be plugged into any existing physical solver.
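The following sketch shows the warmup-inference-refinement pattern that a CFDNet-style accelerator uses: the network only jumps the solver toward the solution, and the physical solver still iterates to its usual tolerance, preserving the convergence guarantee. The Jacobi solver and the `surrogate` stand-in are illustrative placeholders; the real CFDNet is a trained CNN coupled to the flow solver:

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi sweep for the 2-D Laplace equation with fixed boundaries."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    return v

def surrogate(u):
    """Placeholder for the trained CNN: returns a field closer to steady state.
    Faked here with extra Jacobi sweeps purely to keep the sketch runnable."""
    for _ in range(200):
        u = jacobi_step(u)
    return u

u = np.zeros((64, 64))
u[0, :] = 1.0                          # heated top boundary
for _ in range(10):                    # warmup iterations on the physical solver
    u = jacobi_step(u)
u = surrogate(u)                       # one-shot "network" jump
while np.max(np.abs(jacobi_step(u) - u)) > 1e-6:   # refine to solver tolerance
    u = jacobi_step(u)
```

Because the final loop is the unmodified physical solver, the accepted solution satisfies the same convergence criterion as a pure-solver run, which is the property emphasized above.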

Like classical DL algorithms, CFDNet relies on training on large-scale datasets. Hence, it becomes impractical for high-resolution problems due to computationally prohibitive data collection and training. To overcome this limitation, we employ the idea of transfer learning (that is, reusing a model trained with a large number of samples for a task where data is scarce) and propose SURFNet, a transfer learning-based framework to accelerate high-resolution simulations. SURFNet performs data collection and training mostly at low resolution (64 × 256) while being evaluated at high resolutions (up to 2048 × 2048), improving the scalability of DL algorithms for CFD. SURFNet achieves a constant 2× acceleration across different flow configurations unseen during training (such as symmetric and non-symmetric airfoils) and across resolutions, showcasing resolution invariance up to 2048 × 2048 spatial resolution, significantly larger than those attempted in the literature.
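A hedged sketch of the transfer-learning step: reuse weights trained at low resolution and fine-tune only part of the network on scarce high-resolution data. The tiny fully convolutional network, checkpoint name, and tensors below are placeholders, not SURFNet itself:

```python
import torch
import torch.nn as nn

# Fully convolutional, so the same weights apply at 64x256 or 2048x2048.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
# net.load_state_dict(torch.load("lowres_pretrained.pt"))  # hypothetical low-res checkpoint

# Freeze all but the last convolution; fine-tune on a few high-res samples.
for p in list(net.parameters())[:-2]:
    p.requires_grad = False
opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad], lr=1e-4)

x_hi = torch.randn(1, 1, 512, 512)     # stand-ins for high-resolution fields
y_hi = torch.randn(1, 1, 512, 512)
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_hi), y_hi)
    loss.backward()
    opt.step()
```

The point of the pattern is that the expensive data collection stays at 64 × 256, while only a small amount of high-resolution data is needed to close the gap.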

SURFNet accelerates fluid simulations based on uniform meshes. However, since different regions of the domain present different flow complexity, we do not require uniform numerical accuracy throughout the domain. Adaptive mesh refinement (AMR) is an iterative technique that refines the mesh only in those regions that require higher numerical accuracy, and CFD solvers use it extensively for scalability. We propose ADARNet, a DL algorithm that predicts a non-uniform output and decides the final resolution of different domain regions in a single shot. Hence, ADARNet marries the advantages of DL (one-shot prediction) and AMR solvers (non-uniform refinement) to present a novel algorithm that outperforms both. Due to ADARNet's ability to super-resolve only regions of interest, it predicts the same target 1024 × 1024 spatial resolution 7-28.5× faster than state-of-the-art DL methods (which perform uniform super-resolution) and reduces the memory usage by 4.4-7.7×, showcasing improved scalability.
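The "refine only where needed" idea can be illustrated with a gradient-based tile-flagging pass; the threshold, nearest-neighbor upsampling, and test field below are toy choices, whereas ADARNet makes the per-region resolution decision with a network in a single shot:

```python
import numpy as np

def tile_needs_refinement(tile, thresh=0.05):
    """Flag tiles with steep gradients, i.e. regions of high flow complexity."""
    gy, gx = np.gradient(tile)
    return np.hypot(gx, gy).max() > thresh

def upsample2x(tile):
    """Cheap stand-in for super-resolving a tile (nearest-neighbor)."""
    return np.repeat(np.repeat(tile, 2, axis=0), 2, axis=1)

# Smooth near one corner, sharp near the other, like a wake or boundary layer.
coarse = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) ** 4
T = 16
refined = {}                           # (i, j) tile origin -> high-res tile
for i in range(0, 64, T):
    for j in range(0, 64, T):
        tile = coarse[i:i + T, j:j + T]
        if tile_needs_refinement(tile):
            refined[(i, j)] = upsample2x(tile)
print("refined tiles:", sorted(refined))
```

Super-resolving only the flagged tiles is what yields the memory and speed savings quoted above relative to uniform super-resolution.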

CFDNet, SURFNet, and ADARNet are hybrid DL-CFD frameworks that collectively improve the state of the art. First, CFDNet is a DL-based accelerator for iterative numerical schemes. Second, SURFNet scales CFDNet to high resolutions and allows acceleration of real-world aerospace design scenarios. Third, ADARNet is a direct method for AMR that offers high-resolution accuracy with significantly less compute and memory resources. The code for these frameworks is open source and can be found at https://github.com/oobiols/staidy.git

FPGA-Optimized Neural Network for Cloud Detection from Satellite Images

(2022)

This thesis presents a highly compact neural network model optimized for FPGA implementations, targeting real-time cloud detection from RGB satellite images. Our model uses an encoder-decoder structure without skip connections and uses piecewise-linear activation functions for low-resource hardware implementations. Cloud detection with deep learning has benefited from the progress in image recognition and computer vision, but at the cost of intensive computation requirements. Because of the complexity of state-of-the-art neural networks, these networks often cannot perform real-time processing on edge nodes. Hardware accelerators such as FPGAs can help, but naively porting neural network models without considering hardware characteristics can result in inefficient use of hardware resources and high power consumption. In this thesis, I modify a highly compact cloud-detection network, C-Unet++, for efficient hardware implementation on low-power FPGAs. The modified model has a slightly different structure, is quantized for integer operations, and uses piecewise-linear activation functions to reduce eventual FPGA resource requirements. The model is trained on the Cloud-38 dataset of RGB satellite images. The 32-bit floating-point model reaches 93.767% accuracy; the 16-bit quantized model reaches 89.856%.
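Two of the hardware-friendly choices above, piecewise-linear activations and integer quantization, can be sketched as follows; the clipping range, scale, and hard-sigmoid form are illustrative assumptions rather than the thesis's exact scheme:

```python
import numpy as np

def hard_sigmoid(x):
    """Piecewise-linear stand-in for the sigmoid: only multiply, add, and clip,
    all cheap to realize in FPGA fabric."""
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

def quantize_int16(w, w_max=1.0):
    """Uniform symmetric quantization of float weights to int16."""
    scale = w_max / 32767.0
    q = np.clip(np.round(w / scale), -32768, 32767).astype(np.int16)
    return q, scale

w = (0.1 * np.random.randn(8, 8)).astype(np.float32)
q, scale = quantize_int16(w)
print("max quantization error:", np.max(np.abs(w - q.astype(np.float32) * scale)))
```

The accuracy gap reported above (93.767% in float versus 89.856% at 16 bits) is the usual price of this kind of quantization, traded for integer-only arithmetic on the FPGA.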

Theory and Applications of Exceptional Points of Degeneracy in Microwave and High-Power Devices

(2022)

Electromagnetic (EM) structures are crucial for high-speed communications and radar systems. Enhancing the performance of such components is sometimes a game changer for applications that require unique features such as ultra-high sensitivity to perturbations, high output power, precise oscillation frequency, or high power-conversion efficiency. The performance of EM components is often limited by the regime of operation. This dissertation focuses on a new class of EM devices whose architecture relies on dispersion-engineering principles exploiting the so-called exceptional points of degeneracy (EPD) operational condition. Operating in the EPD regime pushes the performance boundaries of devices such as millimeter-wave and terahertz high-power sources.

An EPD is a singular point at which two or more spectral components of the EM field's spatial distribution coalesce. This dissertation investigates the degeneracy conditions in microwave, optical, and electron-beam devices and studies the remarkable physical properties of such devices when operating in the EPD regime.

We have discovered an EPD induced in a system made of a linear electron beam interacting with an electromagnetic guided mode in a vacuum tube. This enables a degenerate synchronous regime in backward wave oscillators (BWOs) in which power is extracted in a distributed fashion rather than at the end of the structure. The proposed concept is applied to BWOs operating at X-band and millimeter-wave frequencies. We demonstrate using particle-in-cell (PIC) simulations that EPD-BWOs have much higher output power and power-conversion efficiency than standard BWOs.

Finally, we propose a method that finds the eigenmodes of the interactive system in a traveling-wave tube (TWT). The proposed solver is based on accurate PIC simulations of a finite-length hot structure. Determining the wavenumbers and eigenvectors of the hot modes supported in a TWT is useful for studying hot-mode degeneracy conditions in hot slow-wave structures. Furthermore, the proposed method is applied to study electron beams in tunnels with complicated geometries, with the goal of estimating the reduced plasma frequency and understanding the degeneracy conditions.

Metabolic Systems Dyshomeostases Characterize Alzheimer’s Disease: Diverse Plasma Metabolomic Evidence from LOAD, DS-AD, and ADAD

(2022)

Alzheimer’s disease (AD) has proven remarkably refractory to proposed and approved therapies, none of which has strongly demonstrated the capability to halt, durably decelerate, or reverse cognitive decline in emerging disease. Although much translational research in AD has targeted amyloid plaque and tau proteopathies, burgeoning metabolomics technologies in the past decade have enabled large-scale surveys of the peripheral plasma metabolome in these vulnerably aging individuals. This is advantageous because substantial evidence exists that AD can be described as a complex biological system of peripherally evident metabolic dyshomeostases in the process of abnormal cognitive aging. It is substantially less clear, however, how personalized-medicine-relevant individual differences in AD etiology and cognitive staging map (as jeopardized CNS-peripheral axes) onto this diversity of interconnected and embedded metabolic networks.

To explore this question, sporadic late-onset AD (LOAD) participants at the preclinical stage of disease were profiled using genome-scale metabolic network modeling over features of the plasma metabolome altered relative to controls. This revealed a dysmetabolic signature (including lipids) that significantly overlapped with that of an independent cohort of preclinical LOAD participants. Further experiments in Down syndrome AD (DS-AD) suggested a similar alteration of lipids in manifest disease, but also of central carbon metabolites vital to cellular bioenergetic homeostasis. To examine this peripheral dysmetabolic heterogeneity more closely in comparable cognitive terms, preclinical LOAD and preclinical familial, autosomal dominant AD (ADAD) plasma were compared and found to demonstrate modest but significant overlap. To assess the specificity of this finding, preclinical plasma was also compared to that of individuals with objective cognitive deficits across both LOAD and ADAD. This again demonstrated modest but significant pathway overlap, and similar metabolic pathways emerged from correlational analyses between metabolomic features and estimated mutation-carrier years until diagnosis.

Because of this highly complex degree of residually non-shared, semantically dense information in the plasma metabolome across individual and clinical differences in AD, these biochemicals were mapped to inferred metabolic topics from de novo metabolic network modeling using natural language processing (NLP) approaches. Through these same topics, pairwise AD phenotypic comparisons were thus proportionally associated with clusters of biochemicals and enzymes. The fitted metabolic Topic 4 intriguingly implicated hexosamine/aminoglycan metabolism, which was particularly pronounced in comparisons involving “supernormal” older adults in the highest percentiles of resilient cognitive aging. Continuing to explore these clinical phenotypic-peripheral metabolic mappings will afford increasingly precise, semantic-level insights into the biochemical diversity of AD pathobiology. In addition to informing further targeted mechanistic research, this will also translationally nominate contextually rich, empirically ascertained biomarker and therapeutic-target candidates.
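Since the abstract does not name the specific NLP model, the sketch below uses LDA (via gensim) on made-up "documents" of metabolite names purely to illustrate the idea of topics as clusters of co-occurring biochemicals; all names and counts are placeholders:

```python
from gensim import corpora, models

# Each "document": metabolites altered in one phenotype comparison (made up).
docs = [
    ["glucosamine", "uridine", "lactate", "pyruvate"],
    ["glucosamine", "n-acetylglucosamine", "uridine"],
    ["palmitate", "oleate", "sphingosine", "lactate"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
for topic_id, words in lda.show_topics(num_topics=2, num_words=4, formatted=False):
    print(topic_id, [w for w, _ in words])
```

A topic dominated by hexosamine-pathway metabolites would play the role of the "Topic 4" described above, with phenotype comparisons weighted by how strongly they load on it.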

Essays on Quantitative Macroeconomics and Monetary Policy

(2022)

This dissertation contains three chapters on empirical macroeconomics and monetary policy.

In Chapter 1, I test the forecast performance of a small-scale Dynamic Stochastic General Equilibrium (DSGE) model with sentiment shocks. I relax the benchmark assumption of rational expectations and assume instead that economic agents behave in a near-rational fashion: every period they learn and update their beliefs using a constant-gain learning algorithm. Sentiment shocks are captured by exploiting observed data on expectations and are defined as deviations from the model-implied expectations due to exogenous waves of pessimism or optimism. The forecast evaluation compares the root mean squared prediction error of the canonical 3-equation New Keynesian model at different horizons and under different expectation assumptions: rational expectations, learning, and learning with sentiment. The results show that the model with learning and sentiment shocks not only competes with the other two alternatives but generally forecasts the output gap and the inflation rate better.

In Chapter 2, I use a small open economy DSGE model to investigate how Mexico's central bank conducted its monetary policy over the period 1995-2019. The main objective is to document systematic changes in the Bank of Mexico's reaction function by analyzing possible shifts in the parameters of the policy rule. The central bank's policy is modeled using a Taylor rule that relates the nominal interest rate to output, inflation, and the exchange rate. I employ Bayesian computational techniques and conduct rolling-window estimations to show explicitly the transition of the policy coefficients over the sample period. The chapter also examines the macroeconomic implications of these changes through rolling-window impulse-response functions. The results suggest that the Bank of Mexico's response to inflation has been steady since 1995, while the response to output and the exchange rate decreased and stabilized after 2002.

In Chapter 3, I reconsider whether monetary policy in small open economies responds to exchange rates by studying possible parameter instabilities in a DSGE model. The main focus is to revisit preceding evidence on the Bank of England's response to exchange-rate movements and to determine whether its reaction function has remained constant throughout the sample. To this end, I estimate a small open economy general equilibrium model over rolling windows using Bayesian econometric techniques. I find overwhelming evidence of shifts in several parameters, including those related to the policy rule. Posterior odds tests reveal a time-varying response to exchange-rate fluctuations by the monetary authorities: the evidence favors the model with the nominal exchange rate embedded in the policy rule for the initial subsamples, but it steadily evolves across windows and ultimately prefers the specification with no exchange rate. The chapter also documents variations in model dynamics driven by parameter instability via rolling-window impulse-response functions and variance-decomposition analysis.
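The constant-gain learning rule in Chapter 1 has a compact recursive form: beliefs are updated each period with a fixed gain, so recent observations are weighted more heavily than the distant past. A minimal numpy sketch, with an AR(1) forecasting rule and gain value chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.02                  # constant gain
phi = np.zeros(2)             # belief coefficients [intercept, AR(1) slope]
R = np.eye(2)                 # estimate of the regressors' second-moment matrix

y_prev = 0.0
for t in range(2000):
    y = 0.5 + 0.8 * y_prev + rng.normal(scale=0.1)  # true data-generating process
    x = np.array([1.0, y_prev])                     # regressors in agents' rule
    R = R + gamma * (np.outer(x, x) - R)            # constant-gain moment update
    phi = phi + gamma * np.linalg.solve(R, x) * (y - x @ phi)  # belief revision
    y_prev = y

print("learned coefficients:", phi)   # drifts around [0.5, 0.8]
```

In the sentiment specification, the gap between observed survey expectations and the expectations implied by such beliefs is what gets labeled a sentiment shock.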

Community fosters resiliency and growth in plants and scientists

(2022)

Climate change and environmental degradation resulting from anthropogenic activities disproportionately affect vulnerable and marginalized populations. Yet, these same populations are often excluded as participants and audiences from science communication and engagement efforts. Thus, we must provide resources and opportunities in science communication spaces that are accessible, inclusive, and co-created in collaboration with vulnerable communities. As a lesson learned from plants and microbes, we must work in partnership with marginalized populations to truly understand and develop effective systems of change to combat the effects of climate change.

Plants do not respond to environmental change in isolation; microbiomes containing mutualists and pathogens are ubiquitous in nature, and the influence of such interactions is poorly constrained in our understanding of plants' ecological and evolutionary responses to climate change. To investigate how plant-soil microbiomes and drought interact, I conducted a greenhouse experiment with two California grassland plants, Stipa pulchra and Phacelia parryi, exposed to soil inocula collected from a long-term water-manipulation experiment in a natural setting. We varied soil moisture in the greenhouse and hypothesized that the long-term history of drought provided by the soil inocula yields "drought-tuned" soil microbial communities that alter subsequent plant growth under water-limited conditions. In my first chapter, we found that watering treatment and soil treatment interacted for S. pulchra, such that plants exhibited greater drought tolerance when grown with drought-tuned microbes than with microbes associated with ambient water availability. No significant interaction was present for P. parryi, but plants in both the high and low water treatments yielded reduced total plant biomass when grown with drought-tuned microbes. These results help us better understand how plant-soil microbe interactions can direct plant growth patterns and highlight the importance of considering appropriate eco-evolutionary contexts when assessing species' responses to climate change.
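The interaction result reported above corresponds to a standard two-way factorial test; a hedged sketch with statsmodels follows, where the column names and data file are illustrative assumptions:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Expected columns: biomass (g), water ("high"/"low"), soil ("drought-tuned"/"ambient")
df = pd.read_csv("greenhouse_biomass.csv")

model = smf.ols("biomass ~ C(water) * C(soil)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # the C(water):C(soil) row tests the interaction
```

A significant interaction term is what distinguishes the S. pulchra result (drought tolerance contingent on microbial history) from the additive pattern seen in P. parryi.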

Similarly, people with marginalized identities in STEM cannot navigate academic spaces in isolation. To support and empower people from marginalized communities in STEM, it is critical for universities and scientific societies to consider how to make their science communication and policy training spaces accessible and engaging to broad audiences, including scientists with a wide array of educational backgrounds and social identities (e.g., race, ethnicity, sexual orientation, immigration status, disability). To create inclusive training spaces and truly foster a sense of community within STEM, it is critical to go beyond simply acknowledging or accommodating people of different backgrounds and instead intentionally create training spaces designed to be accessible and attainable to everyone from the outset. Therefore, I co-founded Reclaiming STEM, a workshop centering science communication and science policy training specifically for marginalized scientists (LGBTQ+, POC, femmes, disabled people, first-generation, etc.).

In my second chapter, I present our workshop model grounded in evidence-based practices, report the main themes and key takeaways from the past five years of Reclaiming STEM workshops, and share lessons learned from attendee reflections. In my third chapter, I analyzed over 700 applications to our workshop to understand how marginalized populations use their identities in science communication. I found that, based on their experiences in STEM, applicants wanted to foster a sense of STEM belonging in their own communities by using emotion- and identity-centered styles of science communication. These findings highlight a critical need to overhaul current science communication training programs to account for marginalized participants' needs and communication goals.