eScholarship
Open Access Publications from the University of California

This series is automatically populated with publications deposited by UCLA Henry Samueli School of Engineering and Applied Science Department of Civil and Environmental Engineering researchers in accordance with the University of California’s open access policies. For more information see Open Access Policy Deposits and the UC Publication Management System.

Factors and Processes Affecting Delta Levee System Vulnerability

(2016)

We appraised factors and processes related to human activities, high water, subsidence, and seismicity. Farming and drainage of peat soils caused subsidence, which contributed to internal levee failures. Subsidence rates have decreased with time but still contribute to levee instability. Modeling changes in seepage and static slope instability suggests an increased probability of failure with decreasing peat thickness. Additional data are needed to assess the spatial and temporal effects of subsidence from peat thinning and deformation. Large-scale state investment in levee upgrades (more than $700 million since the mid-1970s) has increased conformance with applicable standards; however, accounts conflict about corresponding reductions in the number of failures.

Modeling and history suggest that projected increases in high-flow frequency associated with climate change will increase the rate of levee failures. Quantifying this increased threat requires further research. A reappraisal of seismic threats resulted in updated ground motion estimates for multiple faults and earthquake-occurrence frequencies. Estimated ground motions are large enough to induce failure. The immediate seismic threat, liquefaction, is the sudden loss of strength from an increase in the pressure of the pore fluid and the corresponding loss of inter-particle contact forces. However, levees damaged during an earthquake that do not immediately fail may eventually breach. Key sources of uncertainty include occurrence frequencies and magnitudes, localized ground motions, and data for liquefaction potential.

Estimates of the consequences of future levee failure range up to multiple billions of dollars. Analysis of future risks will benefit from improved description of levee upgrades and strength, as well as consideration of subsidence, the effects of climate change, and earthquake threats. The ecosystem benefits of levee habitat in this highly altered system are few. Better recognition and coordination are needed among the creation of high-value habitat, levee needs, and the costs and benefits of levee improvements and breaches.

An Integrated Framework for Infectious Disease Control Using Mathematical Modeling and Deep Learning.

(2025)

Infectious diseases are a major global public health concern. Precise modeling and prediction methods are essential to develop effective strategies for disease control. However, data imbalance and the presence of noise and intensity inhomogeneity make disease detection more challenging. Goal: In this article, a novel infectious disease pattern prediction system is proposed by integrating the benefits of deterministic and stochastic models with those of a deep learning model. Results: The combined benefits yield improved prediction performance. A further objective is to investigate the influence of time delay on infection rates and vaccination rates. Conclusions: In the proposed framework, global stability at the disease-free equilibrium is first analysed using the Routh-Hurwitz criteria and the Lyapunov method, and the endemic equilibrium is analysed using non-linear Volterra integral equations in the infectious disease model. Unlike existing models, emphasis is placed on a model capable of investigating stability while considering the effects of vaccination and migration rates. Next, the influence of vaccination on the infection rate is predicted using an efficient deep learning model that exploits long-term dependencies in sequential data, making the prediction more accurate.
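The deterministic backbone of such a framework is typically a compartmental ODE system. The sketch below is an illustrative SIR model with a vaccination term, not the paper's exact delayed/stochastic formulation; the parameter values and the simple SIR structure are assumptions chosen only to show how the basic reproduction number relates to stability of the disease-free equilibrium.

```python
# Illustrative SIR model with a vaccination term (forward-Euler sketch).
# beta, gamma, nu and the SIR structure are assumptions for this demo,
# not the paper's exact formulation (which adds time delay and migration).
beta, gamma, nu = 0.3, 0.1, 0.05   # transmission, recovery, vaccination rates

def step(s, i, r, dt=0.1):
    ds = -beta * s * i - nu * s
    di = beta * s * i - gamma * i
    dr = gamma * i + nu * s
    return s + ds * dt, i + di * dt, r + dr * dt

# Basic reproduction number: the disease-free equilibrium is locally
# stable (the Routh-Hurwitz conditions hold) when R0 < 1.
R0 = beta / gamma
print(f"R0 = {R0:.1f}")

s, i, r = 0.99, 0.01, 0.0
for _ in range(1000):                # integrate to t = 100
    s, i, r = step(s, i, r)
print(f"infected fraction at t = 100: {i:.4f}")
```

Here vaccination steadily depletes the susceptible pool, so the outbreak eventually dies out even though R0 > 1 initially; the paper's framework additionally analyses time delays and migration, which this sketch omits.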

Proteomics insights into the fungal-mediated bioremediation of environmental contaminants

(2024)

As anthropogenic activities continue to introduce various contaminants into the environment, the need for effective monitoring and bioremediation strategies is critical. Fungi, with their diverse enzymatic arsenal, offer promising solutions for the biotransformation of many pollutants. While conventional research reports on ligninolytic, oxidoreductive, and cytochrome P450 (CYP) enzymes, the vast potential of fungi, with approximately 10,345 protein sequences per species, remains largely untapped. This review describes recent advancements in fungal proteomics instrumentation and software and highlights fungal detoxification mechanisms and biochemical pathways. Additionally, it highlights lesser-known fungal enzymes with potential applications in environmental biotechnology. By reviewing the benefits and challenges associated with proteomics tools, we hope to summarize and promote the study of fungi and fungal proteins relevant to the environment.

ZeroCAL: Eliminating Carbon Dioxide Emissions from Limestone's Decomposition to Decarbonize Cement Production.

(2024)

Limestone (calcite, CaCO3) is an abundant and cost-effective source of calcium oxide (CaO) for cement and lime production. However, the thermochemical decomposition of limestone (∼800 °C, 1 bar) to produce lime (CaO) results in substantial carbon dioxide (CO2(g)) emissions and energy use, i.e., ∼1 tonne [t] of CO2 and ∼1.4 MWh per t of CaO produced. Here, we describe a new pathway that uses CaCO3 as a Ca source to make hydrated lime (portlandite, Ca(OH)2) at ambient conditions (p, T), while nearly eliminating process CO2(g) emissions (as low as 1.5 mol % of the CO2 in the precursor CaCO3, equivalent to 9 kg of CO2(g) per t of Ca(OH)2), within an aqueous flow-electrolysis/pH-swing process that coproduces hydrogen (H2(g)) and oxygen (O2(g)). Because Ca(OH)2 is a zero-carbon precursor for cement and lime production, this approach represents a significant advancement in the production of zero-carbon cement. The Zero CArbon Lime (ZeroCAL) process includes dissolution, separation/recovery, and electrolysis stages according to the following steps: (Step 1) chelator (e.g., ethylenediaminetetraacetic acid, EDTA)-promoted dissolution of CaCO3 and complexation of Ca2+ under basic (>pH 9) conditions; (Step 2a) Ca enrichment and separation using nanofiltration (NF), which allows separation of the Ca-EDTA complex from the accompanying bicarbonate (HCO3−) species; (Step 2b) acidity-promoted decomplexation of Ca from EDTA, which allows near-complete chelator recovery and the formation of a Ca-enriched stream; and (Step 3) rapid precipitation of Ca(OH)2 from the Ca-enriched stream using electrolytically produced alkalinity. These reactions can be conducted in a seawater matrix, yielding coproducts including hydrochloric acid (HCl) and sodium bicarbonate (NaHCO3), resulting from electrolysis and limestone dissolution, respectively.
Careful analysis of the reaction stoichiometries and energy balances indicates that approximately 1.35 t of CaCO3, 1.09 t of water, 0.79 t of sodium chloride (NaCl), and ∼2 MWh of electrical energy are required to produce 1 t of Ca(OH)2, with significant opportunity for process intensification. This approach has major implications for decarbonizing cement production within a paradigm that emphasizes the use of existing cement plants and electrification of industrial operations, while also creating approaches for alkalinity production that enable cost-effective and scalable CO2 mineralization via Ca(OH)2 carbonation.
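The reported mass balance can be sanity-checked from the one-to-one Ca stoichiometry of CaCO3 → Ca(OH)2. The short sketch below uses only standard molar masses (no figures from the paper beyond those quoted) and reproduces the ∼1.35 t limestone requirement and the ∼9 kg residual-CO2 figure:

```python
# Sanity check of the ZeroCAL mass balance from molar masses (g/mol).
M_CACO3 = 100.09   # calcite, CaCO3
M_CAOH2 = 74.09    # portlandite, Ca(OH)2
M_CO2 = 44.01

mol_ca = 1e6 / M_CAOH2                 # mol of Ca per tonne of Ca(OH)2
t_caco3 = mol_ca * M_CACO3 / 1e6       # tonnes of limestone required
t_co2 = mol_ca * M_CO2 / 1e6           # tonnes of CO2 bound in that limestone

print(f"CaCO3 per t Ca(OH)2: {t_caco3:.2f} t")   # ≈ 1.35 t
print(f"CO2 in precursor:    {t_co2:.2f} t")     # ≈ 0.59 t
# The abstract cites residual process emissions of 1.5 mol % of that CO2:
print(f"residual CO2:        {0.015 * t_co2 * 1e3:.0f} kg")   # ≈ 9 kg
```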

Implications of mHVSR Spatial Variability on Site Response Predictability

(2024)

One-dimensional ground response analyses (GRA) can introduce model error to site response estimates when wave propagation is not dominated by vertically propagating shear waves. We identify sites suitable for GRA based on microtremor horizontal-to-vertical spectral ratios (mHVSRs). We analyzed 300 microtremor recordings from 17 vertical array sites in California, comparing mHVSRs at varying spatial separations. We find that low mHVSR spatial correlation, as measured using Longest Common Subsequence, tends to occur at vertical array sites that are poorly modeled by GRA. Conversely, stronger mHVSR correlations tend to occur at sites where GRA is relatively effective.
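A tolerance-based Longest Common Subsequence (LCSS) score is one way to quantify similarity between two mHVSR curves sampled at common frequencies. The sketch below is a generic implementation; the epsilon tolerance, the min-length normalization, and the example curves are illustrative assumptions, not the study's settings.

```python
# Sketch of a tolerance-based Longest Common Subsequence (LCSS) similarity
# between two spectral-ratio curves. The eps tolerance and min-length
# normalization are illustrative assumptions, not the study's settings.
def lcss(a, b, eps=0.2):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(a[i - 1] - b[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / min(m, n)   # 1.0 = curves match within tolerance

# Two hypothetical mHVSR amplitude curves sampled at the same frequencies:
hvsr_a = [1.0, 1.2, 2.5, 3.1, 2.0, 1.1]
hvsr_b = [1.1, 1.3, 2.4, 3.0, 1.9, 1.0]
print(lcss(hvsr_a, hvsr_b))   # close curves give similarity near 1.0
```

A high LCSS score between closely spaced recordings would indicate spatially consistent site response, the condition the abstract associates with sites that GRA models well.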

Monolithic Polyepoxide Membranes for Nanofiltration Applications and Sustainable Membrane Manufacture.

(2024)

The present work details the development of carbon fiber-reinforced epoxy membranes with excellent rejection of small-molecule dyes. It is a proof-of-concept for a more sustainable membrane design incorporating carbon fibers and their recycling and reuse. 4,4′-Methylenebis(cyclohexylamine) (MBCHA) polymerized with either bisphenol-A diglycidyl ether (BADGE) or tetraphenolethane tetraglycidyl ether (EPON Resin 1031) in polyethylene glycol (PEG) was used to make monolithic membranes reinforced by nonwoven carbon fibers. Membrane pore sizes were tuned by adjusting the molecular weight of the PEG used in the initial polymerization. Membranes made of BADGE-MBCHA showed rejection of Rose Bengal approaching 100%, while tuning the pore sizes substantially increased the rejection of Methylene Blue from ~65% to nearly 100%. The membrane with the best permselectivity was made of EPON-MBCHA polymerized in PEG 300. It has an average deionized (DI) water flux of 4.48 LMH/bar and average rejections of 99.6% and 99.8% for Rose Bengal and Methylene Blue dyes, respectively. Degradation in 1.1 M sodium hypochlorite enabled retrieval of the carbon fiber from the epoxy matrix, suggesting that the monolithic membranes could be recycled to retrieve high-value products rather than downcycled for incineration or used as a lower-selectivity membrane. The mechanism of epoxy degradation is hypothesized to be part chemical and part physical, with intense swelling stress leading to erosion that leaves behind undamaged carbon fibers. The retrieved fibers were successfully used to make another membrane exhibiting performance similar to membranes made with pristine fibers.

A machine learning-based analysis of liquefaction input factors using the Next Generation Liquefaction database

(2024)

Liquefaction triggering is typically predicted using fully empirical and/or semi-empirical models. Such models are therefore heavily reliant upon available liquefaction (and non-liquefaction) case history data. These predictive models are based on a variety of factors describing the demand (i.e., the cyclic stress ratio, CSR, in existing legacy models) and the capacity (i.e., the cyclic resistance ratio, CRR). However, the degree to which these factors truly affect model performance is unknown. To explore this aspect and quantitatively rank the importance of liquefaction model input parameters, we leverage a Random Forest machine learning (ML) approach using two methods: (1) a feature importance metric based on the Gini impurity index, and (2) a SHapley Additive exPlanations (SHAP)-based approach. Both approaches were applied to typical input factors used in legacy liquefaction triggering models based on cone penetration test (CPT) data. These analyses used all reviewed (i.e., fully vetted) data in the Next Generation Liquefaction (NGL) database. Our analysis then separately explores the impact of seven input parameters on the resulting models. We show that the most important input parameters are: (1) the peak ground acceleration, (2) the soil behavior type index, and (3) the earthquake magnitude (which serves as a proxy for duration in such models). The input parameters with the lowest importance are the total and effective vertical stresses. A limitation of this analysis is that the ML model used does not allow extrapolation beyond the range of the data. As a result, for input parameters with narrow data distributions (i.e., a somewhat limited parameter space), a low ranking could reflect the limited range of available values rather than actual low importance. This limitation likely accounts for the low importance attached to stress-related input parameters, since legacy case histories generally correspond to shallower (<10 m) depths.
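The Gini-importance ranking step can be sketched with scikit-learn. The example below uses synthetic data, not the NGL database; the feature names follow the abstract, but the triggering rule, value ranges, and sample size are assumptions made purely to illustrate how the ranking is extracted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative Gini-importance ranking of liquefaction input parameters
# on SYNTHETIC data (not the NGL database); feature names follow the text.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.05, 0.8, n),   # peak ground acceleration, PGA (g)
    rng.uniform(1.0, 3.5, n),    # soil behavior type index, Ic
    rng.uniform(5.0, 8.0, n),    # earthquake magnitude, Mw
    rng.uniform(50, 300, n),     # total vertical stress (kPa)
    rng.uniform(30, 200, n),     # effective vertical stress (kPa)
])
# Synthetic triggering rule dominated by PGA and Ic (an assumption for the demo):
y = (X[:, 0] > 0.3) & (X[:, 1] < 2.6)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["PGA", "Ic", "Mw", "sigma_v", "sigma_v_eff"],
                     rf.feature_importances_):
    print(f"{name:12s} {imp:.3f}")
```

Because the synthetic labels depend only on PGA and Ic, those two features dominate the importance vector; on real case-history data the ranking reflects both true physical influence and, as the abstract cautions, the sampled parameter range.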

CEERS: 7.7 μm PAH Star Formation Rate Calibration with JWST MIRI

(2024)

We test the relationship between UV-derived star formation rates (SFRs) and the 7.7 μm polycyclic aromatic hydrocarbon (PAH) luminosities from the integrated emission of galaxies at z ∼ 0-2. We utilize multiband photometry covering 0.2-160 μm from the Hubble Space Telescope, CFHT, JWST, Spitzer, and Herschel for galaxies in the Cosmic Evolution Early Release Science (CEERS) Survey. We perform spectral energy distribution (SED) modeling of these data to measure dust-corrected far-UV (FUV) luminosities, LFUV, and UV-derived SFRs. We then fit SED models to the JWST/MIRI 7.7-21 μm CEERS data to derive rest-frame 7.7 μm luminosities, L770, using the average flux density in the rest-frame MIRI F770W bandpass. We observe a correlation between L770 and LFUV, where log L770 ∝ (1.27 ± 0.04) log LFUV. L770 diverges from this relation for galaxies at lower metallicities, lower dust obscuration, and for galaxies dominated by evolved stellar populations. We derive a “single-wavelength” SFR calibration for L770 that has a scatter from model-estimated SFRs (σΔSFR) of 0.24 dex. We derive a “multiwavelength” calibration for the linear combination of the observed FUV luminosity (uncorrected for dust) and the rest-frame 7.7 μm luminosity, which has a scatter of σΔSFR = 0.21 dex. The relatively small decrease in σ suggests this is near the systematic accuracy of the total SFRs using either calibration. These results demonstrate that the rest-frame 7.7 μm emission constrained by JWST/MIRI is a tracer of the SFR for distant galaxies to this accuracy, provided the galaxies are dominated by star formation with moderate-to-high levels of attenuation and metallicity.
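The quoted power-law slope amounts to an ordinary least-squares fit in log-log space. The sketch below generates synthetic luminosities with the reported slope plus scatter (the value ranges and noise level are assumptions) purely to illustrate the fitting and scatter-measurement steps:

```python
import numpy as np

# Recovering a power-law slope log(L_7.7) = a * log(L_FUV) + b by least
# squares; the data are SYNTHETIC, generated with the reported slope 1.27.
rng = np.random.default_rng(1)
log_Lfuv = rng.uniform(9.0, 11.5, 200)      # log10 L_FUV (arbitrary units)
log_L770 = 1.27 * log_Lfuv - 2.0 + rng.normal(0, 0.2, 200)

slope, intercept = np.polyfit(log_Lfuv, log_L770, 1)
print(f"fitted slope: {slope:.2f}")          # close to 1.27 by construction

# Scatter about the fit, analogous to the sigma_dSFR quoted in the abstract:
resid = log_L770 - (slope * log_Lfuv + intercept)
print(f"scatter: {resid.std():.2f} dex")
```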