eScholarship
Open Access Publications from the University of California

LBL Publications

Lawrence Berkeley National Laboratory (Berkeley Lab) has been a leader in science and engineering research for more than 70 years. Located on a 200-acre site in the hills above the Berkeley campus of the University of California, overlooking the San Francisco Bay, Berkeley Lab is a U.S. Department of Energy (DOE) National Laboratory managed by the University of California. It has an annual budget of nearly $480 million (FY2002) and employs a staff of about 4,300, including more than a thousand students.

Berkeley Lab conducts unclassified research across a wide range of scientific disciplines with key efforts in fundamental studies of the universe; quantitative biology; nanoscience; new energy systems and environmental solutions; and the use of integrated computing as a tool for discovery. It is organized into 17 scientific divisions and hosts four DOE national user facilities. Details on Berkeley Lab's divisions and user facilities are available on the Laboratory's website.

Total Cost of Ownership and Evaluation of Google Cloud Resources for the ATLAS Experiment at the LHC

(2025)

Abstract: The ATLAS Google Project was established as part of an ongoing evaluation of the use of commercial clouds by the ATLAS Collaboration, in anticipation of the potential future adoption of such resources by WLCG grid sites to fulfil or complement their computing pledges. Seamless integration of Google cloud resources into the worldwide ATLAS distributed computing infrastructure was achieved at large scale and for an extended period of time, and hence cloud resources are shown to be an effective mechanism to provide additional, flexible computing capacity to ATLAS. For the first time a total cost of ownership analysis has been performed, to identify the dominant cost drivers and explore effective mechanisms for cost control. Network usage significantly impacts the costs of certain ATLAS workflows, underscoring the importance of implementing such mechanisms. Resource bursting has been successfully demonstrated, whilst exposing the true cost of this type of activity. A follow-up to the project is underway to investigate methods for improving the integration of cloud resources in data-intensive distributed computing environments and reducing costs related to network connectivity, which represents the primary expense when extensively utilising cloud resources.
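
The cost structure described above can be illustrated with a toy total-cost-of-ownership model. All prices and workload figures below are invented placeholders, not numbers from the ATLAS Google Project; the sketch only shows how network egress charges can come to dominate a cloud bill for data-intensive workflows.

```python
# Toy TCO model for cloud-based batch computing. Every price and workload
# number here is an illustrative assumption, not a figure from the paper.

def monthly_tco(core_hours, tb_stored, tb_egress,
                price_core_hour=0.03, price_tb_month=20.0, price_tb_egress=80.0):
    """Return (compute, storage, network) monthly cost components in USD."""
    compute = core_hours * price_core_hour
    storage = tb_stored * price_tb_month
    network = tb_egress * price_tb_egress
    return compute, storage, network

# A data-intensive workflow that exports most of its output off-cloud:
compute, storage, network = monthly_tco(core_hours=100_000,
                                        tb_stored=500, tb_egress=2_000)
total = compute + storage + network
print(f"compute ${compute:,.0f}, storage ${storage:,.0f}, network ${network:,.0f}")
# With these assumed prices, egress is the largest single component,
# mirroring the finding that network connectivity is the primary expense.
```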

Unlikelihood of a phonon mechanism for the high-temperature superconductivity in La3Ni2O7

(2025)

The discovery of ~80 K superconductivity in the nickelate La3Ni2O7 under pressure has ignited intense interest. Here, we present a comprehensive first-principles study of the electron-phonon (e-ph) coupling in La3Ni2O7 and its implications for the observed superconductivity. We conclude that the e-ph coupling is too weak (with a coupling constant λ ≲ 0.5) to account for the high Tc, although interesting many-electron correlation effects are present. While Coulomb interactions (via GW self-energy and Hubbard U) enhance the e-ph coupling strength, electron doping (oxygen vacancies) introduces no major changes. Additionally, different structural phases display varying characteristics near the Fermi level, but these do not alter the conclusion. The e-ph coupling landscape of La3Ni2O7 is intrinsically different from that of the infinite-layer nickelates. These findings suggest that a phonon-mediated mechanism is unlikely to be responsible for the observed superconductivity in La3Ni2O7, pointing instead to an unconventional nature.
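
A coupling constant of this size can be translated into a rough Tc estimate with the standard McMillan formula from BCS-Eliashberg theory. The Debye temperature and Coulomb pseudopotential below are generic assumed values, not parameters from the paper; the point is only that λ ≈ 0.5 yields a Tc of a few kelvin, far below the observed ~80 K.

```python
import math

def mcmillan_tc(lam, theta_D=400.0, mu_star=0.10):
    """McMillan estimate of the superconducting Tc (K) from the e-ph coupling
    constant lam, Debye temperature theta_D, and Coulomb pseudopotential
    mu_star. theta_D and mu_star here are generic illustrative values."""
    return (theta_D / 1.45) * math.exp(-1.04 * (1 + lam) /
                                       (lam - mu_star * (1 + 0.62 * lam)))

tc = mcmillan_tc(0.5)
print(f"Tc ≈ {tc:.1f} K for λ = 0.5")  # a few kelvin, far below ~80 K
```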

Realizing tunable Fermi level in SnTe by defect control

(2025)

The tuning of the Fermi level in tin telluride, a topological crystalline insulator, is essential for accessing its unique surface states and optimizing its electronic properties for applications such as spintronics and quantum computing. In this study, we demonstrate that the Fermi level in tin telluride can be effectively modulated by controlling the tin concentration during chemical vapor deposition synthesis. By introducing tin-rich conditions, we observed a blue shift in the X-ray photoelectron spectroscopy core-level peaks of both tin and tellurium, indicating an upward shift in the Fermi level. This shift is corroborated by a decrease in work function values measured via ultraviolet photoelectron spectroscopy, confirming the suppression of Sn vacancies. Our findings provide a low-cost, scalable method to achieve tunable Fermi levels in tin telluride, offering a significant advancement in the development of materials with tailored electronic properties for next-generation technological applications.

Assessing the behavioral realism of energy system models in light of the consumer adoption literature

(2025)

Effective policymaking to achieve net zero greenhouse gas emissions demands an understanding of the complex drivers of, and barriers to, consumer adoption behavior via behaviorally realistic energy system models. Existing models tend to oversimplify by focusing on homogenized financial factors while neglecting consumer heterogeneity and non-monetary influences. This study develops and applies a comprehensive framework for evaluating the behavioral realism of consumer adoption models, informed by the adoption literature. It introduces a typology for factors influencing low-carbon technology adoption decisions: monetary and nonmonetary factors relating to household characteristics, psychology, technological attributes, and contextual conditions. Next, reviews of the consumer adoption and decision-making literature identify the most influential adoption factor categories for distributed solar photovoltaics, electric vehicles, and air-source heat pumps. Finally, the extent to which a selection of energy system models accounts for these adoption factors is assessed. Existing models predominantly emphasize the economic aspects of technology, which are generally identified as the most important factors. Where the models fall short — in considering moderately important factor categories — sector-specific and agent-based models can offer more behaviorally realistic insights. This study sheds light on which types of factors are most important for consumer adoption decisions and investigates how well current models rise to the challenge of behavioral realism. The end-to-end analysis presented enables internally consistent comparisons across models and energy technologies. This research advances timely conversations on consumer adoption. It could inform more behaviorally realistic energy system modeling, and thereby more effective decarbonization policymaking.

Broad range material-to-system screening of metal–organic frameworks for hydrogen storage using machine learning

(2025)

Hydrogen is pivotal in the transition to sustainable energy systems, playing major roles in power generation and industrial applications. Metal–organic frameworks (MOFs) have emerged as promising media for efficient hydrogen storage. However, identifying potential candidates for deployment is challenging due to the vast number of synthesized MOFs currently available. This study integrates molecular simulations, machine learning, and techno-economic analysis to evaluate the performance of MOFs across a broad range of operating conditions for hydrogen storage applications. While previous screenings of MOF databases have predominantly emphasized high hydrogen capacities under cryogenic conditions, this study reveals that the optimal temperatures and pressures for cost minimization depend on the raw price of the MOF. Specifically, when MOFs are priced at $15/kg, among the 9720 MOFs tested, 9692 achieve their lowest cost at temperatures between 170 K and 250 K and a pressure of 150 bar. Under these optimal conditions, 362 MOFs deliver a lower levelized cost of storage than 350 bar compressed-gas hydrogen storage. Furthermore, this study reveals key material properties that result in low system cost, such as high surface areas (>3000 m2/g), large void fractions (>0.78), and large pore volumes (>1.1 cm3/g).
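
The screening workflow — evaluate a cost model over a grid of operating temperatures and pressures and keep the minimum — can be sketched as below. The cost surrogate and every coefficient in it are invented for illustration; only the grid-search structure mirrors the study, which used molecular simulation and machine-learning models to supply the real cost inputs.

```python
import itertools

def levelized_cost(temp_K, pressure_bar, mof_price_per_kg=15.0):
    """Toy cost surrogate: uptake improves at low T and with pressure
    (saturating), while cryogenic cooling and compression add cost.
    The functional form and coefficients are invented for illustration."""
    capacity = (350.0 - temp_K) / 250.0 * pressure_bar / (pressure_bar + 150.0)
    cooling = 5e-5 * (298.0 - temp_K) ** 2     # penalty for deep cooling
    compression = 0.003 * pressure_bar         # penalty for high pressure
    material = 0.01 * mof_price_per_kg         # MOF raw-material contribution
    return (1.0 + cooling + compression + material) / capacity

# Grid search over operating conditions, as in a material-to-system screen:
temps = range(100, 301, 10)        # K
pressures = range(50, 351, 50)     # bar
best = min(itertools.product(temps, pressures),
           key=lambda tp: levelized_cost(*tp))
print("lowest-cost condition (T in K, P in bar):", best)
```

With a real capacity model per MOF, the same loop would run once per framework, which is why the paper's optimum shifts with the MOF's raw price.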

A novel approach for large-scale wind energy potential assessment

(2025)

Increasing wind energy generation is central to grid decarbonization, yet methods to estimate wind energy potential are not standardized, leading to inconsistencies and even skewed results. This study aims to improve the fidelity of wind energy potential estimates through an approach that integrates geospatial analysis and machine learning (i.e., Gaussian process regression). We demonstrate this approach by assessing the spatial distribution of wind energy capacity potential in the Contiguous United States (CONUS). We find that the capacity-based power density ranges from 1.70 MW/km2 (25th percentile) to 3.88 MW/km2 (75th percentile) for existing wind farms in the CONUS. The value is lower in agricultural areas (2.73 ± 0.02 MW/km2, mean ± 95% confidence interval) and higher in other land cover types (3.30 ± 0.03 MW/km2). Notably, advancements in turbine manufacturing could reduce power density in areas with lower wind speeds through the adoption of low specific-power turbines, but increase power density in areas with higher wind speeds (>8.35 m/s at 120 m above the ground), highlighting opportunities for repowering existing wind farms. Wind energy potential is shaped by wind resource quality and is regionally characterized by land cover and physical conditions, revealing significant capacity potential in the Great Plains and Upper Texas. The results indicate that areas previously identified as hot spots using existing approaches (e.g., west of the Rocky Mountains) may have limited capacity potential due to low wind resource quality. The improvements in methodology and capacity potential estimates in this study could serve as a new basis for future energy systems analysis and planning.
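
The regression step can be illustrated with a minimal numpy-only Gaussian process sketch. The wind-speed/power-density pairs below are made-up toy values, not data from the study, and a practical model would also tune the kernel hyperparameters and include more predictors than wind speed alone.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two 1D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

# Toy training data (illustrative, not from the paper):
x_train = np.array([5.0, 6.0, 7.0, 8.0, 9.0])   # mean wind speed, m/s
y_train = np.array([2.0, 2.4, 2.9, 3.3, 3.8])   # capacity density, MW/km^2
noise = 1e-4

# Standard GP regression: posterior mean = k(x*, X) K^-1 y
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
x_test = np.array([7.5])
k_star = rbf(x_train, x_test)
alpha = np.linalg.solve(K, y_train)
y_pred = k_star.T @ alpha
print(f"predicted density at 7.5 m/s: {y_pred[0]:.2f} MW/km^2")
```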

Semi-automatic image annotation using 3D LiDAR projections and depth camera data

(2025)

Efficient image annotation is necessary to utilize deep learning object recognition neural networks in nuclear safeguards, such as for the detection and localization of target objects like nuclear material containers (NMCs). This capability can help automate the inventory accounting of different types of NMCs within nuclear storage facilities. The conventional manual annotation process is labor-intensive and time-consuming, hindering the rapid deployment of deep learning models for NMC identification. This paper introduces a novel semi-automatic method for annotating 2D images of NMCs by combining 3D light detection and ranging (LiDAR) data with color and depth camera images collected from a handheld scan system. The annotation pipeline involves an operator manually marking new target objects on a LiDAR-generated map and projecting these 3D locations onto images, thereby automatically creating annotations from the projections. The semi-automatic approach significantly reduces the manual effort and annotation expertise required to perform the task, allowing deep learning models to be trained on-site within a few hours. The paper compares the performance of models trained on datasets annotated through various methods, including semi-automatic, manual, and commercial annotation services. The evaluation demonstrates that the semi-automatic annotation method achieves comparable or superior results, with a mean average precision (mAP) above 0.9, showcasing its efficiency in training object recognition models. Additionally, the paper explores the application of the proposed method to instance segmentation, achieving promising results in detecting multiple types of NMCs in various formations.
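
The core projection step — turning a 3D box marked on the LiDAR map into a 2D bounding-box annotation — can be sketched with a pinhole camera model. The intrinsics matrix and box corners below are illustrative values, and the sketch assumes the points are already transformed into the camera frame (the real pipeline must also estimate that LiDAR-to-camera pose).

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy), illustrative only:
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_cam):
    """Project Nx3 points (camera frame, z forward, meters) to Nx2 pixels."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Eight corners of a container-sized box about 4 m in front of the camera:
corners = np.array([[x, y, z] for x in (-0.3, 0.3)
                              for y in (-0.4, 0.4)
                              for z in (3.8, 4.2)])
px = project(corners)
u_min, v_min = px.min(axis=0)   # bounding box = extent of projected corners
u_max, v_max = px.max(axis=0)
print(f"auto-annotation bbox: ({u_min:.0f}, {v_min:.0f}) "
      f"to ({u_max:.0f}, {v_max:.0f})")
```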

Technoeconomic analysis for near-term scale-up of bioprocesses

(2025)

Growing the bioeconomy requires products and pathways that are cost-competitive. Technoeconomic analyses (TEAs) aim to predict the long-term economic viability and often use what are known as nth plant cost and performance parameters. However, as TEA is more widely adopted to inform everything from early-stage research to company and investor decision-making, the nth plant approach is inadequate and risks being misused to inform the early stages of scale-up. Some methods exist for conducting first-of-a-kind/pioneer plant cost analyses, but these receive less attention and have not been critically evaluated. This article explores TEA methods for early-stage scale-up, critically evaluates their applicability to biofuels and bioproducts, and recommends strategies for producing TEA results better suited to guiding prioritization and successful scale-up of bioprocesses.
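
One common way to relate pioneer (first-of-a-kind) plant costs to nth-plant costs — not necessarily the method this article recommends — is an experience curve, where each doubling of cumulative builds cuts cost by a fixed learning rate. The 15% learning rate and $400M pioneer cost below are purely illustrative assumptions.

```python
import math

def nth_plant_cost(first_cost, n, learning_rate=0.15):
    """Experience-curve cost of the nth plant: each doubling of builds
    reduces cost by learning_rate. All inputs here are illustrative."""
    b = math.log2(1 - learning_rate)   # progress exponent (negative)
    return first_cost * n ** b

first = 400.0  # $M, assumed pioneer-plant capital cost
for n in (1, 2, 4, 8):
    print(f"plant {n}: ${nth_plant_cost(first, n):.0f}M")
```

The gap between the n = 1 and n = 8 values is one way to quantify how misleading nth-plant assumptions can be when applied to early-stage scale-up.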

Absorbing boundary conditions in material point method adopting perfectly matched layer theory

(2025)

This study focuses on solving the numerical challenges of imposing absorbing boundary conditions for dynamic simulations in the material point method (MPM). To attenuate elastic waves leaving the computational domain, the current work integrates Perfectly Matched Layer (PML) theory into the implicit MPM framework. The proposed approach introduces absorbing particles surrounding the computational domain that efficiently absorb outgoing waves and reduce reflections, allowing for accurate modeling of wave propagation and its subsequent impact on geotechnical slope stability analysis. The study includes benchmark tests to validate the effectiveness of the proposed method, covering various impulse loads as well as symmetric and asymmetric base shaking. The numerical tests also demonstrate the ability to handle large-deformation problems, including the failure of elasto-plastic soils under gravity and dynamic excitations. The findings extend the capability of MPM to simulate earthquake-induced landslides continuously, from shaking to failure.

Machine learning for reactor power monitoring with limited labeled data

(2025)

Real-time reactor power monitoring is critical for a variety of nuclear applications, spanning safety, security, operations, and maintenance. While machine learning methods have shown promise in monitoring reactor power levels, there is limited research on their efficacy in label-starved environments. The goal of this work is to assess the feasibility of classifying nuclear reactor power level using multisource data in scenarios with limited labels. Data were collected using low-resolution multisensors at four nuclear reactor facilities: two large research reactors and two TRIGA reactors. Within each pair, one reactor dataset served as the source and the other as the target in a transfer learning paradigm. Twenty-three supervised models were trained on labeled sequences of magnetic field and acceleration data from each of the target sites. Self-learning and transfer learning methods were applied to the top performing models to assess their classification performance with increasing amounts of labeled data. While reactor power level classification was achieved with a Matthews Correlation Coefficient of up to 0.739 ± 0.003 and 0.622 ± 0.009 with only 400 sequences per power state for the large research reactor and TRIGA target sites, respectively, self-learning and transfer learning leveraging source site data did not improve target classification performance. These findings suggest that alternative methods, such as higher sensitivity sensors, digital twins, or the use of physics-informed models, are required to enable high-performance classification in machine learning approaches to reactor monitoring with a dearth of target ground truth.
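
The Matthews Correlation Coefficient reported above is the standard multi-class generalization computed from the confusion matrix. The sketch below implements it with toy power-state labels (the class encoding and label values are invented for illustration, not drawn from the reactor datasets).

```python
import numpy as np

def mcc(y_true, y_pred):
    """Multi-class Matthews Correlation Coefficient from the confusion
    matrix: (c*s - t.p) / sqrt((s^2 - p.p)(s^2 - t.t))."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    C = np.zeros((len(classes), len(classes)))
    for t, p in zip(y_true, y_pred):
        C[idx[t], idx[p]] += 1
    t_sum = C.sum(axis=1)    # true occurrences per class
    p_sum = C.sum(axis=0)    # predicted occurrences per class
    s = C.sum()
    cov_tp = C.trace() * s - t_sum @ p_sum
    denom = np.sqrt((s**2 - p_sum @ p_sum) * (s**2 - t_sum @ t_sum))
    return cov_tp / denom if denom else 0.0

# Toy labels, e.g. 0 = shutdown, 1 = low power, 2 = full power:
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0])
y_pred = np.array([0, 0, 1, 2, 2, 2, 1, 0])
print(f"MCC = {mcc(y_true, y_pred):.3f}")
```

Unlike accuracy, MCC stays meaningful when power states are imbalanced, which is why it suits label-starved monitoring data.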