
LBL Publications

Lawrence Berkeley National Laboratory (Berkeley Lab) has been a leader in science and engineering research for more than 70 years. Located on a 200-acre site in the hills above the Berkeley campus of the University of California, overlooking the San Francisco Bay, Berkeley Lab is a U.S. Department of Energy (DOE) National Laboratory managed by the University of California. It has an annual budget of nearly $480 million (FY2002) and employs a staff of about 4,300, including more than a thousand students.

Berkeley Lab conducts unclassified research across a wide range of scientific disciplines with key efforts in fundamental studies of the universe; quantitative biology; nanoscience; new energy systems and environmental solutions; and the use of integrated computing as a tool for discovery. It is organized into 17 scientific divisions and hosts four DOE national user facilities.

Deep Generative Models for Fast Photon Shower Simulation in ATLAS

(2024)

The need for large-scale production of highly accurate simulated event samples for the extensive physics programme of the ATLAS experiment at the Large Hadron Collider motivates the development of new simulation techniques. Building on the recent success of deep learning algorithms, variational autoencoders and generative adversarial networks are investigated for modelling the response of the central region of the ATLAS electromagnetic calorimeter to photons of various energies. The properties of synthesised showers are compared with showers from a full detector simulation using Geant4. Both variational autoencoders and generative adversarial networks are capable of quickly simulating electromagnetic showers with correct total energies and stochasticity, though the modelling of some shower shape distributions requires more refinement. This feasibility study demonstrates the potential of using such algorithms for ATLAS fast calorimeter simulation in the future and shows a possible way to complement current simulation techniques.
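To make the generative setup concrete, the sketch below shows a toy conditional variational autoencoder in PyTorch that maps shower images on a hypothetical 10×10 energy grid, conditioned on photon energy, to a latent space and back. The grid size, layer widths, and training objective are illustrative assumptions, not the ATLAS architecture.

```python
# Minimal conditional VAE sketch for calorimeter showers (illustrative only;
# the 10x10 grid, layer sizes, and loss are assumptions, not the ATLAS model).
import torch
import torch.nn as nn
import torch.nn.functional as F

GRID = 10 * 10   # hypothetical flattened shower image
LATENT = 8

class ShowerVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: shower cells + photon energy -> latent Gaussian parameters
        self.enc = nn.Linear(GRID + 1, 64)
        self.mu = nn.Linear(64, LATENT)
        self.logvar = nn.Linear(64, LATENT)
        # Decoder: latent sample + photon energy -> per-cell energy fractions
        self.dec = nn.Sequential(nn.Linear(LATENT + 1, 64), nn.ReLU(),
                                 nn.Linear(64, GRID))

    def forward(self, x, e):
        h = F.relu(self.enc(torch.cat([x, e], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # Softmax keeps cell outputs positive and summing to one, so the
        # correct total energy is restored by scaling with e.
        frac = torch.softmax(self.dec(torch.cat([z, e], dim=1)), dim=1)
        return frac, mu, logvar

def loss_fn(frac, x, e, mu, logvar):
    recon = F.mse_loss(frac * e, x, reduction="sum")  # energy-weighted reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

# Toy usage: random "showers" standing in for Geant4 training samples.
model = ShowerVAE()
x = torch.rand(32, GRID)               # fake cell energies
e = x.sum(dim=1, keepdim=True)         # total energy as the condition
frac, mu, logvar = model(x, e)
print(loss_fn(frac, x, e, mu, logvar).item())
```

Once trained, sampling is a single decoder pass per shower, which is what makes such models attractive as a fast alternative to full Geant4 simulation.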

Software Performance of the ATLAS Track Reconstruction for LHC Run 3

(2024)

Charged particle reconstruction in the presence of many simultaneous proton–proton (pp) collisions at the LHC is a challenging task for the ATLAS experiment’s reconstruction software due to the combinatorial complexity. This paper describes the major changes made to adapt the software to reconstruct high-activity collisions with an average of 50 or more simultaneous pp interactions per bunch crossing (pile-up) promptly using the available computing resources. The performance of the key components of the track reconstruction chain and its dependence on pile-up are evaluated, and the improvement achieved compared to the previous software version is quantified. For events with an average of 60 pp collisions per bunch crossing, the updated track reconstruction is twice as fast as the previous version, without significant reduction in reconstruction efficiency and while reducing the rate of combinatorial fake tracks by more than a factor of two.

Artificial Intelligence for the Electron Ion Collider (AI4EIC)

(2024)

The Electron-Ion Collider (EIC), a state-of-the-art facility for studying the strong force, is expected to begin commissioning its first experiments in 2028. This is an opportune time for artificial intelligence (AI) to be included from the start at this facility and in all phases that lead up to the experiments. The second annual workshop organized by the AI4EIC working group, which recently took place, centered on exploring all current and prospective application areas of AI for the EIC. The workshop not only benefits the EIC but also provides valuable insights for the newly established ePIC collaboration at the EIC. This paper summarizes the different activities and R&D projects covered across the sessions of the workshop and provides an overview of the goals, approaches, and strategies regarding AI/ML in the EIC community, as well as cutting-edge techniques currently studied in other experiments.


A high order cut-cell method for solving the shallow-shelf equations

(2024)

In this paper we present a novel method for solving the shallow-shelf equations in the presence of grounding lines. The shallow-shelf equations are a two-dimensional system of nonlinear elliptic PDEs with variable coefficients that are discontinuous across the grounding line, which we treat as a sharp interface between grounded and floating ice. The grounding line is “reconstructed” from ice thickness and basal topography data to provide necessary geometric information for our cut-cell, finite volume discretization. Our discretization enforces jump conditions across the grounding line and achieves high-order accuracy using stencils constructed with a weighted least-squares method. We demonstrate second- and fourth-order convergence of the velocity field, driving stress, and reconstructed geometric information.
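For orientation, a commonly used form of the shallow-shelf (SSA) momentum balance that such a discretization targets is sketched below; the notation is ours, not necessarily that of the paper.

```latex
% Standard shallow-shelf momentum balance (our notation, for orientation only):
% (u, v): depth-averaged velocity; H: ice thickness; s: surface elevation;
% \bar{\nu}: effective viscosity; \beta^2: basal friction coefficient.
\[
\begin{aligned}
\partial_x\bigl(2\bar{\nu}H(2\partial_x u + \partial_y v)\bigr)
  + \partial_y\bigl(\bar{\nu}H(\partial_y u + \partial_x v)\bigr)
  - \beta^2 u &= \rho g H\,\partial_x s,\\
\partial_y\bigl(2\bar{\nu}H(2\partial_y v + \partial_x u)\bigr)
  + \partial_x\bigl(\bar{\nu}H(\partial_y u + \partial_x v)\bigr)
  - \beta^2 v &= \rho g H\,\partial_y s.
\end{aligned}
\]
```

The friction term \(\beta^2\) is nonzero only on grounded ice and vanishes on the floating shelf, which produces exactly the coefficient discontinuity across the grounding line that the cut-cell interface treatment resolves.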

Integration of genome-scale metabolic model with biorefinery process model reveals market-competitive carbon-negative sustainable aviation fuel utilizing microbial cell mass lipids and biogenic CO2

(2024)

Producing scalable, economically viable, low-carbon biofuels or biochemicals hinges on more efficient bioconversion processes. While microbial conversion can offer robust solutions, the native microbial growth process often redirects a large fraction of carbon to CO2 and cell mass. By integrating genome-scale metabolic models with techno-economic and life-cycle assessment models, this study analyzes the effects of converting cell mass lipids to hydrocarbon fuels and CO2 to methanol on the facility’s costs and life-cycle carbon footprint. Results show that upgrading microbial lipids, or both microbial lipids and CO2, using renewable hydrogen produces carbon-negative bisabolene. Additionally, on-site electrolytic hydrogen production offers a supply of pure oxygen to use in place of air for bioconversion and fuel combustion in the boiler. To reach cost parity with conventional jet fuel, renewable hydrogen needs to be produced at less than $2.2 to $3.1/kg, with a bisabolene yield of 80% of the theoretical yield, along with cell mass and CO2 yields of 22 wt% and 54 wt%, respectively. The economic combination of cell mass, CO2, and bisabolene yields demonstrated in this study provides practical insights for prioritizing research, selecting suitable hosts, and determining necessary engineered production levels.
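As a rough illustration of the carbon-routing argument, the sketch below tallies how much feed mass ends up in salable products when cell-mass lipids and biogenic CO2 are upgraded, using the 22 wt% cell mass and 54 wt% CO2 yields quoted above. The bisabolene fraction, lipid content of cell mass, and CO2 capture efficiency are hypothetical placeholders, not values from the study.

```python
# Illustrative product routing for the integrated biorefinery concept.
# The 22 wt% cell mass and 54 wt% CO2 yields come from the abstract; the
# bisabolene fraction, lipid fraction, and capture efficiency are made up.
def mass_to_products(bisabolene_frac, cellmass_frac=0.22, co2_frac=0.54,
                     lipid_frac_of_cellmass=0.3, co2_capture_eff=0.9):
    """Return cumulative fractions of feed mass routed to products."""
    base = bisabolene_frac                            # primary product
    lipids = cellmass_frac * lipid_frac_of_cellmass   # lipids -> hydrocarbon fuel
    methanol = co2_frac * co2_capture_eff             # CO2 + renewable H2 -> methanol
    return base, base + lipids, base + lipids + methanol

only, plus_lipids, plus_co2 = mass_to_products(bisabolene_frac=0.24)
print(f"bisabolene only:        {only:.2f}")
print(f"+ cell-mass lipids:     {plus_lipids:.2f}")
print(f"+ CO2-derived methanol: {plus_co2:.2f}")
```

The point of the exercise is qualitative: each upgrading step recovers carbon that the native growth process would otherwise discard, which is what drives the facility toward a carbon-negative footprint.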

The present and future of QCD

(2024)

This White Paper presents an overview of the current status and future perspective of QCD research, based on the community inputs and scientific conclusions from the 2022 Hot and Cold QCD Town Meeting. We present the progress made in the last decade toward a deep understanding of both the fundamental structure of nucleons and nuclei in cold QCD and of hot QCD matter in heavy-ion collisions. We identify key questions of QCD research and plausible paths to obtaining answers to those questions in the near future, hence defining priorities of our research over the coming decades.

Measurement of the Z boson invisible width at √s = 13 TeV with the ATLAS detector

(2024)

A measurement of the invisible width of the Z boson using events with jets and missing transverse momentum is presented using 37 fb⁻¹ of 13 TeV proton–proton data collected by the ATLAS detector in 2015 and 2016. The ratio of Z→inv to Z→ℓℓ events, where inv refers to non-detected particles and ℓ is either an electron or a muon, is measured and corrected for detector effects. Events with at least one energetic central jet with pT ≥ 110 GeV are selected for both the Z→inv and Z→ℓℓ final states to obtain a similar phase space in the ratio. The invisible width is measured to be 506 ± 2 (stat.) ± 12 (syst.) MeV and is the single most precise recoil-based measurement. The result is in agreement with the most precise determination from LEP and the Standard Model prediction based on three neutrino generations.
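The measurement principle reduces to a ratio method: the corrected event ratio is combined with the precisely known leptonic width to extract the invisible width. A schematic form, in our notation rather than the paper's exact definitions, is:

```latex
% Schematic ratio method (our notation; see the paper for exact definitions):
\[
\Gamma(Z \to \mathrm{inv}) \;=\; R \,\Gamma(Z \to \ell\ell),
\qquad
R \;=\; \frac{N(Z \to \mathrm{inv})}{N(Z \to \ell\ell)}\bigg|_{\text{corrected}}.
\]
```

Because many detector and theory uncertainties affect the numerator and denominator similarly in the matched phase space, they largely cancel in the ratio, which is what enables the quoted precision.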


Defining weather scenarios for simulation-based assessment of thermal resilience of buildings under current and future climates: A case study in Brazil

(2024)

In response to increasingly severe weather conditions, optimization of building performance and investment provides an opportunity to consider co-benefits of thermal resilience during energy efficiency retrofits. This work assesses the thermal resilience of buildings using building performance simulation to evaluate indoor overheating risk under nine weather scenarios, considering historical (2010s), mid-term future (2050s), and long-term future (2090s) typical meteorological years and heat wave years. The analysis is based on resilience profiles that combine six integrated indicators. A case study of a district of 92 buildings in Brazil was conducted, and a combination of strategies to improve thermal resilience was identified. The results underline the necessity of planning for resilience in the context of climate change: strategies recommended under current conditions might not be ideal in the future, so an adaptable design should be prioritized. Cooling energy consumption could increase by 48% by the 2050s, while excessive overheating issues could affect 37% of the buildings. Simple passive strategies can significantly reduce heat stress. A comprehensive thermal resilience analysis should ultimately be accompanied by a thorough reflection on the stakeholders’ objectives, available resources, and planning horizon, as well as the risks assumed by not being resilient.


Energy flexibility quantification of a tropical net-zero office building using physically consistent neural network-based model predictive control

(2024)

Building energy flexibility plays a critical role in demand-side management, reducing utility costs for building owners and supporting sustainable, reliable, and smart grids. Realizing building energy flexibility in tropical regions requires solar photovoltaics and energy storage systems. However, quantifying the energy flexibility of buildings utilizing such technologies in tropical regions has yet to be explored, and a robust control sequence is needed for this scenario. Hence, this work presents a case study evaluating the building energy flexibility controls and operations of a net-zero energy office building in Singapore. The case study utilizes a data-driven energy flexibility quantification workflow and employs a novel data-driven model predictive control (MPC) framework based on a physically consistent neural network (PCNN) model to optimize building energy flexibility. To the best of our knowledge, this is the first instance in which a PCNN is applied in a mathematical MPC setting, and the stability of the system is formally proved. Three scenarios are evaluated and compared: the default regulated flat tariff, a real-time pricing mechanism, and an on-site battery energy storage system (BESS). Our findings indicate that incorporating real-time pricing into the MPC framework could leverage building energy flexibility for control decisions more effectively than the flat-rate approach. Moreover, adding a BESS to the on-site PV generation improved the building’s self-sufficiency and PV self-consumption by 17% and 20%, respectively. This integration also addresses model mismatch issues within the MPC framework, thus ensuring a more reliable local energy supply. Future research can leverage the proposed PCNN-MPC framework for different data-driven energy flexibility quantification types.
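A minimal sketch of the receding-horizon idea behind such an MPC framework is shown below. A plain affine thermal model stands in for the PCNN, and all prices, comfort bounds, and dynamics coefficients are made-up placeholders, not values from the case study.

```python
# Toy receding-horizon MPC over cooling power (illustrative; an affine model
# stands in for the PCNN, and prices/bounds/coefficients are placeholders).
import numpy as np
from scipy.optimize import minimize

H = 12                       # horizon steps (e.g. hours)
price = np.full(H, 0.20)     # flat tariff [$/kWh]; swap in real-time prices
price[6:9] = 0.45            # a made-up peak-price window
T0, T_out = 26.0, 32.0       # initial indoor / outdoor temperature [C]
a, b = 0.9, 0.5              # placeholder dynamics coefficients

def rollout(u):
    """Affine stand-in for the PCNN: T[k+1] = a*T[k] + (1-a)*T_out - b*u[k]."""
    T, traj = T0, []
    for uk in u:
        T = a * T + (1 - a) * T_out - b * uk
        traj.append(T)
    return np.array(traj)

def cost(u):
    T = rollout(u)
    comfort = np.sum(np.maximum(T - 26.0, 0.0) ** 2)  # penalize overheating
    return price @ u + 10.0 * comfort                 # energy cost + comfort penalty

res = minimize(cost, x0=np.ones(H), bounds=[(0.0, 5.0)] * H)
print("optimal cooling schedule [kW]:", np.round(res.x, 2))
```

Under a time-varying tariff the optimizer naturally shifts cooling away from the peak-price window, which is the flexibility mechanism the abstract describes; the PCNN replaces the affine stand-in with a learned model whose predictions remain physically consistent.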


Anthropogenic heat from buildings in Los Angeles County: A simulation framework and assessment

(2024)

Anthropogenic heat (AH), i.e., waste heat from buildings to the ambient environment, increases urban air temperature and contributes to the urban heat island effect, which leads to more air-conditioning energy use and higher associated waste heat during summer, forming a positive feedback loop. This study used a bottom-up simulation approach to develop a dataset of annual hourly AH profiles for 1.7 million buildings in Los Angeles (LA) County for the year 2018, aggregated at three spatial resolutions: 450 m, 12 km, and the census tract. Building AH exhibits strong seasonal and diurnal patterns, as well as large spatial variations across the urban areas. Building AH peaks in May and reaches a maximum of 878 W/m² within one of several AH hotspots in the region. Among the three major AH components (surface convection, heat rejection from HVAC systems, and zonal air exchange), surface convection is the largest, accounting for 78% of the total building AH across LA County. Higher AH is attributed to large building density, a high percentage of industrial buildings, and older building stock. While AH peaks during the day, the resulting ambient temperature increases are much larger during the night. During the July 2018 heatwave in LA County, building AH (excluding the surface component) leads to a daily maximum ambient temperature increase of up to 0.6 °C and a daily minimum ambient temperature increase of up to 2.9 °C. Policy makers should consider reducing summer building AH when developing mitigation measures that help cities transition to clean energy while improving heat resilience.
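The bottom-up aggregation described above can be sketched in a few lines: per-building hourly AH components are summed and binned onto a spatial grid. The array shapes, component magnitudes, and cell assignments below are synthetic placeholders, not the study's data.

```python
# Bottom-up aggregation sketch for building anthropogenic heat (AH).
# Shapes and values are synthetic; real inputs would come from per-building
# simulations of the three AH components named in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n_buildings, n_hours = 1000, 24   # one illustrative day; the dataset is annual

# Hourly AH components per building [W]: surface convection, HVAC heat
# rejection, and zonal air exchange.
convection = rng.uniform(0, 50, (n_buildings, n_hours))
hvac = rng.uniform(0, 10, (n_buildings, n_hours))
zonal = rng.uniform(0, 5, (n_buildings, n_hours))
total_ah = convection + hvac + zonal

# Assign each building to a grid cell (e.g. a 450 m tile) and aggregate.
cell_id = rng.integers(0, 100, n_buildings)
grid_ah = np.zeros((100, n_hours))
np.add.at(grid_ah, cell_id, total_ah)     # sum building AH into cells

share = convection.sum() / total_ah.sum()
print(f"surface convection share of total AH: {share:.0%}")
```

Scaled to 1.7 million buildings over 8,760 hours and three output resolutions, the same accumulate-by-cell pattern yields the hourly AH maps the study reports.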