
LBL Publications

Lawrence Berkeley National Laboratory (Berkeley Lab) has been a leader in science and engineering research for more than 70 years. Located on a 200-acre site in the hills above the Berkeley campus of the University of California, overlooking San Francisco Bay, Berkeley Lab is a U.S. Department of Energy (DOE) National Laboratory managed by the University of California. It has an annual budget of nearly $480 million (FY2002) and employs a staff of about 4,300, including more than a thousand students.

Berkeley Lab conducts unclassified research across a wide range of scientific disciplines with key efforts in fundamental studies of the universe; quantitative biology; nanoscience; new energy systems and environmental solutions; and the use of integrated computing as a tool for discovery. It is organized into 17 scientific divisions and hosts four DOE national user facilities.

Deep Generative Models for Fast Photon Shower Simulation in ATLAS

(2024)

The need for large-scale production of highly accurate simulated event samples for the extensive physics programme of the ATLAS experiment at the Large Hadron Collider motivates the development of new simulation techniques. Building on the recent success of deep learning algorithms, variational autoencoders and generative adversarial networks are investigated for modelling the response of the central region of the ATLAS electromagnetic calorimeter to photons of various energies. The properties of synthesised showers are compared with showers from a full detector simulation using Geant4. Both variational autoencoders and generative adversarial networks are capable of quickly simulating electromagnetic showers with correct total energies and stochasticity, though the modelling of some shower shape distributions requires more refinement. This feasibility study demonstrates the potential of using such algorithms for ATLAS fast calorimeter simulation in the future and shows a possible way to complement current simulation techniques.
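
As a rough illustration of the variational-autoencoder half of this approach, the sketch below shows the moving parts: an encoder mapping a flattened grid of calorimeter cell energies to a Gaussian latent space, a decoder that synthesises showers from latent samples, and the ELBO training loss. The 7x7 cell grid, layer widths, latent size, and the use of PyTorch are illustrative assumptions, not the ATLAS configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_CELLS = 7 * 7   # assumed flattened grid of calorimeter cell energies
LATENT = 10       # assumed latent dimension

class ShowerVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(N_CELLS, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT)
        self.logvar = nn.Linear(128, LATENT)
        self.dec = nn.Sequential(
            nn.Linear(LATENT, 128), nn.ReLU(),
            nn.Linear(128, N_CELLS), nn.Softplus(),  # cell energies are non-negative
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the unit-Gaussian prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# After training, new showers are synthesised by decoding prior samples:
model = ShowerVAE()
with torch.no_grad():
    fake_showers = model.dec(torch.randn(1000, LATENT))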

Artificial Intelligence for the Electron Ion Collider (AI4EIC)

(2024)

The Electron-Ion Collider (EIC), a state-of-the-art facility for studying the strong force, is expected to begin commissioning its first experiments in 2028. This is an opportune time for artificial intelligence (AI) to be included from the start at this facility and in all phases leading up to the experiments. The second annual workshop organized by the AI4EIC working group, which recently took place, centered on exploring all current and prospective application areas of AI for the EIC. The workshop not only benefits the EIC but also offers valuable insights to the newly established ePIC collaboration at the EIC. This paper summarizes the activities and R&D projects covered across the workshop sessions and provides an overview of the goals, approaches, and strategies regarding AI/ML in the EIC community, as well as cutting-edge techniques currently studied in other experiments.

Software Performance of the ATLAS Track Reconstruction for LHC Run 3

(2024)

Charged particle reconstruction in the presence of many simultaneous proton–proton (pp) collisions at the LHC is a challenging task for the ATLAS experiment’s reconstruction software due to the combinatorial complexity. This paper describes the major changes made to adapt the software to promptly reconstruct high-activity collisions, with an average of 50 or more simultaneous pp interactions per bunch crossing (pile-up), using the available computing resources. The performance of the key components of the track reconstruction chain and its dependence on pile-up are evaluated, and the improvement achieved compared to the previous software version is quantified. For events with an average of 60 pp collisions per bunch crossing, the updated track reconstruction is twice as fast as the previous version, without significant reduction in reconstruction efficiency and while reducing the rate of combinatorial fake tracks by more than a factor of two.

The present and future of QCD

(2024)

This White Paper presents an overview of the current status and future perspectives of QCD research, based on community input and the scientific conclusions of the 2022 Hot and Cold QCD Town Meeting. We present the progress made in the last decade toward a deep understanding of both the fundamental structure of sub-atomic matter, the nucleon and the nucleus, in cold QCD, and the hot QCD matter produced in heavy-ion collisions. We identify key questions of QCD research and plausible paths to answering those questions in the near future, thereby defining the priorities of our research over the coming decades.

Defining weather scenarios for simulation-based assessment of thermal resilience of buildings under current and future climates: A case study in Brazil

(2024)

In response to increasingly severe weather conditions, optimization of building performance and investment provides an opportunity to consider the co-benefits of thermal resilience during energy efficiency retrofits. This work assesses the thermal resilience of buildings using building performance simulation, evaluating indoor overheating risk under nine weather scenarios that cover historical (2010s), mid-term future (2050s), and long-term future (2090s) typical meteorological years as well as heat wave years. The analysis is based on resilience profiles that combine six integrated indicators. A case study of a district of 92 buildings in Brazil was conducted, and a combination of strategies to improve thermal resilience was identified. The results underscore the necessity of planning for resilience in the context of climate change: strategies recommended under current conditions might not be ideal in the future, so adaptable designs should be prioritized. Cooling energy consumption could increase by 48% by the 2050s, while excessive overheating could affect 37% of the buildings. Simple passive strategies can significantly reduce heat stress. A comprehensive thermal resilience analysis should ultimately be accompanied by a thorough reflection on the stakeholders’ objectives, available resources, and planning horizon, as well as the risks assumed by not being resilient.
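
One of the integrated indicators such a resilience profile might include is an overheating measure; the sketch below shows a minimal version, degree-hours above a fixed comfort threshold, computed from the hourly indoor temperatures a building performance simulation would produce. The 26 °C threshold and the synthetic temperature series are assumptions for illustration, not the paper's indicator set.

import numpy as np

rng = np.random.default_rng(0)
# Placeholder for 8760 hourly indoor operative temperatures (deg C) that a
# building performance simulation would produce for one weather scenario.
t_indoor = 24.0 + 6.0 * rng.random(8760)

THRESHOLD_C = 26.0  # assumed fixed comfort threshold

def overheating_degree_hours(temps, threshold=THRESHOLD_C):
    """Sum of hourly exceedances above the threshold (deg C * h)."""
    return float(np.sum(np.clip(temps - threshold, 0.0, None)))

print(f"Overheating degree-hours: {overheating_degree_hours(t_indoor):.0f}")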

Energy flexibility quantification of a tropical net-zero office building using physically consistent neural network-based model predictive control

(2024)

Building energy flexibility plays a critical role in demand-side management, reducing utility costs for building owners and supporting sustainable, reliable, and smart grids. Realizing building energy flexibility in tropical regions requires solar photovoltaics and energy storage systems. However, quantifying the energy flexibility of buildings that use such technologies in tropical regions has yet to be explored, and a robust control sequence is needed for this scenario. Hence, this work presents a case study evaluating the building energy flexibility controls and operations of a net-zero energy office building in Singapore. The case study uses a data-driven energy flexibility quantification workflow and employs a novel data-driven model predictive control (MPC) framework based on a physically consistent neural network (PCNN) model to optimize building energy flexibility. To the best of our knowledge, this is the first instance in which a PCNN is applied in a mathematical MPC setting and the stability of the system is formally proven. Three scenarios are evaluated and compared: the default regulated flat tariff, a real-time pricing mechanism, and an on-site battery energy storage system (BESS). Our findings indicate that incorporating real-time pricing into the MPC framework could be more beneficial than the flat-rate approach for leveraging building energy flexibility in control decisions. Moreover, adding a BESS to the on-site PV generation improved building self-sufficiency and PV self-consumption by 17% and 20%, respectively. This integration also addresses model mismatch issues within the MPC framework, ensuring a more reliable local energy supply. Future research can apply the proposed PCNN-MPC framework to different types of data-driven energy flexibility quantification.
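
To make the control structure concrete, here is a minimal receding-horizon MPC loop in the spirit of such a framework: a surrogate model predicts indoor temperature, and a short optimization trades energy cost under a time-varying tariff against a comfort penalty, applying only the first control move. The toy dynamics below stand in for the paper's PCNN, and the horizon, price signal, comfort limit, and power bound are all assumptions.

import torch

HORIZON = 12  # assumed 12-step look-ahead

def dynamics(temp, cooling_kw):
    # Placeholder for a learned model (a PCNN in the paper): a warming
    # drift offset by cooling power.
    return temp + 0.5 - 0.8 * cooling_kw

def mpc_step(temp0, prices):
    u = torch.zeros(HORIZON, requires_grad=True)  # raw control variables
    opt = torch.optim.Adam([u], lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        cost, temp = torch.tensor(0.0), temp0
        for k in range(HORIZON):
            uk = torch.sigmoid(u[k]) * 5.0                     # bound input to 0..5 kW
            temp = dynamics(temp, uk)
            cost = cost + prices[k] * uk                       # energy-cost term
            cost = cost + 10.0 * torch.relu(temp - 26.0) ** 2  # comfort penalty
        cost.backward()
        opt.step()
    return (torch.sigmoid(u[0]) * 5.0).item()  # apply only the first move

prices = torch.rand(HORIZON)  # stand-in for a real-time tariff signal
action = mpc_step(torch.tensor(25.0), prices)
print(f"first control action: {action:.2f} kW")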

Anthropogenic heat from buildings in Los Angeles County: A simulation framework and assessment

(2024)

Anthropogenic heat (AH), i.e., waste heat released from buildings to the ambient environment, increases urban air temperature and contributes to the urban heat island effect, which in turn drives more air-conditioning energy use and higher associated waste heat during summer, forming a positive feedback loop. This study used a bottom-up simulation approach to develop a dataset of annual hourly AH profiles for 1.7 million buildings in Los Angeles (LA) County for the year 2018, aggregated at three spatial resolutions: 450 m, 12 km, and the census tract. Building AH exhibits strong seasonal and diurnal patterns, as well as large spatial variations across the urban areas. Building AH peaks in May, reaching a maximum of 878 W/m² within one of several AH hotspots in the region. Among the three major AH components (surface convection, heat rejection from HVAC systems, and zonal air exchange), surface convection is the largest, accounting for 78% of total building AH across LA County. Higher AH is associated with high building density, a high share of industrial buildings, and older building stock. While AH peaks during the day, the resulting increases in ambient temperature are much larger at night. During the July 2018 heatwave in LA County, building AH (excluding the surface component) produced a daily maximum ambient temperature increase of up to 0.6 °C and a daily minimum ambient temperature increase of up to 2.9 °C. Policy makers should consider reducing summer building AH when developing mitigation measures that help cities transition to clean energy while improving heat resilience.
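
The aggregation step of such a bottom-up workflow is straightforward to sketch: per-building hourly waste-heat values are binned onto a regular grid and normalised by cell area to give a flux. The column names, the toy random data, and the use of pandas are assumptions for illustration; only the 450 m cell size comes from the study.

import numpy as np
import pandas as pd

CELL_M = 450.0  # grid resolution from the study, in metres
rng = np.random.default_rng(0)

# Toy stand-in for simulated building AH: one row per building and hour.
df = pd.DataFrame({
    "x_m":  rng.uniform(0, 4500, 1000),   # assumed projected coordinates
    "y_m":  rng.uniform(0, 4500, 1000),
    "hour": rng.integers(0, 24, 1000),
    "ah_w": rng.uniform(1e3, 5e4, 1000),  # building waste heat, W
})

# Assign each building to a grid cell, then sum heat per cell and hour
# and divide by the cell area to obtain a flux.
df["ix"] = (df["x_m"] // CELL_M).astype(int)
df["iy"] = (df["y_m"] // CELL_M).astype(int)
flux = (
    df.groupby(["ix", "iy", "hour"])["ah_w"].sum()
      .div(CELL_M ** 2)  # W per cell -> W/m²
      .rename("ah_w_per_m2")
      .reset_index()
)
print(flux.head())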

Estimating geographic variation of infection fatality ratios during epidemics.

(2024)

OBJECTIVES: We aim to estimate geographic variability in total numbers of infections and infection fatality ratios (IFR; the number of deaths caused by an infection per 1,000 infected people) when the availability and quality of data on disease burden are limited during an epidemic. METHODS: We develop a noncentral hypergeometric framework that accounts for differential probabilities of positive tests and reflects the fact that symptomatic people are more likely to seek testing. We demonstrate the robustness, accuracy, and precision of this framework, and apply it to the United States (U.S.) COVID-19 pandemic to estimate county-level SARS-CoV-2 IFRs. RESULTS: The estimators for the numbers of infections and IFRs showed high accuracy and precision; for instance, when applied to simulated validation data sets, the Pearson correlation coefficients across counties between estimator means and true values were 0.996 and 0.928, respectively, and the estimators were strongly robust to model misspecification. Applying the county-level estimators to real (unsimulated) COVID-19 data from across the U.S. spanning April 1 to September 30, 2020, we found that IFRs varied from 0 to 44.69, with a standard deviation of 3.55 and a median of 2.14. CONCLUSIONS: The proposed estimation framework can be used to identify geographic variation in IFRs across settings.
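
The distributional core of this kind of framework can be sketched in a few lines: if infected people are more likely to be tested, the number of positives among those tested follows a Fisher noncentral hypergeometric law, and the infection count can be estimated by maximum likelihood and converted into an IFR. The county size, test count, testing odds, and observed counts below are invented for illustration, and scipy's nchypergeom_fisher is an assumed stand-in for the paper's own estimator.

import numpy as np
from scipy.stats import nchypergeom_fisher

POP, TESTS, ODDS = 100_000, 8_000, 5.0  # assumed county population, tests done,
                                        # and odds that an infected person is
                                        # tested relative to an uninfected one

def infections_mle(positives):
    """Grid-search MLE of the true infection count given observed positives."""
    # Feasible counts: at least `positives` infections, and enough
    # uninfected people to account for the negative tests.
    grid = np.arange(positives, POP - TESTS + positives + 1)
    loglik = nchypergeom_fisher.logpmf(positives, POP, grid, TESTS, ODDS)
    return int(grid[np.argmax(loglik)])

positives, deaths = 1_200, 45   # assumed observed counts for one county
n_hat = infections_mle(positives)
ifr = 1_000.0 * deaths / n_hat  # deaths per 1,000 infections
print(f"estimated infections: {n_hat}, IFR: {ifr:.2f} per 1,000")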

Cross-sectoral assessment of CO2 capture from U.S. industrial flue gases for fuels and chemicals manufacture

(2024)

Although CO2 impacts the environment negatively, it can be a valuable resource due to its carbon content. U.S. industry emits over 825 Mt of CO2 annually, and this is expected to increase in the future. This article analyzes 27 different technology combinations for capturing and using CO2 as industrial feedstock, including the production of synthetic methane, methanol, and Fischer-Tropsch fuels. The study also estimates and compares the energy requirements for capturing and converting CO2 from 16 different industrial sources, as well as the energy requirements for hydrogen production with state-of-the-art and emerging electrolyzer technologies. Additionally, the study develops a combined scenario that outlines an inclusive approach to achieving net-zero CO2 emissions in the industrial sector, incorporating multiple decarbonization measures. The results suggest that using CO2 for methane production could replace all natural gas demands considered in the base-case and combined scenarios. However, CO2 utilization-based Fischer-Tropsch products alone are not sufficient to replace naphtha feedstock and transportation fuels and thus achieve complete decarbonization in the studied end uses. CO2 utilization-based methanol could substitute for several times the current U.S. methanol production and meet the current global demand for methanol. Moreover, the study conducts an economic analysis to estimate the costs of CO2 utilization, which vary across industrial sectors and depend on the technologies employed. Overall, this study provides valuable information for policymakers and industry stakeholders striving to develop effective strategies to decarbonize the industrial sector.
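
To give a sense of the scale such an assessment has to grapple with, here is a back-of-envelope calculation of the hydrogen and electrolysis electricity needed to methanate captured CO2 via the Sabatier reaction (CO2 + 4 H2 -> CH4 + 2 H2O). The 50 kWh/kg electrolyzer figure is an assumed round number for illustration, not a value from the study; only the 825 Mt/yr emissions figure comes from the abstract.

# Stoichiometry of the Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O.
M_CO2, M_H2 = 44.01, 2.016  # molar masses, g/mol
KWH_PER_KG_H2 = 50.0        # assumed electrolyzer energy use

def methanation_inputs(co2_tonnes):
    """Hydrogen mass (t) and electrolysis electricity (TWh) needed to
    convert a given mass of CO2 to methane."""
    h2_tonnes = co2_tonnes * (4 * M_H2) / M_CO2    # stoichiometric H2
    twh = h2_tonnes * 1_000 * KWH_PER_KG_H2 / 1e9  # kWh -> TWh
    return h2_tonnes, twh

h2, twh = methanation_inputs(825e6)  # 825 Mt CO2/yr, from the abstract
print(f"H2 needed: {h2 / 1e6:.0f} Mt/yr, electrolysis load: {twh:.0f} TWh/yr")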