- Nagy, Zoltan;
- Henze, Gregor;
- Dey, Sourav;
- Arroyo, Javier;
- Helsen, Lieve;
- Zhang, Xiangyu;
- Chen, Bingqing;
- Amasyali, Kadir;
- Kurte, Kuldeep;
- Zamzam, Ahmed;
- Zandi, Helia;
- Drgoňa, Ján;
- Quintana, Matias;
- McCullogh, Steven;
- Park, June Young;
- Li, Han;
- Hong, Tianzhen;
- Brandi, Silvio;
- Pinto, Giuseppe;
- Capozzoli, Alfonso;
- Vrabie, Draguna;
- Bergés, Mario;
- Nweye, Kingsley;
- Marzullo, Thibault;
- Bernstein, Andrey
As buildings account for approximately 40% of global energy consumption and associated greenhouse gas emissions, their role in decarbonizing the power grid is crucial. The increasing integration of variable renewable energy sources introduces uncertainties and unprecedented flexibility, requiring buildings to adapt their energy demand to enhance grid resiliency. Consequently, buildings must transition from passive energy consumers to active grid assets, providing demand flexibility and energy elasticity while maintaining occupant comfort and health. This fundamental shift demands advanced optimal control methods to manage escalating energy demand and avert power outages. Reinforcement learning (RL) has emerged as a promising method to address these challenges. In this paper, we explore ten questions related to the application of RL in buildings, specifically targeting flexible energy management. We consider the growing availability of data, advancements in machine learning algorithms, open-source tools, and the practical deployment aspects associated with software and hardware requirements. Our objective is to deliver a comprehensive introduction to RL, present an overview of existing research and accomplishments, underscore the challenges and opportunities, and propose potential future research directions to expedite the adoption of RL for building energy management.