UC Irvine Electronic Theses and Dissertations

Machine Learning of Building Power Load: A Case Study of a LEED Accredited Institutional Building

Abstract

Growing concern over fossil fuel depletion is felt worldwide, particularly by energy consumers, and it is driving demand for a more diverse energy supply. The proton-exchange membrane (PEM) fuel cell is one option for meeting that demand: it can operate essentially pollution-free, making it a clean power source and a potential replacement for fossil fuels in applications that require large amounts of electric energy. Buildings account for roughly 40% of energy use in the US, and PEM fuel cells are a promising clean power source that can help buildings meet environmental standards and manage fluctuating power demand. Designing and optimizing a PEM fuel cell system for a building, however, requires knowing how much electricity the building will draw at a given time. This thesis demonstrates the use of an artificial neural network (ANN) to predict a building's electric power load. We examine the choice of training function for a multi-layer network, the number of training epochs, and the number of hidden layers and neurons, and apply the resulting models as a simulation program for power load prediction. It is found that, first, Bayesian regularization is the most accurate training function, reaching 94.4% accuracy; the Levenberg-Marquardt method performs nearly as well, with slightly lower accuracy, and is studied further because it is the fastest for both ANN training and prediction, whereas gradient descent performs poorly, with some predictions below 50% accuracy. Second, Bayesian regularization remains robust when other parameters, such as the number of neurons and the number of epochs, are varied; however, the mean absolute percentage error (MAPE) increases for all three training functions, to different degrees, when the number of neurons is decreased or the number of epochs is increased.
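
As an illustration of the workflow the abstract describes (train a multi-layer feed-forward network on historical load data, then judge predictions by MAPE), the following is a minimal sketch only, not the thesis implementation: it uses Python with scikit-learn's MLPRegressor, whose "lbfgs" solver stands in for the training functions actually compared in the thesis (Bayesian regularization, Levenberg-Marquardt, gradient descent), and the synthetic features (hour of day, day of week, outdoor temperature) are assumptions made here for illustration.

# Illustrative sketch only: predict building power load with a small MLP and
# report mean absolute percentage error (MAPE). The feature set, the synthetic
# data, and the scikit-learn solver are assumptions, not the thesis setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical hourly records: hour of day, day of week, outdoor temperature.
n = 5000
hour = rng.integers(0, 24, n)
dow = rng.integers(0, 7, n)
temp = 15 + 10 * rng.standard_normal(n)
load = 200 + 80 * np.sin(np.pi * hour / 24) + 5 * (dow < 5) * temp + rng.normal(0, 10, n)

X = np.column_stack([hour, dow, temp])
X_train, X_test, y_train, y_test = train_test_split(X, load, test_size=0.2, random_state=0)

# One hidden layer with 10 neurons; "lbfgs" is a stand-in for the second-order
# training functions compared in the thesis, which scikit-learn does not offer.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)

# MAPE = mean(|actual - predicted| / |actual|) * 100%; accuracy ~ 100% - MAPE.
y_pred = model.predict(X_test)
mape = np.mean(np.abs((y_test - y_pred) / y_test)) * 100
print(f"MAPE: {mape:.1f}%  ->  accuracy ~ {100 - mape:.1f}%")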

This item is under embargo until January 10, 2025.