eScholarship
Open Access Publications from the University of California

UC Merced Electronic Theses and Dissertations
DEEP LEARNING METHODS FOR ENGINEERING APPLICATIONS

Abstract

Neural networks and deep learning are changing the way that engineering is practiced. New and more efficient deep learning models are having a large impact in many engineering fields. Common engineering applications of deep learning include prognostics and health assessment of mechanical systems, optimal control design, and surrogate modeling for computational fluid dynamics.

However, for a deep learning application to be successful, a number of key decisions must be made. These include the choice of model architecture or topology, hyper-parameter tuning, and a suitable data pre-processing method. Efficiently choosing these key components is a time-consuming task that usually entails a staggering number of possible alternatives. This thesis aims to develop methods that make this process less cumbersome.

In this thesis, we propose a framework for efficiently estimating the remaining useful life (RUL) of mechanical systems. The framework focuses mainly on the data pre-processing stage of a machine learning pipeline. Using evolutionary algorithms and strided time windows, the presented framework can help process the data so that even simple deep learning models make good predictions on it. The framework is tested on the C-MAPSS dataset, which consists of data recorded from the sensors of simulated jet engines. The results obtained are competitive with recent methods applied to the same dataset.
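As a rough illustration of the strided time-window pre-processing described above, the sketch below slices one engine unit's multivariate sensor history into fixed-length, overlapping windows and labels each window with the RUL at its final cycle. The function name `strided_windows`, the window length, and the stride are illustrative assumptions, not the tuned settings or code from the thesis.

```python
import numpy as np

def strided_windows(sensor_data, rul, window_len=30, stride=1):
    """Slice one unit's sensor history into overlapping windows.

    sensor_data : (T, n_sensors) array of one engine unit's readings
    rul         : (T,) array of remaining-useful-life labels, one per cycle
    window_len and stride are illustrative values, not the thesis's settings.
    """
    X, y = [], []
    for start in range(0, len(sensor_data) - window_len + 1, stride):
        end = start + window_len
        X.append(sensor_data[start:end])
        y.append(rul[end - 1])  # label each window with the RUL at its last cycle
    return np.stack(X), np.array(y)

# Toy example: one unit with 100 cycles and 14 sensors
unit = np.random.rand(100, 14)
labels = np.arange(100)[::-1]  # RUL decreasing toward failure
X, y = strided_windows(unit, labels, window_len=30, stride=2)
print(X.shape, y.shape)        # (36, 30, 14) (36,)
```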

Furthermore, an algorithm for efficiently selecting a neural network model for a specific problem (classification or regression) is also developed. The algorithm, named Automatic Model Selection (AMS), is a modified micro-genetic algorithm that automatically and efficiently finds the most suitable neural network model for a given dataset. The major contributions of this development are a simple list-based encoding of neural networks as genotypes in an evolutionary algorithm, new crossover and mutation operators, a fitness function that considers both the accuracy of the model and its complexity, and a method to measure the similarity between two neural networks. AMS is evaluated on two different datasets. By comparing models obtained with AMS to state-of-the-art models for each dataset, we show that AMS can automatically find efficient neural network models. Furthermore, AMS is computationally efficient and can make use of distributed computing paradigms to further boost its performance.
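The following sketch shows what a list-based genotype encoding and an accuracy-versus-complexity fitness of the kind described above might look like. The layer widths, the one-point crossover, the mutation rate, and the penalty weight `alpha` are all hypothetical choices for illustration, not the operators or settings defined by AMS.

```python
import random

WIDTHS = (16, 32, 64, 128)  # hypothetical candidate layer widths

def random_genotype(max_layers=4):
    """A genotype is simply a list of layer widths, e.g. [64, 32, 16]."""
    return [random.choice(WIDTHS) for _ in range(random.randint(1, max_layers))]

def crossover(a, b):
    """One-point crossover on the layer lists (illustrative, not AMS's exact operator)."""
    cut_a = random.randint(1, len(a))
    cut_b = random.randint(1, len(b))
    return a[:cut_a] + b[cut_b:]

def mutate(genotype, p=0.2):
    """Resample each layer width with probability p."""
    return [random.choice(WIDTHS) if random.random() < p else w for w in genotype]

def fitness(accuracy, n_params, alpha=1e-6):
    """Reward accuracy while penalising complexity; alpha is a made-up trade-off weight."""
    return accuracy - alpha * n_params

parent_a, parent_b = random_genotype(), random_genotype()
child = mutate(crossover(parent_a, parent_b))
print(parent_a, parent_b, child)
```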
