Machine learning in passive ocean acoustics for localizing and characterizing events
Passive acoustics, the recording of pressure signals from uncontrolled sound sources, is a powerful tool for monitoring man-made and natural sounds in the ocean. Passive acoustics can be used to detect changes in physical processes within the environment, study the behavior and movement of marine animals, or observe the presence and motion of ocean vessels and vehicles. Advances in ocean instrumentation and data storage have improved the availability and quality of ambient noise recordings, but there is an ongoing effort to improve signal processing algorithms for extracting useful information from these recordings. This dissertation uses machine learning as a framework for addressing problems in underwater passive acoustic signal processing. Statistical learning has been used for decades, but machine learning has recently gained popularity owing to the exponential growth of available data and to efficient GPU computation that can capitalize on these data. The chapters of this dissertation cover two types of problems: characterization and classification of ambient noise, and localization of passive acoustic sources.
First, ambient noise in the eastern Arctic was studied from April to September 2013 using a vertical hydrophone array as it drifted from near the North Pole to north of Fram Strait. Median power spectral estimates and empirical probability density functions (PDFs) along the array transit show a change in the ambient noise levels corresponding to seismic survey airgun occurrence and received level at low frequencies and to transient ice noises at high frequencies. Noise contributors were manually identified and included broadband and tonal ice noises, bowhead whale calling, seismic airgun surveys, and earthquake $T$ phases. The bowhead whale or whales detected were believed to belong to the endangered Spitsbergen population and were recorded when the array was as far north as 86$\degree$24'N.

Then, ambient noise recorded on a Hawaiian coral reef was analyzed for classification of whale song and fish calls. Using automatically detected acoustic events, two clustering processes were proposed: clustering handpicked acoustic metrics using unsupervised methods, and deep embedded clustering (DEC) to learn latent features and clusters from fixed-length power spectrograms. When compared on simulated signals of fish calls and whale song, the unsupervised clustering methods were confounded by overlap in the handpicked features, while DEC identified clusters with fish calls, whale song, and events containing simultaneous fish calls and whale song. Both clustering approaches were applied to recordings from directional autonomous seafloor acoustic recorder (DASAR) sensors on a Hawaiian coral reef in February 2020.

Next, source localization in ocean acoustics was posed as a machine learning problem in which data-driven methods learned source ranges or directions of arrival directly from observed acoustic data.
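The noise characterization described above can be illustrated with a minimal sketch: compute per-segment Welch power spectral densities, take the median spectrum across segments (robust to transient ice noises), and form an empirical PDF of spectral level at one frequency. The sampling rate, segment length, FFT size, and synthetic white noise here are illustrative assumptions, not the dissertation's processing parameters.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1000                              # hypothetical sampling rate, Hz
x = rng.standard_normal(600 * fs)      # 10 minutes of synthetic ambient noise

# Split the record into 1-minute segments and compute a Welch PSD for each
seg_len = 60 * fs
psds = []
for start in range(0, len(x), seg_len):
    f, pxx = welch(x[start:start + seg_len], fs=fs, nperseg=4096)
    psds.append(10 * np.log10(pxx))    # spectral level in dB (arbitrary reference)
psds = np.array(psds)                  # shape: (n_segments, n_frequencies)

# Median spectral estimate across segments, insensitive to loud transients
median_psd = np.median(psds, axis=0)

# Empirical PDF of spectral level at the frequency bin nearest 100 Hz
levels = psds[:, np.argmin(np.abs(f - 100))]
hist, edges = np.histogram(levels, bins=20, density=True)
```

In practice the per-segment spectra would also be indexed by time and position along the array transit, so that the median spectra and level PDFs reveal when and where the low-frequency airgun and high-frequency ice-noise contributions change.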
The pressure received by a vertical linear array was preprocessed by constructing a normalized sample covariance matrix (SCM) and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The FNN, SVM, RF, and conventional matched-field processing were applied to recordings from ships in the Noise09 experiment to demonstrate the potential of machine learning for underwater source localization. The source localization problem was extended by examining the relationship between conventional beamforming and linear supervised learning. Then, a nonlinear deep FNN was developed for direction-of-arrival (DOA) estimation, both for two-source DOA and for $K$-source DOA, where $K$ is unknown. With multiple snapshots, the $K$-source FNN achieved resolution and accuracy similar to Multiple Signal Classification (MUSIC) and sparse Bayesian learning (SBL) for an unknown number of sources. The practicality of the deep FNN model was demonstrated on ship noise from the SWellEx-96 experiment.
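One plausible form of the SCM preprocessing step can be sketched as follows: normalize each complex pressure snapshot to unit norm (suppressing dependence on unknown source level), average the outer products over snapshots, and flatten the Hermitian matrix into a real feature vector. The array size, snapshot count, simulated plane-wave data, and upper-triangle feature layout are assumptions for illustration, not the dissertation's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_snapshots = 16, 50        # hypothetical array and snapshot count

# Simulated narrowband complex pressure snapshots: one plane wave plus noise
theta = np.deg2rad(20.0)
steering = np.exp(-1j * np.pi * np.arange(n_sensors) * np.sin(theta))
snapshots = (steering[:, None] * rng.standard_normal(n_snapshots)
             + 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
                      + 1j * rng.standard_normal((n_sensors, n_snapshots))))

# Normalize each snapshot to unit norm to remove source-level dependence
snapshots = snapshots / np.linalg.norm(snapshots, axis=0, keepdims=True)

# Sample covariance matrix averaged over snapshots (Hermitian, unit trace)
scm = snapshots @ snapshots.conj().T / n_snapshots

# Machine learning input: real and imaginary parts of the upper triangle
iu = np.triu_indices(n_sensors)
features = np.concatenate([scm[iu].real, scm[iu].imag])
```

A feature vector like `features` can then be fed to an FNN, SVM, or RF trained to map SCMs to source range or DOA, with labels supplied by simulation or by sources at known positions.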