Deep learning for all:  managing and analyzing underwater and remote sensing imagery on the web using BisQue
eScholarship
Open Access Publications from the University of California

UC Santa Barbara

UC Santa Barbara Previously Published Works


Abstract

Logistical and financial constraints are inherent in underwater operations in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision, such as convolutional neural networks (CNNs), are promising for automated classification, detection, and segmentation tasks, but they are typically computationally expensive and require extensive training on large datasets. Harnessing distributed computation, large-scale storage, and human annotations of diverse marine datasets is therefore crucial for applying these methods effectively.

BisQue is a cloud-based system for the management, annotation, visualization, analysis, and data mining of complex multi-dimensional underwater and remote sensing imagery and associated data. It is designed to hide the complexity of distributed storage, large computational clusters, diverse data formats, and heterogeneous computational environments behind a user-friendly web-based interface. BisQue is built around the idea of flexible, hierarchical annotations defined by the user. These textual and graphical annotations can describe captured attributes and the relationships between data elements, and are expressive enough to describe cells in fluorescent 4D images, fish species in underwater videos, and kelp beds in aerial imagery.

We are developing a deep learning service for automated image classification. The service allows training various models with a single click and validating their performance, and provides several modes of classification: point classification for percent cover, image partitioning for substrate description, and object detection for counting organisms. Semantic segmentation can be used to classify every pixel in an image, allowing estimation of organism size and species interactions.
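The hierarchical textual and graphical annotations described above can be sketched as an XML tree; the tag and attribute names below are illustrative, not the exact BisQue annotation schema.

```python
# A minimal sketch of a hierarchical, user-defined annotation tree in the
# spirit of BisQue's textual and graphical annotations. All element names
# here ("tag", "point", attribute names) are assumptions for illustration.
import xml.etree.ElementTree as ET

image = ET.Element("image", name="transect_042.jpg")

# Textual annotation: a user-defined attribute describing the substrate.
ET.SubElement(image, "tag", name="substrate", value="rocky reef")

# Graphical annotation: a point marking an organism, with a nested
# species tag illustrating the hierarchical structure.
point = ET.SubElement(image, "point", x="512", y="384")
ET.SubElement(point, "tag", name="species", value="Strongylocentrotus purpuratus")

print(ET.tostring(image, encoding="unicode"))
```

Because the annotations are plain trees defined by the user, the same structure can hold a cell lineage in a 4D fluorescence stack or a kelp-bed polygon in an aerial image, only the tag names change.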
Our experiments on the identification of 11 benthic marine organism classes in a dataset of 2,000 images with 200,000 annotations demonstrate good performance, with an overall accuracy of 86% and 4% error. We are now constructing a hierarchical model of more than 300 species using 6,000 images with 1 million annotations.
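The point-classification mode mentioned above turns per-point predictions into percent-cover estimates by simple counting; a minimal sketch follows, assuming the classifier has already labeled each annotated point (the label names and the `percent_cover` helper are hypothetical, not part of the BisQue API).

```python
from collections import Counter

def percent_cover(point_labels):
    """Estimate percent cover per class from point classifications.

    Each annotated point contributes one vote; percent cover for a class
    is its share of all classified points on the image.
    """
    counts = Counter(point_labels)
    total = len(point_labels)
    return {label: 100.0 * n / total for label, n in counts.items()}

# Hypothetical classifier output for 8 annotated points on one image.
labels = ["kelp", "kelp", "urchin", "kelp", "rock", "rock", "kelp", "urchin"]
print(percent_cover(labels))  # kelp 50%, urchin 25%, rock 25%
```

The same counting logic, applied per image and averaged over a survey, yields the percent-cover statistics commonly reported in benthic transect studies.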

