A Learnable Image Compression Scheme for Synthetic Aperture Sonar Imagery
Synthetic aperture sonar (SAS) is an imaging modality that produces high-resolution images of the seafloor at a resolution independent of range. These sonars are often mounted on an unmanned underwater vehicle (UUV) to autonomously collect imagery of a prescribed survey area. While a survey is underway, UUV communications back to the operator are often limited by the low-bandwidth acoustic communications (ACOMMS) channel. Consequently, high-quality SAS imagery is rarely sent over this link, in part because no efficient compression scheme exists for such data. An efficient SAS image compression scheme provides at least two operational benefits: (1) image chips beamformed and tagged by onboard processing algorithms can be quickly communicated to operators while a survey is ongoing, and (2) cooperative UUVs can exchange salient image chips among themselves to reconcile position ambiguity and obtain a shared reference frame. In this work we propose a learned image compression scheme for SAS imagery using deep neural networks (DNNs). DNNs have already been applied to the image compression problem, but almost exclusively for optical imagery. We highlight important differences between SAS imagery and optical imagery that prevent the direct application of off-the-shelf (OTS) methods such as JPEG and WebP to SAS imagery. We propose an image compression scheme that specifically addresses the domain-specific properties of SAS imagery and obtains useful compression performance on a real-world SAS dataset. We show that we can reduce the bitrate by up to thirty-five percent while maintaining the same perceptual image quality as OTS codecs.
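The abstract does not specify the architecture, but both the OTS codecs it compares against and learned codecs share the same transform-coding skeleton: an analysis transform decorrelates the image, coefficients are quantized, and an entropy model determines the bitrate. The sketch below illustrates that skeleton with a fixed orthonormal DCT and uniform scalar quantization; in a learned codec the transform and entropy model would be DNNs trained on SAS imagery. All function names, the quantization step, and the toy 8x8 block are illustrative assumptions, not the paper's method.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    # Orthonormal DCT-II basis; a fixed stand-in for a learned analysis transform.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def compress_block(block: np.ndarray, step: float = 8.0) -> np.ndarray:
    # Analysis transform followed by uniform scalar quantization.
    T = dct_matrix(block.shape[0])
    coeffs = T @ block @ T.T
    return np.round(coeffs / step).astype(np.int32)

def decompress_block(q: np.ndarray, step: float = 8.0) -> np.ndarray:
    # Dequantize, then apply the synthesis (inverse) transform.
    T = dct_matrix(q.shape[0])
    return T.T @ (q * step) @ T

def bits_estimate(q: np.ndarray) -> float:
    # Shannon entropy of the quantized symbols: an idealized bitrate
    # that a learned entropy model (or arithmetic coder) approaches.
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) * q.size)

rng = np.random.default_rng(0)
block = rng.normal(128.0, 20.0, size=(8, 8))  # toy stand-in for an image block
q = compress_block(block)
rec = decompress_block(q)
```

Because the transform is orthonormal, the pixel-domain mean squared error equals the coefficient-domain quantization error, so raising `step` trades reconstruction quality for fewer bits, the same rate-distortion trade-off a learned codec optimizes end to end.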
| Work Title | A Learnable Image Compression Scheme for Synthetic Aperture Sonar Imagery |
| License | In Copyright (Rights Reserved) |
| Publication Date | January 1, 2021 |
| Publisher Identifier (DOI) | |
| Deposited | September 27, 2022 |