Affiliation:
1. Key Laboratory of Underwater Acoustic Environment, Chinese Academy of Sciences, Beijing 100190, China
Abstract
The you-only-look-once (YOLO) model identifies objects in complex images by framing detection as a regression problem over spatially separated bounding boxes and their associated class probabilities. Object detection from complex images is somewhat similar to underwater source detection from acoustic data, e.g., time-frequency distributions. Herein, YOLO is modified for joint source detection and azimuth estimation in a multi-interfering underwater acoustic environment. The input to the modified YOLO (M-YOLO) is a frequency-beam domain (FBD) sample containing the target and multi-interfering spectra at different azimuths, generated from the received data of a towed horizontal line array. M-YOLO processes the whole FBD sample with a single-regression neural network and directly outputs the target-existence probability and the spectrum azimuth. Model performance is assessed on both simulated and at-sea data. Simulation results show that M-YOLO is robust to different signal-to-noise ratios and mismatched ocean environments. On data collected in an actual multi-interfering environment, M-YOLO achieved a near-100% target detection rate and a root-mean-square azimuth estimation error of 0.54°.
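To illustrate the single-regression idea described in the abstract, the following is a minimal sketch (not the authors' code) of a YOLO-style network that maps an FBD sample to a per-cell target-existence probability and an azimuth offset. The input dimensions, the division of the beam axis into grid cells, and all layer widths are assumptions made for illustration only.

```python
# Hypothetical sketch of an M-YOLO-like single-regression network (PyTorch).
# Input: one FBD sample of shape (batch, 1, n_freq, n_beam).
# Output: per grid cell along the azimuth axis, a target-existence
# probability and a normalized azimuth offset, in the spirit of YOLO's
# per-cell regression output. Sizes below are assumed, not from the paper.
import torch
import torch.nn as nn


class MYOLOSketch(nn.Module):
    def __init__(self, n_cells=19):
        super().__init__()
        self.n_cells = n_cells  # assumed number of azimuth grid cells
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # One linear head regresses (probability, offset) for every cell.
        self.head = nn.Linear(32 * 4 * 4, n_cells * 2)

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        out = self.head(feats).view(-1, self.n_cells, 2)
        p = torch.sigmoid(out[..., 0])      # target-existence probability
        delta = torch.sigmoid(out[..., 1])  # azimuth offset within the cell
        return p, delta


# Usage: one forward pass over a random 256 x 181 FBD sample (assumed size).
model = MYOLOSketch()
p, delta = model(torch.randn(1, 1, 256, 181))
```

The single forward pass over the whole sample, rather than a sliding scan over beams, is what makes the detection-plus-azimuth output "single regression" in the YOLO sense.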
Publisher
Acoustical Society of America (ASA)
Subject
Acoustics and Ultrasonics, Arts and Humanities (miscellaneous)
Cited by
1 article.