Affiliation:
1. Hitachi America, Ltd., Santa Clara, California 95054
Abstract
This work proposes a vision-based perception algorithm that combines image-processing-based detection and tracking of aerial objects with convolutional neural networks (CNNs) for classification of general aviation aircraft, multirotor small uncrewed aerial systems (SUAS), fixed-wing SUAS, and birds, enabling improved onboard avoidance decision making. We further integrate adversarial learning during CNN training and evaluate performance with class-balanced and class-imbalanced datasets, because handling imbalance maximizes the utility of resource-expensive flight experiments for collecting aviation datasets. We compare our proposed CNN with adversarial learning (CNN+ADVL) against a state-of-the-art CNN as well as a you-only-look-once (YOLO v4) model retrained on the same data (YOLO v4 aircraft). Trained on the imbalanced dataset, the CNN+ADVL achieves the highest 10-fold cross-validation classification accuracy, 76.2%, for aircraft and birds across all ranges, while achieving 87.0% aircraft classification accuracy, meeting proposed self-assurance separation distances derived from Federal Aviation Administration (FAA) guidelines. In comparison, the baseline CNN achieved 74.4% 10-fold cross-validation classification accuracy for aircraft and birds, as well as 83.4% accuracy for aircraft, meeting the same proposed self-assurance separation distances. These results demonstrate that integrating adversarial learning improves classification performance for the perception of aerial objects on a class-imbalanced dataset.
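The adversarial-learning idea summarized above, training the classifier on inputs perturbed in the direction of the loss gradient, can be sketched in a minimal form. The sketch below uses a toy logistic-regression classifier on synthetic two-class data, not the paper's CNN or its aviation imagery; the FGSM-style perturbation budget `eps`, the learning rate, and the synthetic clusters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic Gaussian clusters standing in for two aerial-object classes
# (hypothetical data; the paper uses flight-experiment imagery).
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
               rng.normal(1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.1  # learning rate and FGSM perturbation budget (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Gradient of the cross-entropy loss w.r.t. the *inputs* gives the
    # fast-gradient-sign perturbation direction for each sample.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)

    # Standard gradient step, but computed on the perturbed batch, so the
    # model learns to classify worst-case inputs within the eps budget.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

# Accuracy on the clean (unperturbed) data after adversarial training.
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

The same loop structure carries over to a CNN: replace the logistic model with the network, compute the input gradient by backpropagation, and take the optimizer step on the perturbed batch.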
Funder
National Aeronautics and Space Administration
Publisher
American Institute of Aeronautics and Astronautics (AIAA)
Subject
Electrical and Electronic Engineering, Computer Science Applications, Aerospace Engineering
Cited by
2 articles.