Affiliation:
1. Department of Operational Sciences, Air Force Institute of Technology, USA
2. Sensors Directorate, Air Force Research Laboratory, USA
Abstract
Object detection algorithms have reached nearly superhuman levels within the last decade; however, these algorithms require large, diverse training data sets to ensure their operational performance matches the performance demonstrated during testing. The collection and human labeling of such data sets can be expensive and, in some cases, such as Intelligence, Surveillance, and Reconnaissance of rare events, may not even be feasible. This research proposes a novel method for creating additional variability within the training data set by utilizing multiple generative adversarial network models to produce both high- and low-quality synthetic images of vehicles and inserting those images, alongside images of real vehicles, into real backgrounds. This research demonstrates a 17.90% improvement in mean absolute percentage error, on average, compared to the YOLOv4-Tiny model trained on the original non-augmented training set, as well as a 14.44% average improvement in the average intersection over union rate. In addition, our research adds to a small but growing body of literature indicating that the inclusion of low-quality images in training data sets is beneficial to the performance of computer vision models.
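The intersection over union (IoU) metric reported above measures the overlap between a predicted bounding box and its ground-truth box. A minimal sketch of the standard computation, assuming boxes are given as `(x1, y1, x2, y2)` corner coordinates (a common convention, not one specified in this abstract):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Intersection area is zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0
```

For example, two 2x2 boxes offset by one unit in each direction overlap in a 1x1 region, giving an IoU of 1/7; identical boxes give 1.0.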
Funder
Air Force Research Laboratory
Subject
Engineering (miscellaneous), Modeling and Simulation
Cited by
2 articles.