Author:
Nguyen Phuoc Thuan, Westerlund Tomi, Peña Queralta Jorge
Abstract
The remarkable growth of unmanned aerial vehicles (UAVs) has also sparked concerns about safety measures during their missions. To advance towards safer autonomous aerial robots, this work presents a vision-based solution for ensuring safe autonomous UAV landings with minimal infrastructure. During docking maneuvers, UAVs pose a hazard to people in the vicinity. In this paper, we propose the use of a single omnidirectional panoramic camera pointing upwards from a landing pad to detect and estimate the position of people around the landing area. The images are processed in real time on an embedded computer, which communicates with the onboard computer of approaching UAVs to transition between landing, hovering, or emergency-landing states. During landing, the ground camera also aids in finding an optimal position, which may be required in case of low battery or when hovering is no longer possible. We use a YOLOv7-based object detection model and an XGBoost model for localizing nearby people, and the open-source ROS and PX4 frameworks for communication, interfacing, and control of the UAV. We present both simulation and real-world indoor experimental results to show the efficiency of our methods.
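The abstract describes transitions between landing, hovering, and emergency-landing states driven by the ground camera's person detections. A minimal sketch of such decision logic is shown below; the state names, function signature, and the low-battery rule are assumptions for illustration, not details taken from the paper:

```python
from enum import Enum


class UavState(Enum):
    """Possible states for a UAV approaching the landing pad."""
    LANDING = "landing"
    HOVERING = "hovering"
    EMERGENCY_LANDING = "emergency_landing"


def next_state(people_in_zone: int, battery_low: bool) -> UavState:
    """Decide the UAV state from ground-camera detections.

    If the landing zone is clear, proceed with landing. If people are
    detected, hover until the zone clears, unless the battery is low,
    in which case an emergency landing is triggered instead.
    """
    if people_in_zone == 0:
        return UavState.LANDING
    if battery_low:
        return UavState.EMERGENCY_LANDING
    return UavState.HOVERING
```

In practice the `people_in_zone` count would come from the YOLOv7 detector and XGBoost localizer running on the embedded computer, with the resulting state relayed to the UAV over ROS/PX4.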
Subject
Artificial Intelligence, Computer Science Applications