Affiliation:
1. School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
2. Department of Information Systems, College of Computer and Information Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3. School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
Abstract
Unmanned Aerial Vehicles (UAVs), also known as drones, have advanced greatly in recent years and are used in many ways, including transportation, photography, climate monitoring, and disaster relief, owing to their efficiency and safety of operation. While drone design strives for perfection, it is not yet flawless: detecting and preventing collisions remains a major challenge. In this context, this paper describes a methodology for developing a drone system that operates autonomously, without the need for human intervention. The study applies reinforcement learning (RL) algorithms to train a drone to avoid obstacles autonomously, in both discrete and continuous action spaces, based solely on image data. The novelty of this study lies in its comprehensive assessment of the advantages, limitations, and future research directions of obstacle detection and avoidance for drones using different reinforcement learning techniques. Three reinforcement learning strategies are compared, namely Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Soft Actor-Critic (SAC), for avoiding both stationary and moving obstacles. The experiments were carried out in a virtual environment made available by AirSim. Using Unreal Engine 4, various training and testing scenarios were created to understand and analyze the behavior of the RL algorithms for drones. According to the training results, SAC outperformed the other two algorithms. PPO was the least successful, indicating that on-policy algorithms are ineffective in extensive 3D environments with dynamic actors. The two off-policy algorithms, DQN and SAC, produced encouraging outcomes. However, due to its constrained discrete action space, DQN may be less advantageous than SAC in narrow pathways and turns.
Overall, for autonomous drones, off-policy algorithms such as DQN and SAC perform more effectively than on-policy algorithms such as PPO. These findings could have practical implications for the development of safer and more efficient drones in the future.
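The distinction the abstract draws between DQN's constrained discrete action space and SAC's continuous one can be illustrated with a minimal sketch. The action sets, velocity magnitudes, and function names below are hypothetical illustrations, not taken from the paper: DQN selects one index from a small fixed menu of velocity commands, while SAC's tanh-squashed policy emits a value in [-1, 1] per axis that can be scaled to any direction and speed, which is why SAC can thread narrow pathways and turns that a coarse discrete grid cannot.

```python
# Hypothetical illustration (not the paper's implementation) of the two
# action interfaces for a quadrotor issuing velocity commands (vx, vy, vz).

# DQN: the policy outputs an index into a fixed, finite action set.
# Velocities follow the NED convention (negative z is up), as in AirSim.
DISCRETE_ACTIONS = {
    0: (1.0, 0.0, 0.0),   # forward
    1: (-1.0, 0.0, 0.0),  # backward
    2: (0.0, 1.0, 0.0),   # right
    3: (0.0, -1.0, 0.0),  # left
    4: (0.0, 0.0, -1.0),  # up
    5: (0.0, 0.0, 1.0),   # down
}

def dqn_action_to_velocity(index: int) -> tuple:
    """Map a DQN action index to one of the fixed velocity commands."""
    return DISCRETE_ACTIONS[index]

def sac_action_to_velocity(action, v_max: float = 2.0) -> tuple:
    """Scale a SAC policy output in [-1, 1]^3 to a velocity command.

    The tanh squashing in SAC bounds each component to [-1, 1];
    multiplying by v_max yields any direction and speed up to v_max m/s.
    Components are clipped defensively before scaling.
    """
    return tuple(max(-1.0, min(1.0, a)) * v_max for a in action)
```

The practical consequence is resolution: the discrete agent can only move along six axis-aligned rays, whereas the continuous agent can command, say, a gentle diagonal drift, which matters in cluttered 3D scenes.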
Funder
Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
Subject
Artificial Intelligence, Computer Science Applications, Aerospace Engineering, Information Systems, Control and Systems Engineering
Cited by
20 articles.