Deep reinforcement learning for drone navigation using sensor data

Authors

Victoria J. Hodge, Richard Hawkins, Rob Alexander

Abstract

Mobile robots such as unmanned aerial vehicles (drones) can be used for surveillance, monitoring and data collection in buildings, infrastructure and environments. Accurate, multifaceted monitoring is important for identifying problems early and preventing them from escalating. This motivates the need for flexible, autonomous and powerful decision-making mobile robots. These systems need to be able to learn through fusing data from multiple sources. Until very recently, they have been task-specific. In this paper, we describe a generic navigation algorithm that uses data from sensors on board the drone to guide the drone to the site of the problem. In hazardous and safety-critical situations, locating problems accurately and rapidly is vital. We use the proximal policy optimisation (PPO) deep reinforcement learning algorithm coupled with incremental curriculum learning and long short-term memory (LSTM) neural networks to implement our generic and adaptable navigation algorithm. We evaluate different configurations against a heuristic technique to demonstrate its accuracy and efficiency. Finally, we consider how the safety of the drone could be assured by assessing how safely it would perform using our navigation algorithm in real-world scenarios.
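The abstract names the main ingredients (a PPO deep reinforcement learning agent, an LSTM policy and on-board sensor observations) but gives no implementation detail. The sketch below illustrates, under stated assumptions, how such a sensor-driven navigation agent could be wired up with off-the-shelf tools: the toy SensorNavEnv grid world, its signal-strength sensor model and reward shaping are invented for illustration, and RecurrentPPO from sb3-contrib is used as a stand-in for the PPO + LSTM pairing; the incremental curriculum stage is omitted. This is not the authors' implementation.

# Minimal illustrative sketch (not the authors' code): a toy sensor-driven
# navigation environment trained with PPO plus an LSTM policy.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SensorNavEnv(gym.Env):
    """Toy 2-D grid: the agent reads a signal-strength sensor and must reach the hidden source."""

    def __init__(self, size=16):
        super().__init__()
        self.size = size
        # Eight compass moves: N, NE, E, SE, S, SW, W, NW.
        self.action_space = spaces.Discrete(8)
        # Observation: one normalised sensor reading plus eight obstacle-clearance flags.
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(9,), dtype=np.float32)
        self._moves = np.array([(0, 1), (1, 1), (1, 0), (1, -1),
                                (0, -1), (-1, -1), (-1, 0), (-1, 1)], dtype=float)

    def _obs(self):
        # Signal strength decays with distance to the source; the clearance flags are
        # dummies here (an obstacle-free world) but stand in for range-sensor inputs.
        signal = 1.0 / (1.0 + np.linalg.norm(self.pos - self.goal))
        return np.concatenate(([signal], np.ones(8))).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = self.np_random.integers(0, self.size, size=2).astype(float)
        self.goal = self.np_random.integers(0, self.size, size=2).astype(float)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        self.pos = np.clip(self.pos + self._moves[action], 0, self.size - 1)
        self.steps += 1
        reached = bool(np.array_equal(self.pos, self.goal))
        reward = 10.0 if reached else -0.01   # sparse goal reward, small step cost
        truncated = self.steps >= 200
        return self._obs(), reward, reached, truncated, {}


if __name__ == "__main__":
    # RecurrentPPO couples PPO with an LSTM policy, roughly the pairing the
    # abstract describes; hyperparameters are left at library defaults.
    from sb3_contrib import RecurrentPPO

    model = RecurrentPPO("MlpLstmPolicy", SensorNavEnv(), verbose=1)
    model.learn(total_timesteps=50_000)

A curriculum-learning variant of this sketch would grow the grid size (or add obstacles) in stages as the agent's success rate improves, rather than training on the full-difficulty environment from the start.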

Funder

Innovate UK

Engineering and Physical Sciences Research Council

Publisher

Springer Science and Business Media LLC

Subject

Artificial Intelligence, Software

Cited by 76 articles.

1. Autonomous Drones in Urban Navigation: Autoencoder Learning Fusion for Aerodynamics. Journal of Construction Engineering and Management, 2024-07.

2. Autonomous Navigation of Drones Using Explainable Deep Reinforcement Learning in Foggy Environment. 2024 IEEE Students Conference on Engineering and Systems (SCES), 2024-06-21.

3. Enhancing Drone Security Through Multi-Sensor Anomaly Detection and Machine Learning. SN Computer Science, 2024-06-13.

4. Autonomous System and AI. Advances in Computational Intelligence and Robotics, 2024-05-23.

5. Judgmentally adjusted Q-values based on Q-ensemble for offline reinforcement learning. Neural Computing and Applications, 2024-05-14.
