Affiliation:
1. College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
2. College of Electronic Engineering, Nanjing Xiaozhuang University, Nanjing 211171, China
Abstract
Forest fires occur frequently around the world, causing serious economic losses and human casualties. Deep learning techniques based on convolutional neural networks (CNNs) are widely used for intelligent forest fire detection. However, CNN-based forest fire detection models lack global modeling capability and cannot fully extract global and contextual information about forest fire targets. They also pay insufficient attention to forest fire regions and are easily disturbed by invalid features that resemble fire, resulting in low detection accuracy. In addition, CNN-based forest fire detection models require large labeled datasets, and manually annotating the huge volume of forest fire data is very time-consuming. To address these problems, this paper proposes a forest fire detection model, TCA-YOLO, built on YOLOv5. First, we combine the Transformer encoder, with its powerful global modeling capability and self-attention mechanism, with the CNN backbone as the feature extraction network to enhance the extraction of global information on forest fire targets. Second, to strengthen the model's focus on forest fire targets, we integrate the Coordinate Attention (CA) mechanism; CA captures not only inter-channel information but also direction-aware positional information, which helps the model locate and identify forest fire targets more precisely. Third, we integrate adaptively spatial feature fusion (ASFF), which allows the model to automatically filter out useless information from other feature layers and fuse features efficiently, suppressing the interference of complex forest backgrounds. Finally, semi-supervised learning is used to save a large amount of manual labeling effort. The experimental results show that the average precision of TCA-YOLO improves by 5.3% compared with the unimproved YOLOv5, and TCA-YOLO also performs better at detecting forest fire targets in different scenarios. Its ability to extract global information on forest fire targets is much improved, it locates forest fire targets more accurately, it misses fewer targets, and it is less likely to be disturbed by fire-like objects. TCA-YOLO is also more focused on forest fire targets and better at detecting small fire targets. Its detection speed reaches 53.7 FPS, which meets the requirements of real-time forest fire detection.
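To make the Coordinate Attention idea mentioned in the abstract concrete, the following is a minimal PyTorch-style sketch of a CA block, assuming the commonly published CA design (direction-aware pooling along height and width, a shared 1x1 transform, and per-axis attention weights). The class name, reduction ratio, and activation are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a Coordinate Attention (CA) block; parameters are assumptions.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # average over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # average over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Direction-aware pooling keeps positional information along each axis.
        x_h = self.pool_h(x)                          # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)              # (B, C, H+W, 1)
        y = self.act(self.bn1(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)                 # back to (B, mid, 1, W)
        # Attention weights factorized along height and width, applied to the input.
        a_h = torch.sigmoid(self.conv_h(y_h))         # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w))         # (B, C, 1, W)
        return x * a_h * a_w
```

Because the two pooled branches preserve positions along height and width separately, the resulting weights encode direction-related location cues in addition to inter-channel information, which is the property the abstract relies on for localizing fire targets.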
Funder
Key Research and Development plan of Jiangsu Province
Jiangsu Modern Agricultural Machinery Equipment and Technology Demonstration and Promotion Project
Nanjing modern agricultural machinery equipment and technological innovation demonstration projects
National Natural Science Foundation of China
Jiangsu Postdoctoral Research Foundation