Affiliation:
1. College of Information Engineering, Beijing Institute of Petrochemical Technology, Beijing 102617, China
Abstract
A lightweight forest fire detection model based on YOLOv8 is proposed to address the limitations of traditional sensor-based forest fire detection, whose performance is easily constrained by hardware computing power and whose adaptability across different environments is limited. To balance detection accuracy and speed, the lightweight GhostNetV2 network replaces the YOLOv8 backbone for feature extraction. Its Ghost module replaces standard convolution operations, performing feature extraction independently across channel groups and significantly reducing model complexity while maintaining strong performance. In addition, an improved channel-priority attention mechanism, CPDCA, is proposed; it extracts spatial features through dilated convolution, reducing computational overhead and allowing the model to focus on fire targets for more accurate detection. To handle the small targets common in fire detection, the Inner IoU loss function is introduced: by adjusting the size of auxiliary bounding boxes, it improves convergence on small-target detection, further reduces missed detections, and raises overall detection accuracy. Experimental results show that, compared with traditional methods, the proposed algorithm significantly improves average precision and FPS while keeping a smaller model size. Relative to YOLOv3-tiny, average precision increases by 5.9% and the frame rate reaches 285.3 FPS at a model size of only 4.9 M; relative to ShuffleNet, average precision increases by 2.9% and inference speed triples.
The algorithm also effectively suppresses false positives, such as those caused by clouds and reflected light, further enhancing small-target detection and reducing missed detections.
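The complexity reduction the abstract attributes to the Ghost module comes from generating most output channels with a cheap depthwise transform instead of a full convolution. A back-of-envelope parameter count illustrates the idea; the layer sizes and the ratio s = 2 below are illustrative choices, not values taken from the paper:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Parameters of a Ghost module (bias omitted): a primary k x k
    convolution produces c_out / s intrinsic channels, then a cheap
    d x d depthwise convolution generates the remaining
    (s - 1) * c_out / s 'ghost' channels, and the two are concatenated."""
    m = c_out // s                  # intrinsic feature maps
    primary = c_in * m * k * k      # ordinary convolution
    cheap = (s - 1) * m * d * d     # depthwise ghost transform
    return primary + cheap

# Example: a 3x3 layer mapping 128 -> 256 channels.
standard = conv_params(128, 256, 3)   # 294_912 parameters
ghost = ghost_params(128, 256, 3)     # 147_456 + 1_152 = 148_608
```

With s = 2, roughly half the channels are produced by the near-free depthwise step, so the parameter (and FLOP) count drops by close to a factor of s.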
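The Inner IoU mechanism described above can be sketched as computing IoU over auxiliary boxes rescaled about each box's center; a ratio below 1 shrinks the boxes, which sharpens the gradient signal for small, nearly-overlapping targets. This is a minimal sketch of the general Inner-IoU idea, with an illustrative default `ratio=0.75` rather than the value used in the paper:

```python
def inner_iou(box1, box2, ratio=0.75):
    """IoU between auxiliary 'inner' boxes obtained by rescaling each
    box about its center by `ratio` (shrink if ratio < 1, enlarge if > 1).
    Boxes are (x1, y1, x2, y2) corner tuples."""
    def scaled(b):
        cx, cy = (b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0
        w, h = (b[2] - b[0]) * ratio, (b[3] - b[1]) * ratio
        return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

    a, b = scaled(box1), scaled(box2)
    # Intersection of the two auxiliary boxes.
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```

With `ratio=1.0` this reduces to plain IoU; a loss such as 1 - inner_iou(pred, target) would then replace the standard IoU term during training.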
Funder
Scientific Research Program of Beijing Municipal Commission of Education, Natural Science Foundation of Beijing