Affiliation:
1. AmazingX Academy, Foshan, China
2. School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, China
3. Sanya Science and Education Innovation Park of Wuhan University of Technology, Sanya, China
4. Xi'an Institute of Optics and Precision Mechanics of CAS, Xi'an, China
Abstract
Road damage detection (RDD) is critical to public safety and the efficient allocation of resources. Most road damage detection methods directly adopt general object detection models and therefore face significant challenges arising from the characteristics of the RDD task. First, damaged objects in road images vary widely in scale and are difficult to differentiate, making RDD more challenging than many other detection tasks. Second, existing methods neglect the relationship between the feature distribution and the model structure, which makes optimization difficult. To address these challenges, this study proposes an efficient dense attention fusion network with a channel correlation loss for road damage detection. First, the K‐Means++ algorithm is applied during data preprocessing to optimize the initial cluster centers and improve detection accuracy. Second, a dense attention fusion module is proposed that learns spatial‐spectral attention to enhance multi‐scale fused features and improve the model's ability to detect damaged areas at different scales. Third, a channel correlation loss is adopted in the class prediction process to maintain intra‐class compactness and inter‐class separability. Experimental results on the collected RDDA dataset and the RDD2022 dataset show that the proposed method achieves state‐of‐the‐art performance.
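The abstract's first contribution, using K‐Means++ to choose better initial cluster centers during preprocessing, commonly refers to clustering the (width, height) pairs of ground‐truth boxes to derive detection anchors. The paper's exact procedure is not given here; the sketch below is only a minimal illustration of the standard K‐Means++ seeding followed by Lloyd iterations, with hypothetical box data (all function names and parameters are assumptions, not the authors' code).

```python
import random

def kmeans_pp_init(boxes, k, rng):
    """K-Means++ seeding: pick initial centers spread far apart.
    Each box is a (width, height) pair."""
    centers = [rng.choice(boxes)]
    while len(centers) < k:
        # Squared distance from each box to its nearest chosen center.
        d2 = [min((b[0] - c[0]) ** 2 + (b[1] - c[1]) ** 2 for c in centers)
              for b in boxes]
        # Sample the next center with probability proportional to d2.
        r = rng.random() * sum(d2)
        acc = 0.0
        for b, d in zip(boxes, d2):
            acc += d
            if acc >= r:
                centers.append(b)
                break
        else:  # floating-point fallback
            centers.append(boxes[-1])
    return centers

def kmeans_anchors(boxes, k, iters=20, seed=0):
    """Lloyd's algorithm on (width, height) pairs, K-Means++ initialized.
    Returns k anchor sizes sorted by area."""
    rng = random.Random(seed)
    centers = kmeans_pp_init(boxes, k, rng)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            i = min(range(k),
                    key=lambda j: (b[0] - centers[j][0]) ** 2
                                + (b[1] - centers[j][1]) ** 2)
            clusters[i].append(b)
        # Recompute each center as its cluster mean (keep old center if empty).
        centers = [(sum(b[0] for b in cl) / len(cl),
                    sum(b[1] for b in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return sorted(centers, key=lambda c: c[0] * c[1])

# Hypothetical example: two well-separated box-size groups yield two anchors.
boxes = [(10., 10.), (12., 11.), (11., 9.),
         (100., 100.), (98., 102.), (101., 99.)]
anchors = kmeans_anchors(boxes, k=2, iters=10)
```

In detection pipelines such as the YOLO family, replacing random initial centers with K‐Means++ seeding makes the resulting anchors less sensitive to initialization, which is consistent with the accuracy improvement claimed in the abstract.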
Funder
National Natural Science Foundation of China
Publisher
Institution of Engineering and Technology (IET)
Subject
Law, Mechanical Engineering, General Environmental Science, Transportation
Cited by
1 articles.
1. Research on Pavement Crack Detection Based on Improved YOLOv5s; 2023 International Conference on Machine Vision, Image Processing and Imaging Technology (MVIPIT); 2023-09-22