Affiliations:
1. CCCC Second Harbor Engineering Co., Ltd., Wuhan 430040, China
2. CCCC Wuhan Harbor Engineering Design and Research Institute Co., Ltd., Wuhan 430040, China
3. Hubei Provincial Key Laboratory of New Materials, Maintenance and Reinforcement Technology for Marine Structures, Wuhan 430040, China
4. School of Transportation and Logistics Engineering, Wuhan University of Technology, Wuhan 430063, China
Abstract
The appearance quality of fair-faced concrete plays a crucial role in evaluating engineering quality, and the abundance of small-area bubbles generated during construction diminishes the surface quality of the concrete. However, existing detection methods suffer from slow detection speed and inadequate accuracy. This paper therefore proposes an improved method based on YOLOv5 to rapidly and accurately detect small bubble defects on the surface of fair-faced concrete. First, to address the difficulty YOLOv5 has in generating prior boxes for size-imbalanced samples, during image preprocessing we divide the annotated boxes into small-, medium-, and large-area intervals corresponding to the detection heads, and we propose an area-based k-means clustering approach tailored to the anchor boxes within each interval. We further adjust the number of prior boxes generated by k-means clustering according to the training loss so that the anchors adapt to bubbles of different sizes. Then, we introduce the Efficient Channel Attention (ECA) mechanism into the neck of the model to capture inter-channel interactions and enhance feature representation. We also incorporate feature concatenation in the neck to fuse low-level and high-level features, improving the accuracy and generalization ability of the network. Finally, we construct our own dataset of 980 images covering two classes, cement and bubbles. Comparative experiments on this dataset with YOLOv5s, YOLOv6s, YOLOXs, and our method show that the proposed method achieves the highest detection accuracy in terms of mAP0.5, mAP0.75, and mAP0.5:0.95. Compared with YOLOv5s, our method improves mAP0.5 by 7.1%, mAP0.75 by 3.7%, and mAP0.5:0.95 by 4.5%.
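To make the anchor-generation step easier to follow, the sketch below illustrates one way the area-interval k-means clustering described above could be implemented. It is not the authors' code: the area thresholds (COCO-style 32² and 96² cut-offs), the per-interval anchor counts, and the use of scikit-learn's KMeans are assumptions made for illustration, since the abstract does not specify them.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchors_by_area_interval(boxes_wh, thresholds=(32**2, 96**2), k_per_interval=(3, 3, 3)):
    """Cluster ground-truth box sizes into anchors, separately per area interval.

    boxes_wh       : (N, 2) array of box widths and heights in pixels.
    thresholds     : area cut-offs separating small/medium/large boxes
                     (COCO-style 32^2 and 96^2 are assumed defaults here).
    k_per_interval : anchors per interval; in the paper these counts are tuned
                     against the training loss, here they are simply fixed.
    """
    areas = boxes_wh[:, 0] * boxes_wh[:, 1]
    masks = [
        areas < thresholds[0],                               # small-area boxes
        (areas >= thresholds[0]) & (areas < thresholds[1]),  # medium-area boxes
        areas >= thresholds[1],                               # large-area boxes
    ]
    anchors = []
    for mask, k in zip(masks, k_per_interval):
        subset = boxes_wh[mask]
        if len(subset) < k:  # not enough boxes in this interval to form k clusters
            continue
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(subset)
        # Sort each interval's anchors by area so they map cleanly onto one detection head.
        centers = km.cluster_centers_
        centers = centers[np.argsort(centers[:, 0] * centers[:, 1])]
        anchors.append(centers)
    return anchors  # one (k, 2) array of (w, h) anchors per interval / detection head
```

In this reading, each interval's anchor set is assigned to the corresponding small-, medium-, or large-object detection head, and the per-interval anchor counts would be revised by monitoring the training loss, as described in the abstract.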