Abstract
The issue of worker safety at construction sites has become increasingly prominent within the construction industry. Safety helmet usage has been shown to reduce accidents among construction workers. However, safety helmets are not always worn consistently, which may be attributed to a variety of factors. Therefore, an automated system based on computer vision needs to be established to monitor the proper usage of protective gear. While there have been studies on helmet detection systems, research specifically addressing the detection of small and occluded helmets remains limited. Challenges such as missed detection of small objects and detection of occluded helmets still need to be addressed. To address these issues, a Deformable Perspective Perception Network (DPPNet) is proposed in this paper. The proposed DPPNet consists of two modules: Grayscale Background Subtraction (GBS) and Background/Image Spatial Fusion (BISF). The GBS submodule incorporates background spatial information into the current frame, while the BISF module utilizes channel attention to blend feature maps from the current frame and the background. In doing so, DPPNet facilitates the detection of occluded and small helmets. Extensive training and testing experiments have been performed on the Safety Helmet Wearing Detection (SHWD) dataset. Experimental results demonstrate the effectiveness of the proposed DPPNet and show that the suggested modules significantly enhance the detection of small objects, reaching a mean average precision (mAP) of 97.4% on the SHWD dataset.
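The two modules described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's implementation: the function names, the naive RGB-to-grayscale averaging, and the sigmoid gating over globally pooled channels are all placeholders standing in for the actual GBS and BISF designs.

```python
import numpy as np

def grayscale_background_subtraction(frame, background):
    """Sketch of GBS: subtract a grayscale background estimate from the
    current frame, highlighting foreground regions (workers, helmets)."""
    gray_f = frame.mean(axis=2)        # naive RGB -> grayscale (H, W)
    gray_b = background.mean(axis=2)
    return np.abs(gray_f - gray_b)     # per-pixel difference map

def channel_attention_fusion(feat_frame, feat_bg):
    """Sketch of BISF: fuse frame and background feature maps using a
    simple channel attention (weights from global average pooling)."""
    concat = np.concatenate([feat_frame, feat_bg], axis=0)  # (2C, H, W)
    pooled = concat.mean(axis=(1, 2))                       # (2C,)
    weights = 1.0 / (1.0 + np.exp(-pooled))                 # sigmoid gate
    return concat * weights[:, None, None]                  # reweighted maps
```

In a real detector, the fused maps would feed the detection head so that background context helps disambiguate small or partially occluded helmets.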
Publisher
Engineering, Technology & Applied Science Research
Cited by
1 article.