YOLOv8s-NE: Enhancing Object Detection of Small Objects in Nursery Environments Based on Improved YOLOv8
Published: 2024-08-19
Issue: 16
Volume: 13
Page: 3293
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Supri Bin Amir 1,2, Keiichi Horio 1
Affiliation:
1. Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu 808-0196, Japan
2. Department of Information Systems, Hasanuddin University, Makassar 90245, South Sulawesi, Indonesia
Abstract
The primary objective of this study is to examine object detection in the specific environment of a nursery. The nursery presents a complex scene containing many objects that vary in size and background. To simulate real-world conditions, we collected data from an actual nursery. Our study centers on the detection of small objects, such as the stationery, toys, and small accessories commonly present in nursery settings. These objects are important for understanding the activities and interactions taking place within the room. Because of their small size and the possibility of occlusion by other objects or by children, detecting them precisely is inherently challenging. This study introduces YOLOv8s-NE to enhance the detection of small objects in the nursery. We improve the standard YOLOv8 by incorporating an extra detection head to detect small objects effectively. We replace the C2f module with C2f_DCN to further improve the model's ability to detect objects of varying sizes that may be deformed or occluded in the image. Furthermore, we introduce NAM attention to focus on important features and suppress less informative ones, thereby improving the accuracy of the proposed model. We split the dataset using five-fold cross-validation to evaluate the performance of YOLOv8s-NE, enabling a more comprehensive model evaluation. Our model achieves 34.1% APs, 45.1% mAP50:90, and 76.7% mAP50 detection accuracy at 37.55 FPS on the nursery dataset. In terms of the APs, mAP50:90, and mAP50 metrics, the proposed YOLOv8s-NE outperforms the standard YOLOv8s model by 4.6%, 4.7%, and 3.9%, respectively.
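The five-fold cross-validation protocol mentioned above could be set up along the following lines; this is a minimal standard-library sketch of the data-splitting step only, not the authors' code, and the function name `five_fold_splits` is our own illustration:

```python
import random

def five_fold_splits(items, k=5, seed=0):
    """Partition a list of samples into k disjoint validation folds and
    return k (train, val) pairs, so every sample is validated exactly once."""
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]     # k near-equal index folds
    splits = []
    for i in range(k):
        val_idx = set(folds[i])
        train = [items[j] for j in idx if j not in val_idx]
        val = [items[j] for j in folds[i]]
        splits.append((train, val))
    return splits
```

Each of the k runs trains on four folds and evaluates on the held-out fold; the reported metrics are then aggregated across folds.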
We apply the proposed YOLOv8s-NE model as a safety system by developing an algorithm to detect objects on top of cabinets that could pose a risk to children.
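A rule of this kind can be sketched as a geometric check on the detector's bounding boxes; the predicate below (with its name, box convention, and tolerance all our own assumptions, not the paper's algorithm) flags a detected object whose bottom edge rests near a cabinet's top edge while overlapping it horizontally:

```python
def on_top_of_cabinet(obj_box, cab_box, tol=10):
    """Boxes are (x1, y1, x2, y2) in pixels, with y increasing downward.
    Flag an object whose bottom edge lies within `tol` pixels of the
    cabinet's top edge and whose horizontal span overlaps the cabinet."""
    ox1, oy1, ox2, oy2 = obj_box
    cx1, cy1, cx2, cy2 = cab_box
    rests_on_top = abs(oy2 - cy1) <= tol
    overlaps_horizontally = ox1 < cx2 and ox2 > cx1
    return rests_on_top and overlaps_horizontally
```

In a full system, such a check would run per frame over every (small object, cabinet) pair produced by the detector, raising an alert when it holds.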