Visual-Based Children and Pet Rescue from Suffocation and Incidence of Hyperthermia Death in Enclosed Vehicles
Author:
Moussa, Mona M. 1; Shoitan, Rasha 1; Cho, Young-Im 2; Abdallah, Mohamed S. 3,4
Affiliation:
1. Computer and Systems Department, Electronics Research Institute (ERI), Cairo 11843, Egypt
2. Department of Computer Engineering, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam 13415, Republic of Korea
3. Informatics Department, Electronics Research Institute (ERI), Cairo 11843, Egypt
4. AI Laboratory, DeltaX Co., Ltd., 3F, 24 Namdaemun-ro 9-gil, Jung-gu, Seoul 04522, Republic of Korea
Abstract
Over the past several years, many children have died of suffocation after being left inside a closed vehicle on a sunny day. Vehicle manufacturers have proposed a variety of technologies to locate an unattended child in a vehicle, including pressure sensors, passive infrared motion sensors, temperature sensors, and microwave sensors. However, these methods have not yet reliably located forgotten children in vehicles. More recently, with the emergence of deep learning, visual-based methods have attracted the attention of manufacturers. Existing methods, however, focus only on forgotten children and neglect forgotten pets, and their systems merely detect the presence of a child in the car, with or without their parents. This research therefore introduces a visual-based framework to reduce hyperthermia deaths in enclosed vehicles. The system detects objects inside a vehicle; if a child or pet is present without an adult, a notification is sent to the parents. First, a dataset of vehicle interiors containing children, pets, and adults is constructed. The dataset is collected from different online sources, covering varying illumination, skin color, pet type, clothing, and car brands to ensure model robustness. Second, blurring, sharpening, brightness, contrast, noise, perspective-transform, and fog-effect augmentation algorithms are applied to these images to enlarge the training data. The augmented images are annotated with three classes: child, pet, and adult. This research concentrates on fine-tuning different state-of-the-art real-time detection models to detect objects inside the vehicle: NanoDet, YOLOv6_1, YOLOv6_3, and YOLOv7. The simulation results demonstrate that YOLOv6_1 achieves the best performance, with 96% recall, 95% precision, and a 95% F1 score.
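The abstract lists pixel-level augmentations (blurring, sharpening, brightness, contrast, noise, perspective transform, fog effect) used to enlarge the training set. As an illustration only, the sketch below implements three of them — brightness/contrast adjustment, Gaussian noise, and a fog effect as a blend toward white — in plain NumPy; the function names, parameter ranges, and the random policy in `augment` are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def adjust_brightness_contrast(img, alpha=1.0, beta=0.0):
    # alpha scales contrast, beta shifts brightness; clip back to valid 8-bit range
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)

def add_gaussian_noise(img, sigma=10.0):
    # additive zero-mean Gaussian noise with standard deviation sigma
    noise = rng.normal(0.0, sigma, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def add_fog(img, intensity=0.4):
    # simple fog effect: alpha-blend the image toward a uniform white layer
    fog = np.full(img.shape, 255.0, dtype=np.float32)
    out = (1.0 - intensity) * img.astype(np.float32) + intensity * fog
    return np.clip(out, 0, 255).astype(np.uint8)

def augment(img):
    # one randomly parameterized variant per source image (hypothetical policy)
    img = adjust_brightness_contrast(img,
                                     alpha=rng.uniform(0.8, 1.2),
                                     beta=rng.uniform(-20, 20))
    img = add_gaussian_noise(img, sigma=rng.uniform(0.0, 15.0))
    if rng.random() < 0.5:
        img = add_fog(img, intensity=rng.uniform(0.1, 0.4))
    return img
```

Blurring, sharpening, and perspective transforms are typically done with an image library (e.g. OpenCV's `GaussianBlur` and `warpPerspective`) rather than raw NumPy, so they are omitted here.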
Funder
Korea Agency for Technology and Standards; Gachon University
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry