Affiliation:
1. Department of Electronics Engineering, Dong-A University, Busan 49315, Republic of Korea
2. Department of ICT Integrated Ocean Smart and Cities Engineering, Dong-A University, Busan 49315, Republic of Korea
Abstract
Human activity recognition (HAR) plays a vital role in various fields, including healthcare, rehabilitation, elder care, and monitoring. Researchers use mobile sensor data (e.g., accelerometer and gyroscope signals) with various machine learning (ML) and deep learning (DL) models. The advent of DL has enabled automatic high-level feature extraction, which has been effectively leveraged to optimize the performance of HAR systems. In addition, deep-learning techniques have demonstrated success in sensor-based HAR across diverse domains. In this study, a novel methodology for HAR is introduced that utilizes convolutional neural networks (CNNs). The proposed approach combines features from multiple convolutional stages to generate a more comprehensive feature representation, and an attention mechanism is incorporated to extract more refined features, further enhancing the accuracy of the model. The novelty of this study lies in combining features from multiple stages as well as in proposing a generalized model structure with convolutional block attention module (CBAM) blocks. This yields a more informative and effective feature extraction technique, as every block operation is fed with richer information. This research uses spectrograms of the raw signals instead of hand-crafted features extracted through intricate signal processing techniques. The developed model was assessed on three datasets: KU-HAR, UCI-HAR, and WISDM. The experimental findings showed that the classification accuracies of the proposed technique on the KU-HAR, UCI-HAR, and WISDM datasets were 96.86%, 93.48%, and 93.89%, respectively. The other evaluation metrics also demonstrate that the proposed methodology is comprehensive and competitive compared with previous works.
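As a rough illustration of the approach described in the abstract, the sketch below shows a CNN in which each convolutional stage is refined by a CBAM block and the per-stage features are pooled and concatenated before classification. It is a minimal PyTorch sketch under stated assumptions: 2D spectrogram inputs, illustrative layer widths, a 6-channel input (tri-axial accelerometer plus gyroscope), and an 18-class output; none of these reflect the authors' exact configuration.

```python
# Minimal sketch (assumptions: PyTorch, spectrogram input of shape
# [batch, channels, freq, time]; sizes are illustrative, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Convolutional block attention module: channel then spatial attention."""
    def __init__(self, channels, reduction=8, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))

class MultiStageCNN(nn.Module):
    """CNN whose CBAM-refined stage outputs are pooled and concatenated."""
    def __init__(self, in_channels=6, num_classes=18, widths=(32, 64, 128)):
        super().__init__()
        self.stages, self.attn = nn.ModuleList(), nn.ModuleList()
        prev = in_channels
        for w in widths:
            self.stages.append(nn.Sequential(
                nn.Conv2d(prev, w, 3, padding=1), nn.BatchNorm2d(w),
                nn.ReLU(), nn.MaxPool2d(2)))
            self.attn.append(CBAM(w))
            prev = w
        self.classifier = nn.Linear(sum(widths), num_classes)

    def forward(self, x):
        pooled = []
        for stage, attn in zip(self.stages, self.attn):
            x = attn(stage(x))
            # Pool each stage so early and late features both reach the classifier
            pooled.append(F.adaptive_avg_pool2d(x, 1).flatten(1))
        return self.classifier(torch.cat(pooled, dim=1))

# Example: a batch of four 6-channel 64x64 spectrograms -> logits of shape [4, 18]
logits = MultiStageCNN()(torch.randn(4, 6, 64, 64))
```

The design choice mirrored here is that intermediate features are not discarded: every stage contributes directly to the final representation, which is the "feature combination from multiple stages" the abstract highlights.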
Funder
Dong-A University research fund
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
10 articles.