Abstract
Human activity recognition (HAR) identifies and classifies patterns in image data that represent human activity. It is an essential problem in fields such as health care, where it supports patient monitoring and improves care, and it is of growing commercial importance as people increasingly expect smart devices to tailor services and products to their behavior. HAR also underpins applications in artificial intelligence, human-computer interaction, and pervasive computing, where it helps build context-aware systems, and it is used in rehabilitation for functional diagnosis and for evaluating health outcomes, participation, quality of life, and lifestyle. The proposed model aims at the automatic recognition and understanding of human actions in images. This task is complex because of variations in human shape and motion, occlusion, cluttered backgrounds, illumination conditions, and viewpoint changes. Deep learning models are well suited to HAR because they can learn complex patterns from large amounts of data; however, training them from scratch is time-consuming and computationally expensive, which makes it challenging to develop effective HAR systems. This paper addresses the problem with three deep learning models based on transfer learning, a method in which a model trained on one task is fine-tuned for a different but related task, allowing the models to be trained quickly and efficiently. The proposed approach uses the convolutional neural network (CNN) layers of pre-trained models to extract features from image data and classify them into different human activities, initializing the CNN weights with those learned on the source task and fine-tuning them for HAR. This reduces training time and computational cost while improving performance. Ensemble learning, the process of training several models and combining their outputs into a single, more accurate prediction, is then applied to four models: VGG16, ResNet50, EfficientNetB6, and a CNN trained from scratch without pre-training. Using diverse models captures different patterns and features in the image data and improves the overall accuracy of the system. The predictions are combined by an averaging fusion method: the predicted scores for each activity are averaged across the four models, and the activity with the highest average score is taken as the final prediction. This reduces the effect of overfitting, since the models compensate for one another's errors. As a result of ensemble learning and score-level fusion, the proposed system achieves higher accuracy and offers a more robust and practical approach to human activity recognition than existing models.
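To illustrate the transfer-learning setup described above, the following is a minimal sketch assuming a Keras/TensorFlow implementation, ImageNet weights, 224x224 RGB inputs, and a hypothetical 15 activity classes; the exact architecture, input size, and training configuration used in the paper may differ.

```python
# Minimal sketch of the transfer-learning setup (assumptions: Keras/TensorFlow,
# ImageNet weights, 224x224 RGB inputs, hypothetical 15 activity classes;
# per-backbone input preprocessing is omitted for brevity).
import tensorflow as tf

NUM_CLASSES = 15            # hypothetical number of human-activity classes
INPUT_SHAPE = (224, 224, 3)

def build_transfer_model(backbone_fn):
    """Wrap a pre-trained CNN backbone with a new classification head."""
    backbone = backbone_fn(include_top=False, weights="imagenet",
                           input_shape=INPUT_SHAPE, pooling="avg")
    backbone.trainable = False  # freeze pre-trained weights before fine-tuning
    inputs = tf.keras.Input(shape=INPUT_SHAPE)
    x = backbone(inputs, training=False)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# One model per pre-trained backbone used in the ensemble.
vgg16_model    = build_transfer_model(tf.keras.applications.VGG16)
resnet50_model = build_transfer_model(tf.keras.applications.ResNet50)
effnetb6_model = build_transfer_model(tf.keras.applications.EfficientNetB6)
```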
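The score-level averaging fusion can likewise be summarized by a short sketch, assuming each model outputs one softmax score vector per image over the same ordered set of activity classes; the function and variable names here are illustrative, not the paper's implementation.

```python
# Illustrative sketch of score-level averaging fusion (assumption: each model
# returns one softmax score vector per image, aligned over the same class order).
import numpy as np

def average_fusion(score_matrices):
    """Average per-class scores across models and pick the top class per image.

    score_matrices: list of arrays of shape (num_images, num_classes),
    one array per model (e.g. VGG16, ResNet50, EfficientNetB6, scratch CNN).
    """
    stacked = np.stack(score_matrices, axis=0)  # (num_models, num_images, num_classes)
    mean_scores = stacked.mean(axis=0)          # average score per activity
    return mean_scores.argmax(axis=1)           # highest-average activity per image

# Example with dummy scores from four models for 2 images and 3 activities.
rng = np.random.default_rng(0)
dummy_scores = [rng.dirichlet(np.ones(3), size=2) for _ in range(4)]
print(average_fusion(dummy_scores))
```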