SelfHAR

Authors:

Chi Ian Tang¹, Ignacio Perez-Pozuelo², Dimitris Spathis¹, Soren Brage³, Nick Wareham³, Cecilia Mascolo¹

Affiliations:

1. Department of Computer Science and Technology, University of Cambridge, Cambridge, UK

2. Department of Medicine, University of Cambridge, Cambridge, UK; The Alan Turing Institute, London, UK

3. MRC Epidemiology Unit, School of Clinical Medicine, University of Cambridge, Cambridge, UK

Abstract

Machine learning and deep learning have shown great promise in mobile sensing applications, including human activity recognition (HAR). However, the performance of such models in real-world settings largely depends on the availability of large datasets that capture diverse behaviors. Recently, studies in computer vision and natural language processing have shown that leveraging massive amounts of unlabeled data enables performance on par with state-of-the-art supervised models. In this work, we present SelfHAR, a semi-supervised model that effectively learns to leverage unlabeled mobile sensing datasets to complement small labeled datasets. Our approach combines teacher-student self-training, which distills the knowledge of unlabeled and labeled datasets while allowing for data augmentation, with multi-task self-supervision, which learns robust signal-level representations by predicting distorted versions of the input. We evaluated SelfHAR on various HAR datasets and showed state-of-the-art performance over supervised and previous semi-supervised approaches, with up to a 12% increase in F1 score at the same number of model parameters at inference. Furthermore, SelfHAR is data-efficient, reaching similar performance with up to 10 times less labeled data than supervised approaches. Our work not only achieves state-of-the-art performance on a diverse set of HAR datasets, but also sheds light on how pre-training tasks may affect downstream performance.
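To make the pipeline in the abstract concrete, below is a minimal sketch of the two ingredients it names: multi-task self-supervision (predicting which signal-level distortions were applied to a sensor window) and teacher-student self-training (pseudo-labeling unlabeled data with a teacher trained on the small labeled set). This is an illustration, not the authors' released implementation; the window size, distortion set, confidence heuristic, network shapes, and the random stand-in data are all assumptions of the sketch.

```python
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS, N_CLASSES = 400, 3, 6   # illustrative sizes, not from the paper

# Signal-level distortions; the model learns to predict which were applied.
def add_noise(x): return x + np.random.normal(0.0, 0.1, x.shape)
def scale(x):     return x * np.random.uniform(0.7, 1.3)
def negate(x):    return -x
def time_flip(x): return x[::-1].copy()
TRANSFORMS = [add_noise, scale, negate, time_flip]

def distort_batch(windows):
    """Apply each distortion with probability 0.5; return the distorted
    windows and the binary pattern of applied distortions, which serves
    as the multi-task self-supervision target."""
    xs, ys = [], []
    for w in windows:
        applied = np.random.rand(len(TRANSFORMS)) < 0.5
        out = w
        for t, on in zip(TRANSFORMS, applied):
            if on:
                out = t(out)
        xs.append(out)
        ys.append(applied.astype(np.float32))
    return np.stack(xs), np.stack(ys)

def build_encoder():
    inp = tf.keras.Input((WINDOW, CHANNELS))
    h = tf.keras.layers.Conv1D(32, 24, activation="relu")(inp)
    h = tf.keras.layers.Conv1D(64, 16, activation="relu")(h)
    h = tf.keras.layers.GlobalMaxPooling1D()(h)
    return tf.keras.Model(inp, h)

def with_softmax_head(encoder):
    inp = tf.keras.Input((WINDOW, CHANNELS))
    out = tf.keras.layers.Dense(N_CLASSES, activation="softmax")(encoder(inp))
    model = tf.keras.Model(inp, out)
    model.compile("adam", "sparse_categorical_crossentropy")
    return model

# Random arrays stand in for the small labeled and large unlabeled datasets.
x_lab = np.random.randn(256, WINDOW, CHANNELS).astype(np.float32)
y_lab = np.random.randint(0, N_CLASSES, 256)
x_unlab = np.random.randn(2048, WINDOW, CHANNELS).astype(np.float32)

# 1) Teacher: supervised training on the small labeled set.
teacher = with_softmax_head(build_encoder())
teacher.fit(x_lab, y_lab, epochs=3, verbose=0)

# 2) Pseudo-label the unlabeled set; keep only the more confident windows
#    (keeping the top half by confidence is an assumed heuristic).
probs = teacher.predict(x_unlab, verbose=0)
conf = probs.max(axis=1)
keep = conf >= np.quantile(conf, 0.5)
x_pseudo, y_pseudo = x_unlab[keep], probs[keep].argmax(axis=1)

# 3) Student pre-training: distorted pseudo-labeled windows feed two heads
#    sharing one encoder -- activity pseudo-labels and the distortion pattern.
x_dist, y_task = distort_batch(x_pseudo)
enc = build_encoder()
inp = tf.keras.Input((WINDOW, CHANNELS))
feats = enc(inp)
act = tf.keras.layers.Dense(N_CLASSES, activation="softmax", name="act")(feats)
tsk = tf.keras.layers.Dense(len(TRANSFORMS), activation="sigmoid", name="tsk")(feats)
student_pre = tf.keras.Model(inp, [act, tsk])
student_pre.compile("adam", {"act": "sparse_categorical_crossentropy",
                             "tsk": "binary_crossentropy"})
student_pre.fit(x_dist, {"act": y_pseudo, "tsk": y_task}, epochs=3, verbose=0)

# 4) Fine-tune the pre-trained encoder on the labeled data.
student = with_softmax_head(enc)
student.fit(x_lab, y_lab, epochs=3, verbose=0)
```

Only the fine-tuned student is retained for deployment, which is consistent with the abstract's claim that the F1 gains come at the same number of model parameters at inference as a purely supervised model.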

Funders

Nokia Bell Labs

GlaxoSmithKline

University of Cambridge

Doris Zimmern Charitable Foundation

Cambridge Trust

Jesus College, University of Cambridge

Engineering and Physical Sciences Research Council

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture, Human-Computer Interaction


Cited by 67 articles.

1. Efficient Human Activity Recognition on Wearable Devices Using Knowledge Distillation Techniques; Electronics; 2024-09-11

2. Hand-Crafted Features With a Simple Deep Learning Architecture for Sensor-Based Human Activity Recognition; IEEE Sensors Journal; 2024-09-01

3. AS-APF: Encoding time series as images for human activity recognition with SK-based convolutional networks; Transactions of the Institute of Measurement and Control; 2024-08-24

4. Using Self-supervised Learning Can Improve Model Fairness; Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining; 2024-08-24

5. IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition; Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies; 2024-08-22
