COCOA

Authors:

Shohreh Deldari (1), Hao Xue (2), Aaqib Saeed (3), Daniel V. Smith (4), Flora D. Salim (5)

Affiliation:

1. School of Computing and Technologies, RMIT University, Melbourne, Victoria, Australia

2. School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia

3. Philips Research, Eindhoven, Netherlands

4. Data61, CSIRO, Hobart, Tasmania, Australia

5. School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia

Abstract

Self-Supervised Learning (SSL) is a new paradigm for learning discriminative representations without labeled data, and has reached comparable or even state-of-the-art results relative to supervised counterparts. Contrastive Learning (CL) is one of the best-known approaches in SSL and attempts to learn general, informative representations of data. CL methods have mostly been developed for applications in computer vision and natural language processing, where only a single sensor modality is used. The majority of pervasive computing applications, however, exploit data from a range of different sensor modalities. While existing CL methods are limited to learning from one or two data sources, we propose COCOA (Cross mOdality COntrastive leArning), a self-supervised model that employs a novel objective function to learn quality representations from multisensor data by computing the cross-correlation between different data modalities and minimizing the similarity between irrelevant instances. We evaluate the effectiveness of COCOA against eight recently introduced state-of-the-art self-supervised models and two supervised baselines across five public datasets. We show that COCOA achieves superior classification performance to all other approaches. COCOA is also far more label-efficient than the other baselines, including the fully supervised model, when using only one-tenth of the available labeled data.
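The abstract describes the objective at a high level: pull together the representations that different modality encoders produce for the same instance, and push apart representations of different instances. The following is a minimal sketch of such a cross-modal contrastive loss; the function name, the exact form of the positive and negative terms, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cocoa_style_loss(z, tau=0.1, lam=1.0):
    """Illustrative cross-modal contrastive objective (not the paper's exact loss).

    z : float array of shape (M, N, D) -- embeddings from M modality-specific
        encoders for the same batch of N instances.
    Returns a scalar combining a cross-modality alignment term and a penalty
    on similarity between different instances within each modality.
    """
    M, N, _ = z.shape
    # L2-normalise so dot products are cosine similarities
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)

    # Positive term: modality views of the same instance should agree.
    pos, pairs = 0.0, 0
    for i in range(M):
        for j in range(i + 1, M):
            cos_same = np.sum(z[i] * z[j], axis=-1)  # (N,) same-index pairs
            pos += np.mean(1.0 - cos_same)
            pairs += 1
    pos /= pairs

    # Negative term: within each modality, different instances
    # should not look alike (temperature-scaled, diagonal excluded).
    off_diag = ~np.eye(N, dtype=bool)
    neg = 0.0
    for i in range(M):
        cos_cross = z[i] @ z[i].T  # (N, N) all-pairs similarity
        neg += np.mean(np.exp(cos_cross[off_diag] / tau))
    neg /= M

    return pos + lam * neg
```

With well-aligned modalities (same-index embeddings agreeing across encoders) the positive term vanishes, so the loss is lower than for a batch whose cross-modal pairing has been broken.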

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture, Human-Computer Interaction

References: 95 articles.

1. Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. 2021. VATT: Transformers for multimodal self-supervised learning from raw video, audio and text. NeurIPS (2021).

2. Jean-Baptiste Alayrac, Adria Recasens, Rosalia Schneider, Relja Arandjelovic, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, and Andrew Zisserman. 2020. Self-Supervised MultiModal Versatile Networks. NeurIPS 2, 6 (2020).

3. Humam Alwassel, Dhruv Mahajan, Bruno Korbar, Lorenzo Torresani, Bernard Ghanem, and Du Tran. 2020. Self-Supervised Learning by Cross-Modal Audio-Video Clustering. In Advances in Neural Information Processing Systems (NeurIPS).

4. Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, and Jorge Luis Reyes-Ortiz. 2013. A public domain dataset for human activity recognition using smartphones. In ESANN, Vol. 3.

5. Look, Listen and Learn

Cited by 32 articles.

1. Self-Supervised Representation Learning and Temporal-Spectral Feature Fusion for Bed Occupancy Detection;Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies;2024-08-22

2. Virtual Fusion With Contrastive Learning for Single-Sensor-Based Activity Recognition;IEEE Sensors Journal;2024-08-01

3. Contrastive Sensor Excitation for Generalizable Cross-Person Activity Recognition;2024 International Joint Conference on Neural Networks (IJCNN);2024-06-30

4. SoK: Behind the Accuracy of Complex Human Activity Recognition Using Deep Learning;2024 International Joint Conference on Neural Networks (IJCNN);2024-06-30

5. MultiHGR: Multi-Task Hand Gesture Recognition with Cross-Modal Wrist-Worn Devices;IEEE INFOCOM 2024 - IEEE Conference on Computer Communications;2024-05-20
