Privacy-Enhanced Prototype-based Federated Cross-modal Hashing for Cross-modal Retrieval

Author:

Zuo Ruifan (1), Zheng Chaoqun (1), Li Fengling (2), Zhu Lei (3), Zhang Zheng (4)

Affiliation:

1. Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, China

2. Australian Artificial Intelligence Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia

3. Tongji University, School of Electronic and Information Engineering, China

4. Harbin Institute of Technology, Shenzhen Key Laboratory of Visual Object Detection and Recognition, China

Abstract

Cross-modal hashing is widely used for efficient similarity search, improving data processing efficiency and reducing storage costs. Existing cross-modal hashing methods primarily focus on centralized training scenarios, where fixed-scale and fixed-category multi-modal data is collected beforehand. However, these methods face potential privacy risks and high communication costs during data transmission in real-world multimedia retrieval tasks. To tackle these challenges, in this paper, we propose an efficient Privacy-Enhanced Prototype-based Federated Cross-modal Hashing (PEPFCH) method. In PEPFCH, we integrate local and global prototypes to effectively capture the distinctive traits of individual clients while also harnessing the collective intelligence of the entire federated learning system. Moreover, to ensure the security of prototype information and prevent its disclosure during the aggregation process, we use a prototype encryption transmission mechanism that encrypts prototype information before transmission, making it difficult for attackers to gain access to sensitive data. Additionally, to facilitate personalized federated learning and alleviate parametric catastrophic forgetting, we establish image and text hyper-networks for each client and adopt a hyper-network extension strategy to selectively preserve and update previously acquired knowledge when learning new concepts or categories. Comprehensive experiments highlight the efficiency and superiority of our proposed method. To support research and accessibility, we have publicly released our source code at: https://github.com/vindahi/PEPFCH.
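To make the prototype workflow described in the abstract concrete, the following is a minimal sketch of prototype-based federated aggregation with masked uploads: each client computes class-wise prototypes from its local features and uploads only masked versions, and the server aggregates the uploads into global prototypes. All function names are assumptions for illustration, and the simple pairwise additive-masking scheme stands in for the paper's prototype encryption transmission mechanism; it is not the authors' actual implementation (see the released repository for that).

```python
# Illustrative sketch only: local prototype computation and masked aggregation
# in a federated setting. The additive-masking scheme is an assumed stand-in
# for the paper's prototype encryption mechanism, not its actual implementation.
import numpy as np

def local_prototypes(features, labels, num_classes):
    """Compute per-class prototypes as the mean feature of each class on one client."""
    dim = features.shape[1]
    protos = np.zeros((num_classes, dim))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def pairwise_masks(num_clients, shape, seed=0):
    """Generate pairwise random masks that cancel out when summed over all clients."""
    rng = np.random.default_rng(seed)
    masks = [np.zeros(shape) for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            r = rng.normal(size=shape)
            masks[i] += r  # client i adds the shared randomness
            masks[j] -= r  # client j subtracts it, so the sum over clients cancels
    return masks

def aggregate_global_prototypes(client_features, client_labels, num_classes):
    """Server-side aggregation of masked local prototypes into global prototypes."""
    num_clients = len(client_features)
    dim = client_features[0].shape[1]
    masks = pairwise_masks(num_clients, (num_classes, dim))
    # Each client uploads only masked prototypes; the server never sees raw ones.
    uploads = [
        local_prototypes(f, y, num_classes) + m
        for f, y, m in zip(client_features, client_labels, masks)
    ]
    # Masks cancel in the sum, so the server recovers the average prototype.
    return np.mean(uploads, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feats = [rng.normal(size=(50, 16)) for _ in range(3)]     # toy per-client features
    labels = [rng.integers(0, 4, size=50) for _ in range(3)]  # toy class labels
    print(aggregate_global_prototypes(feats, labels, num_classes=4).shape)  # (4, 16)
```

In this toy setup the global prototypes equal the plain average of the local ones, while no individual client's prototypes are ever exposed to the server, which mirrors the privacy goal stated in the abstract.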

Publisher

Association for Computing Machinery (ACM)
