Affiliation:
1. Department of Computer Science, Georgia State University, Atlanta, Georgia, USA
2. Department of Information Technology, Kennesaw State University, Marietta, Georgia, USA
3. Faculty of Artificial Intelligence and Data Engineering, Sangmyung University, Seoul, South Korea
4. CTO of LSWare Inc., Seoul, South Korea
Abstract
Deep learning-based models have become ubiquitous across a wide range of applications, including computer vision, natural language processing, and robotics. Despite their efficacy, deep neural network (DNN) models face a significant risk of copyright leakage, owing to the inherent vulnerability of the exposed model architecture and the communication burden of publishing large models. Safeguarding the intellectual property of DNN models while reducing communication time during model publishing remains an open challenge. To this end, this paper introduces a novel approach that uses knowledge distillation to train a surrogate model to stand in for the original DNN model. Specifically, a knowledge distillation generative adversarial network (KDGAN) model is proposed to train a student model that achieves remarkable performance while safeguarding the copyright integrity of the original large teacher model and improving communication efficiency during model publishing. Comprehensive experiments demonstrate the efficacy of model copyright protection, communication-efficient model publishing, and the superiority of the proposed KDGAN model over other copyright protection mechanisms.
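The abstract does not specify the KDGAN training objective, but the knowledge distillation component it builds on is conventionally a temperature-softened divergence between teacher and student outputs (Hinton et al.). As a hedged sketch of that generic distillation loss only, not of the paper's actual KDGAN formulation, the function names, the temperature value, and the NumPy implementation below are all illustrative assumptions:

```python
import numpy as np

def softened_probs(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature yields softer targets.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradient magnitudes stay comparable across temperatures
    # (standard practice; whether KDGAN uses this exact form is an assumption).
    p = softened_probs(teacher_logits, temperature)
    q = softened_probs(student_logits, temperature)
    return float(temperature ** 2 * np.sum(p * np.log(p / q)))
```

When the student reproduces the teacher's logits exactly, the loss is zero; the student is trained to minimise this divergence (plus, in a GAN setup, an adversarial term) so that only the compact surrogate, not the copyrighted teacher, is published.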
Publisher
Institution of Engineering and Technology (IET)