Secure multi-party computation in deep learning: Enhancing privacy in distributed neural networks
Published: 2024
Issue: 2-A
Volume: 27
Pages: 249-259
ISSN: 0972-0529
Container-title: Journal of Discrete Mathematical Sciences and Cryptography
Short-container-title: JDMSC
Authors: Sagar P. Vidya, Ghanimi Hayder M. A., Prabhu L. Arokia Jesu, Raja L., Dadheech Pankaj, Sengan Sudhakar
Abstract
Ensuring data privacy while applying Deep Learning (DL) to distributed datasets is an essential task in the current era of critical data security. Traditional methods typically force a trade-off between data privacy and model accuracy, and in distributed data settings privacy is of paramount importance. The present research therefore proposes a novel DL framework based on Secure Multi-Party Computation (SMPC). Whereas conventional approaches frequently compel model accuracy and data confidentiality to compete, the proposed system uses the Paillier Homomorphic Encryption Scheme (PHES) to enable collaborative DL without exposing private information. By applying these cryptographic methods, this decentralized approach preserves data confidentiality without relying on a Trusted Authority (TA). A thorough evaluation on the CIFAR-10 and IMDB datasets demonstrates that the proposed system is simple and scalable and achieves accuracy on par with conventional methods. By presenting an approach that balances the competing demands of data security and computational performance, this work marks a substantial advance in privacy-preserving DL.
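The abstract only sketches how PHES supports collaboration without a Trusted Authority. As an illustration, not the paper's implementation, the minimal sketch below shows the additive homomorphism that typically underpins such schemes: parties encrypt their local gradient values under a shared Paillier public key, an aggregator sums the ciphertexts without decrypting them, and only the combined update is ever decrypted. It assumes the open-source python-paillier (phe) package; the party count, gradient values, and key length are illustrative placeholders.

# Illustrative sketch only: additive aggregation of per-party gradient values
# under Paillier homomorphic encryption, using the open-source `phe` package
# (python-paillier). This is not the paper's protocol; names and parameters
# below are assumptions for demonstration.
from phe import paillier

# Key generation. In the TA-free setting the paper targets, key material would
# be produced and shared by the parties themselves; a single keypair is used
# here only to keep the demo short.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical local gradient values held by three parties.
local_gradients = [0.12, -0.05, 0.33]

# Each party encrypts its own value; only ciphertexts leave the party.
encrypted = [public_key.encrypt(g) for g in local_gradients]

# The aggregator adds ciphertexts directly: E(a) + E(b) decrypts to a + b,
# so no individual contribution is revealed to the aggregator.
encrypted_sum = encrypted[0]
for c in encrypted[1:]:
    encrypted_sum = encrypted_sum + c

# Only the aggregate is decrypted, e.g. to produce an averaged model update.
aggregate = private_key.decrypt(encrypted_sum)
print("aggregated gradient:", aggregate)
print("averaged gradient:", aggregate / len(local_gradients))

Because Paillier is only additively homomorphic, schemes of this kind typically restrict the server-side computation to linear aggregation of encrypted updates, which is why they pair naturally with federated-style gradient averaging in distributed DL.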
Publisher
Taru Publications