Affiliation:
1. School of Computer Science and Engineering, Beihang University, Beijing, China
2. Beijing Advanced Innovation Center for Big Data and Brain Computing, Beijing, China
3. ARC LAB, Tencent, Shenzhen, China
4. China Academy of Industrial Internet, Beijing, China
5. National Computer Network Emergency Response Technical Team Coordination Center of China, Beijing, China
Abstract
Social networks collect enormous amounts of personal and behavioral user data, which could threaten users' privacy if published or shared directly. Privacy-preserving graph publishing (PPGP) makes user data available while protecting private information. To this end, PPGP commonly relies on anonymization methods such as perturbation and generalization. However, traditional anonymization methods struggle to balance high-level privacy with utility, are ineffective at defending against both various link and hybrid inference attacks, and are vulnerable to graph neural network (GNN)-based attacks. To solve these problems, we present a novel privacy-disentangled approach that separates private from non-private information for a better privacy-utility trade-off. Moreover, we propose a unified graph deep learning framework for PPGP, denoted privacy-disentangled variational information bottleneck (PDVIB). Using low-dimensional perturbations, the model generates an anonymized graph that defends against various inference attacks, including GNN-based attacks. In particular, the model accommodates various privacy settings by employing adjustable perturbations at the node level. On three real-world datasets, PDVIB is demonstrated to generate robust anonymized graphs that defend against various privacy inference attacks while maintaining the utility of non-private information.
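As a rough sketch of the idea behind such a framework (the notation here is illustrative and not taken from the paper itself), a privacy-oriented variational information bottleneck objective typically balances two mutual-information terms:

```latex
% G: input graph; Z: learned (anonymized) representation;
% S: private attributes to hide; Y: non-private information to preserve.
% The encoder p(Z \mid G) is trained to minimize information about S
% while a trade-off weight \beta > 0 rewards retaining information about Y.
\min_{p(Z \mid G)} \; I(Z; S) \;-\; \beta \, I(Z; Y)
```

Under this reading, the "adjustable perturbations at the node level" mentioned above would correspond to tuning the strength of the noise injected into Z per node, shifting individual nodes along the privacy-utility trade-off.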
Funder
National Natural Science Foundation of China
Subject
Computational Theory and Mathematics, Computer Networks and Communications, Computer Science Applications, Theoretical Computer Science, Software