Affiliations:
1. University of New South Wales, Australia
2. Iowa State University, USA
Abstract
Decentralized machine learning, such as federated learning (FL), is widely adopted in many application domains. In domains like recommendation systems in particular, sharing gradients instead of private data has recently attracted the research community's attention. Personalized travel-route recommendation uses users' location data to recommend optimal travel routes. Location data is extremely privacy-sensitive and carries a heightened risk of exposing behavioural patterns and demographic attributes. FL for route recommendation can mitigate the sharing of location data. However, this paper shows that an adversary can still recover, with high proximity accuracy, the user trajectories used to train the federated recommendation model. To this end, we propose a novel attack called DeepSneak, which uses the shared gradients obtained during global model training in FL to reconstruct private user trajectories. We formulate the attack as a regression problem and train a generative model by minimizing the distance between gradients. We validate DeepSneak on two real-world trajectory datasets. The results show that we can recover users' location trajectories with reasonable spatial and semantic accuracy.
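The core idea behind gradient-matching attacks of this kind — optimize a dummy input until the gradients it produces match the gradients shared by the client — can be sketched on a toy model. Everything below (the single linear layer, the adaptive descent loop, all names) is illustrative only and is not the paper's actual recommendation model, generative architecture, or attack implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one FL client step: a linear layer with bias and
# squared-error loss. This only illustrates the gradient-matching idea.
d_in, d_out = 3, 2
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)

def layer_grads(x, t):
    """Gradients of 0.5*||W@x + b - t||^2 w.r.t. W and b."""
    e = W @ x + b - t          # residual
    return np.outer(e, x), e   # (dL/dW, dL/db)

# Private sample (never shared) and the gradients the server observes.
x_true = rng.normal(size=d_in)
t_true = rng.normal(size=d_out)
gW, gb = layer_grads(x_true, t_true)

def match_loss(z):
    """Distance between the dummy sample's gradients and the observed ones."""
    rW, rb = layer_grads(z[:d_in], z[d_in:])
    return np.sum((rW - gW) ** 2) + np.sum((rb - gb) ** 2)

def num_grad(f, z, eps=1e-6):
    """Central-difference gradient of f at z (fine for this tiny problem)."""
    return np.array([(f(z + eps * e) - f(z - eps * e)) / (2 * eps)
                     for e in np.eye(z.size)])

def attack(z0, iters=20000):
    """Monotone gradient descent on the matching loss, adaptive step size."""
    z, lr = z0.copy(), 0.1
    for _ in range(iters):
        step = z - lr * num_grad(match_loss, z)
        if match_loss(step) < match_loss(z):
            z, lr = step, lr * 1.1
        else:
            lr *= 0.5
        if match_loss(z) < 1e-12:
            break
    return z

# A few random restarts; keep the dummy sample whose gradients match best.
best = min((attack(rng.normal(size=d_in + d_out)) for _ in range(3)),
           key=match_loss)
x_rec = best[:d_in]
print("reconstruction error:", np.max(np.abs(x_rec - x_true)))
```

Because the layer has a bias, the observed gradients pin down the input uniquely (the bias gradient equals the residual, and the weight gradient is the outer product of that residual with the input), so driving the matching loss to zero recovers the private sample. DeepSneak itself replaces this direct optimization with a trained generative model and targets trajectory data, but the objective — minimizing the distance between gradients — is the same.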
Publisher
Association for Computing Machinery (ACM)