Affiliation:
1. Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104
2. Department of Computer Science, University of California, Santa Cruz, CA 95064
Abstract
Graph representation learning is a fundamental technique for machine learning (ML) on complex networks. Given an input network, these methods represent each vertex by a low-dimensional real-valued vector. These vectors can be used for a multitude of downstream ML tasks. We study one of the most important such tasks, link prediction. Much of the recent literature on graph representation learning reports remarkable success at link prediction. On closer investigation, we observe that performance is measured by the AUC (area under the curve), which suffers from biases. Since the ground truth in link prediction is sparse, we design a vertex-centric measure of performance, the VCMPR@k plots. Under this measure, link predictors built on graph representations score poorly: despite extremely high AUC scores, they miss much of the ground truth. We identify a mathematical connection between this performance, the sparsity of the ground truth, and the low-dimensional geometry of the node embeddings. Within a formal theoretical framework, we prove that low-dimensional vectors cannot capture sparse ground truth using dot-product similarities (the standard practice in the literature). Our results call into question existing results on link prediction and pose a significant scientific challenge for graph representation learning. The VCMPR plots identify specific scientific challenges for link prediction with low-dimensional node embeddings.
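The abstract does not give the formal definition of VCMPR@k, but the idea it describes, scoring candidate links by embedding dot products and then measuring, per vertex, how many of the top-k ranked candidates are true neighbors, can be sketched as follows. This is an illustrative assumption of the metric's shape, not the paper's exact definition; the function name and inputs are our own:

```python
import numpy as np

def vertex_precision_at_k(embeddings, true_neighbors, k):
    """For each vertex, rank all other vertices by dot-product similarity
    and return the fraction of the top-k candidates that are true neighbors.

    embeddings:     (n, d) array of node embedding vectors
    true_neighbors: dict mapping vertex index -> set of true neighbor indices
    k:              number of top-ranked candidates to inspect per vertex
    """
    n = embeddings.shape[0]
    scores = embeddings @ embeddings.T    # pairwise dot-product similarities
    np.fill_diagonal(scores, -np.inf)     # exclude self-links from the ranking
    precisions = np.empty(n)
    for v in range(n):
        top_k = np.argsort(scores[v])[::-1][:k]   # k highest-scoring candidates
        hits = len(set(top_k) & true_neighbors[v])
        precisions[v] = hits / k
    return precisions
```

Plotting the distribution of these per-vertex scores (rather than a single aggregate like AUC) is what makes the measure sensitive to vertices whose sparse ground-truth neighbors are missed by the dot-product ranking.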
Funder
NSF | MPS | Division of Mathematical Sciences
NSF | CISE | Division of Computing and Communication Foundations
DOD | USA | AFC | CCDC | Army Research Office
Publisher
Proceedings of the National Academy of Sciences
Cited by 1 article.