Abstract
Graph Neural Networks (GNNs) have gained prominence in domains such as social network analysis, recommendation systems, and drug discovery, owing to their ability to model complex relationships in graph-structured data. Like other deep learning models, GNNs can exhibit incorrect behavior with severe consequences, so testing them is pivotal. However, labeling all test inputs for GNNs can be prohibitively costly and time-consuming, especially for large and complex graphs. Test selection has therefore emerged as a strategic approach to reduce labeling expenses: its objective is to select a subset of tests from the complete test set. Although various test selection techniques have been proposed for traditional deep neural networks (DNNs), adapting them to GNNs poses unique challenges because DNN and GNN test data differ: DNN test inputs are independent of each other, whereas GNN test inputs (nodes) exhibit intricate interdependencies. It therefore remains unclear whether DNN test selection approaches perform effectively on GNNs. To fill this gap, we conduct an empirical study that systematically evaluates the effectiveness of various test selection methods in the context of GNNs, focusing on three critical aspects: 1) misclassification detection: selecting test inputs that are more likely to be misclassified; 2) accuracy estimation: selecting a small set of tests to precisely estimate the accuracy of the whole test set; and 3) performance enhancement: selecting retraining inputs to improve GNN accuracy. Our study encompasses 7 graph datasets, covering both node classification and graph classification, and 8 GNN models, and evaluates 22 test selection approaches.
Our findings reveal that: 1) for GNN misclassification detection, confidence-based test selection methods, which perform well on DNNs, are not similarly effective; 2) for GNN accuracy estimation, clustering-based methods consistently outperform random selection, but only by a slight margin; 3) for selecting retraining inputs to improve GNN performance, test selection methods such as confidence-based and clustering-based approaches are only slightly effective; and 4) for the same performance-enhancement task, node importance-based test selection methods are unsuitable and in many cases perform even worse than random selection.
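To make "confidence-based test selection" concrete, the sketch below shows one common variant: DeepGini-style selection, which ranks test inputs by the Gini impurity of the model's predicted class probabilities and labels the least-confident inputs first. The function name and toy data are illustrative assumptions, not code from the study.

```python
import numpy as np

def select_by_confidence(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` test inputs the model is least
    confident about, ranked by Gini impurity of predicted probabilities."""
    # Gini impurity: 1 - sum(p_i^2); higher values mean a flatter,
    # less confident probability distribution over classes.
    gini = 1.0 - np.sum(probs ** 2, axis=1)
    # Pick the `budget` inputs with the highest impurity.
    return np.argsort(-gini)[:budget]

# Toy example: softmax outputs for 4 test inputs over 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # highly confident
    [0.40, 0.35, 0.25],   # uncertain
    [0.70, 0.20, 0.10],
    [0.34, 0.33, 0.33],   # most uncertain
])
print(select_by_confidence(probs, 2))  # -> [3 1]
```

On independent DNN inputs this ranking correlates well with misclassification; the study's point is that the same signal is weaker for GNN nodes, whose predictions depend on neighboring nodes.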
Funder
Fonds National de la Recherche Luxembourg
Publisher
Springer Science and Business Media LLC