Affiliation:
1. Northwestern University, USA
2. Texas A&M University, USA
3. University of Virginia, USA
4. Arizona State University, USA
Abstract
Graph machine learning (Graph ML) models typically require abundant labeled instances to provide sufficient supervision signals, which is commonly infeasible in real-world scenarios since labeled data for newly emerged concepts (e.g., new categorizations of nodes) on graphs is rather limited. To learn efficiently from a small amount of data on graphs, meta-learning has been investigated in graph ML. By transferring knowledge learned from previous experiences to new tasks, graph meta-learning approaches have demonstrated promising performance on few-shot graph learning problems. However, most existing efforts predominantly assume that all the data from the seen classes is gold-labeled, and those methods may lose their efficacy when the seen data is weakly labeled with severe label noise. As such, we investigate a novel problem of weakly-supervised graph meta-learning for improving model robustness in terms of knowledge transfer. To achieve this goal, we propose a new graph meta-learning framework, Graph Interpolation Networks (Meta-GIN). Based on a new robustness-enhanced episodic training paradigm, Meta-GIN is meta-learned to interpolate node representations from weakly-labeled data and extract highly transferable meta-knowledge, which enables the model to quickly adapt to unseen tasks with few labeled instances. Extensive experiments demonstrate the superiority of Meta-GIN over existing graph meta-learning methods on the task of weakly-supervised few-shot node classification.
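The episodic training paradigm the abstract refers to builds each training step around an N-way K-shot task sampled from the seen classes. A minimal sketch of such episode construction is shown below; the function name, tensor-free data layout, and parameter defaults are illustrative assumptions, not details from the paper:

```python
import random

def sample_episode(labels, n_way=3, k_shot=5, q_query=10, seed=None):
    """Sample one N-way K-shot episode from labeled nodes.

    labels: dict mapping node id -> class label (the 'seen' classes,
            possibly weakly labeled).
    Returns (support, query): disjoint lists of (node, class) pairs.
    """
    rng = random.Random(seed)
    # Group labeled nodes by class.
    by_class = {}
    for node, c in labels.items():
        by_class.setdefault(c, []).append(node)
    # Keep only classes with enough labeled nodes for support + query.
    eligible = [c for c, nodes in by_class.items()
                if len(nodes) >= k_shot + q_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for c in classes:
        nodes = rng.sample(by_class[c], k_shot + q_query)
        support += [(n, c) for n in nodes[:k_shot]]   # few-shot labeled set
        query += [(n, c) for n in nodes[k_shot:]]     # evaluation set
    return support, query
```

A meta-learner is then trained to classify the query nodes given only the support set, episode after episode, so that it can adapt to unseen classes from few labels at test time.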
Publisher
Association for Computing Machinery (ACM)