Authors:
Mavromatis Costas, Ioannidis Vassilis N., Wang Shen, Zheng Da, Adeshina Soji, Ma Jun, Zhao Han, Faloutsos Christos, Karypis George
Publisher:
Springer Nature Switzerland
References: 45 articles.
1. Ando, R., Zhang, T.: Learning on graph with Laplacian regularization. In: NIPS (2006)
2. Beltagy, I., Lo, K., Cohan, A.: SciBERT: a pretrained language model for scientific text. In: EMNLP-IJCNLP (2019)
3. Chien, E., et al.: Node feature extraction by self-supervised multi-scale neighborhood prediction. In: ICLR (2022)
4. Deng, X., Zhang, Z.: Graph-free knowledge distillation for graph neural networks. arXiv (2021)
5. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT (2019)
Cited by: 5 articles.
1. Graph Intelligence with Large Language Models and Prompt Learning;Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining;2024-08-24
2. Can GNN be Good Adapter for LLMs?;Proceedings of the ACM Web Conference 2024;2024-05-13
3. LIGHT: Language-Infused Graph Representation Learning Networks;2024 5th International Conference on Computer Engineering and Application (ICCEA);2024-04-12
4. SemPool: Simple, Robust, and Interpretable KG Pooling for Enhancing Language Models;Lecture Notes in Computer Science;2024
5. Graph Coordinates and Conventional Neural Networks - An Alternative for Graph Neural Networks;2023 IEEE International Conference on Big Data (BigData);2023-12-15