1. Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. 2020. The surprising power of graph neural networks with random node initialization. arXiv preprint arXiv:2010.01179 (2020).
2. Ryan Aponte, Ryan A Rossi, Shunan Guo, Jane Hoffswell, Nedim Lipka, Chang Xiao, Gromit Chan, Eunyee Koh, and Nesreen Ahmed. 2022. A Hypergraph Neural Network Framework for Learning Hyperedge-Dependent Node Embeddings. arXiv preprint arXiv:2212.14077 (2022).
3. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450 (2016).
4. Song Bai, Feihu Zhang, and Philip HS Torr. 2021. Hypergraph convolution and hypergraph attention. Pattern Recognition, Vol. 110 (2021), 107637.
5. Mikhail Belkin and Partha Niyogi. 2003. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, Vol. 15, 6 (2003), 1373--1396.