Affiliation:
1. Czech Technical University
Abstract
Lifted Relational Neural Networks (LRNNs) were introduced in 2015 [1] as a framework combining logic programming with neural networks for efficient learning of latent relational structures, such as various subgraph patterns in molecules. In this chapter, we briefly re-introduce the framework and explain its current relevance in the context of contemporary Graph Neural Networks (GNNs). In particular, we detail how the declarative nature of differentiable logic programming in LRNNs can be used to elegantly capture various GNN variants and to generalize toward novel, even more expressive, deep relational learning concepts. Additionally, we briefly demonstrate the practical use and computational performance of the framework.