Abstract
Artificial Neural Networks are powerful function approximators capable of modelling solutions to a wide variety of problems, both supervised and unsupervised. As their size and expressivity increase, so too does the variance of the model, yielding a nearly ubiquitous overfitting problem. Although a variety of model regularisation methods mitigate this, the common cure is to seek large amounts of training data, which are not necessarily easy to obtain, that sufficiently approximate the data distribution of the domain on which we wish to test. In contrast, logic programming methods such as Inductive Logic Programming (ILP) offer an extremely data-efficient process by which models can be trained to reason on symbolic domains. However, these methods cannot handle the variety of domains to which neural networks can be applied: they are not robust to noise in or mislabelling of inputs and, perhaps more importantly, cannot be applied to non-symbolic domains where the data is ambiguous, such as operating on raw pixels. In this paper, we propose a Differentiable Inductive Logic framework (∂ILP), which not only solves tasks for which traditional ILP systems are suited, but also shows a robustness to noise and error in the training data with which ILP cannot cope. Furthermore, because it is trained by backpropagation against a likelihood objective, it can be hybridised by connecting it with neural networks over ambiguous data, and so applied to domains that ILP cannot address, while providing data efficiency and generalisation beyond what neural networks on their own can achieve.
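The following is a minimal, illustrative sketch (in PyTorch) of the training mechanism the abstract alludes to: learning a differentiable choice among candidate clauses by backpropagation against a cross-entropy (likelihood) objective. It is not the ∂ILP implementation; the toy domain, the two candidate clauses, and the one-step inference are invented for illustration, whereas the actual system performs several steps of differentiable forward-chaining inference over template-generated clauses.

import torch

# Toy domain: four constants {a, b, c, d}; background predicates p/1 and q/1
# are given as 0/1 valuation vectors over the constants (hypothetical data).
p = torch.tensor([1., 1., 0., 0.])
q = torch.tensor([1., 0., 1., 0.])

# Labelled examples for the target predicate: here target(X) holds iff p(X).
labels = torch.tensor([1., 1., 0., 0.])

# Two candidate clauses for target/1 (invented for this sketch):
#   C0: target(X) :- p(X)      C1: target(X) :- q(X)
clause_consequences = [p, q]

# Differentiable rule selection: a softmax over per-clause weights, so the
# "program" is a probability distribution over the candidate clauses.
w = torch.nn.Parameter(torch.zeros(2))
opt = torch.optim.Adam([w], lr=0.1)

for step in range(200):
    probs = torch.softmax(w, dim=0)
    # Soft valuation of target = weighted mixture of each clause's
    # consequences (one inference step, for simplicity).
    val = probs[0] * clause_consequences[0] + probs[1] * clause_consequences[1]
    # Cross-entropy against the labelled examples = negative log-likelihood.
    loss = torch.nn.functional.binary_cross_entropy(val, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(w, dim=0))  # nearly all weight lands on C0

Under this objective, gradient descent drives the probability mass onto C0, recovering the rule target(X) :- p(X) from the labelled examples; noisy or mislabelled examples merely perturb the gradients rather than eliminating a hypothesis outright, which suggests why a differentiable formulation tolerates error where classical ILP search does not.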
Publisher
International Joint Conferences on Artificial Intelligence Organization