Author:
João Lobo, Marcelo Finger, Sandro Preto
Abstract
Explainability and formal verification of neural networks may be crucial when these models are used to perform critical tasks. Pursuing explainability properties, we present a method for approximating neural networks by piecewise linear functions, which is a step toward achieving a logical representation of the network. We also explain how such logical representations may be applied to the formal verification of properties of neural networks. Furthermore, we present the results of an empirical experiment in which the introduced methods are used in a case study.
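As an illustration of the general idea only (not the authors' specific method), the following minimal Python sketch approximates a small, hypothetical one-input neural network by a piecewise linear function, built by sampling the network at evenly spaced breakpoints and interpolating linearly between the samples; the network weights, the interval, and the number of pieces are assumptions made solely for this example.

```python
# Minimal sketch: piecewise linear approximation of a scalar-valued network.
# This is an illustration of the general idea, not the method of the paper.
import numpy as np

def f(x):
    # Hypothetical 1-input, 1-output network with a tanh hidden layer;
    # it stands in for any trained model one might wish to approximate.
    w1, b1 = np.array([2.0, -1.5, 0.7]), np.array([0.1, -0.3, 0.5])
    w2, b2 = np.array([0.8, -0.6, 1.2]), 0.05
    return float(np.tanh(w1 * x + b1) @ w2 + b2)

def piecewise_linear_approx(func, lo, hi, n_pieces):
    # Sample the network at n_pieces + 1 breakpoints and connect the
    # samples by straight segments; returns the approximating function.
    xs = np.linspace(lo, hi, n_pieces + 1)
    ys = np.array([func(x) for x in xs])
    return lambda x: float(np.interp(x, xs, ys))

if __name__ == "__main__":
    g = piecewise_linear_approx(f, -2.0, 2.0, 16)
    grid = np.linspace(-2.0, 2.0, 401)
    err = max(abs(f(x) - g(x)) for x in grid)
    print(f"max approximation error on [-2, 2]: {err:.4f}")
```

A piecewise linear surrogate of this kind is what makes a subsequent logical representation conceivable, since each linear piece admits an exact symbolic description.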
Publisher
Sociedade Brasileira de Computação