Authors:
Giudici Paolo, Raffinetti Emanuela, Riani Marco
Abstract
Artificial Intelligence relies on machine learning models which, while achieving high predictive accuracy, often lack explainability and robustness. This is a problem in regulated industries, where authorities charged with monitoring the risks arising from Artificial Intelligence applications may decline to validate such models. No measurement methodology is yet available to jointly assess the accuracy, explainability and robustness of machine learning models. We propose a methodology that fills this gap by extending the Forward Search approach, employed in robust statistical learning, to machine learning models. In doing so, we can evaluate, by means of interpretable statistical tests, whether a specific Artificial Intelligence application is accurate, explainable and robust, within a unified methodology. We apply our proposal to Bitcoin price prediction, comparing a linear regression model against a nonlinear neural network model.
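The Forward Search named in the abstract fits a model to a small, presumably outlier-free subset of the data and grows that subset one observation at a time, monitoring fit statistics along the way. The sketch below illustrates that generic loop under stated assumptions: scikit-learn estimators, squared-residual ordering, a simple full-data initialization, and a maximum-residual monitoring statistic. The function name `forward_search` and these choices are illustrative, not the authors' implementation.

```python
# Minimal sketch of a Forward Search loop, assuming a generic regression
# setting with scikit-learn models; illustrative only, not the paper's method.
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_search(X, y, model=None, m0=None):
    """Start from a small subset, refit the model, and grow the subset one
    observation at a time by smallest squared residual, recording a
    monitoring statistic at each step."""
    n = len(y)
    model = model or LinearRegression()
    m0 = m0 or max(X.shape[1] + 1, int(0.1 * n))  # initial subset size (assumption)

    # Initialize with the m0 observations best fitted on the full data
    # (a simple choice; robust initializations such as LMS are also used).
    model.fit(X, y)
    resid2 = (y - model.predict(X)) ** 2
    subset = np.argsort(resid2)[:m0]

    trajectory = []  # (subset size, monitoring statistic) at each step
    for m in range(m0, n + 1):
        model.fit(X[subset], y[subset])
        resid2 = (y - model.predict(X)) ** 2
        # Monitor, e.g., the largest squared residual inside the subset.
        trajectory.append((m, resid2[subset].max()))
        if m < n:
            # Grow the subset: the m+1 observations with smallest residuals.
            subset = np.argsort(resid2)[: m + 1]
    return trajectory
```

Swapping `model` for a nonlinear regressor such as `sklearn.neural_network.MLPRegressor` mirrors the kind of linear-versus-nonlinear comparison the abstract describes; a sharp jump in the monitored statistic late in the search flags influential or anomalous observations.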
Funder
Università degli Studi di Pavia
Publisher
Springer Science and Business Media LLC
Cited by
1 article.