Authors:
Mann, John; Meshkin, Hamed; Zirkle, Joel; Han, Xiaomei; Thrasher, Bradlee; Chaturbedi, Anik; Arabidarrehdor, Ghazal; Li, Zhihua
Abstract
Deep learning neural networks are often described as black boxes: because their internal mechanisms are opaque, it is difficult to trace model outputs back to model inputs. This is true even for neural networks designed to emulate mechanistic models, which simply learn a mapping between the mechanistic model's inputs and outputs while ignoring the underlying processes. Using a mechanistic model of the pharmacological interaction between opioids and naloxone as a proof-of-concept example, we demonstrated that reorganizing the neural network's layers to mimic the structure of the mechanistic model yields better training rates and prediction accuracy than previously proposed black-box neural networks, while preserving the interpretability of the mechanistic simulations. Our framework can be used to emulate mechanistic models across a large parameter space and offers an example of the utility of making deep learning networks more interpretable.
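The central idea, rearranging a network's layers so they mirror the components of the mechanistic model rather than using one fully connected black box, can be illustrated with a minimal PyTorch sketch. The decomposition below (separate pharmacokinetic subnetworks for the opioid and naloxone feeding a shared receptor-interaction module) and all module names and sizes are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class MechanismInformedNet(nn.Module):
    """Emulator whose layers mirror a hypothetical mechanistic structure:
    separate subnetworks for opioid and naloxone kinetics feed a shared
    receptor-interaction module. Names and sizes are illustrative."""

    def __init__(self, n_params_opioid=4, n_params_naloxone=4, hidden=32):
        super().__init__()
        # Subnetwork emulating the opioid pharmacokinetic component
        self.opioid_pk = nn.Sequential(
            nn.Linear(n_params_opioid, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # Subnetwork emulating the naloxone pharmacokinetic component
        self.naloxone_pk = nn.Sequential(
            nn.Linear(n_params_naloxone, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # Interaction module emulating competitive receptor binding: it
        # sees only the two PK representations, mimicking the fact that
        # the two drugs interact through a shared receptor pool.
        self.receptor = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),  # e.g., predicted receptor occupancy
        )

    def forward(self, opioid_params, naloxone_params):
        o = self.opioid_pk(opioid_params)
        n = self.naloxone_pk(naloxone_params)
        return self.receptor(torch.cat([o, n], dim=-1))

# Usage: each intermediate representation can be inspected on its own,
# which is what makes the structured emulator more interpretable than
# a single fully connected black-box network.
model = MechanismInformedNet()
out = model(torch.rand(8, 4), torch.rand(8, 4))
print(out.shape)  # torch.Size([8, 1])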
Publisher:
Springer Science and Business Media LLC