Abstract
Understanding uncertainty is critical, especially when data are sparse and variations are large. Bayesian neural networks offer a powerful strategy to build predictable models from sparse data, and inherently quantify both aleatoric uncertainties of the data and epistemic uncertainties of the model. Yet, classical Bayesian neural networks ignore the fundamental laws of physics, they are non-interpretable, and their parameters have no physical meaning. Here we integrate concepts of Bayesian learning and constitutive neural networks to discover interpretable models, parameters, and uncertainties that best explain soft matter systems. Instead of training an individual constitutive neural network and learning point values of the network weights, we train an ensemble of networks and learn probability distributions of the weights, along with their means, standard deviations, and credible intervals. We use variational Bayesian inference and adopt an efficient backpropagation-compatible algorithm that approximates the true probability distributions by simpler distributions and minimizes their divergence through variational learning. When trained on synthetic data, our Bayesian constitutive neural network successfully rediscovers the initial model, even in the presence of noise, and robustly discovers uncertainties, even from incomplete data. When trained on real data from healthy and aneurysmal human arteries, our network discovers a model with more stretch stiffening, more anisotropy, and more uncertainty for diseased than for healthy arteries. Our results demonstrate that Bayesian constitutive neural networks can successfully discriminate between healthy and diseased arteries, robustly discover interpretable models and parameters for both, and efficiently quantify uncertainties in model discovery. We anticipate that our approach will generalize to other soft biomedical systems for which real-world data are rare and inter-personal variations are large. Ultimately, our calculated uncertainties will help enhance model robustness, promote personalized predictions, enable informed decision-making, and build confidence in automated model discovery and simulation. Our source code, data, and examples are available at https://github.com/LivingMatterLab/CANNs.
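As a rough illustration of the variational scheme sketched above (approximating the weight posterior by a simpler distribution and minimizing its divergence to a prior with a backpropagation-compatible loss), the following minimal Python/PyTorch sketch fits a single Gaussian-distributed weight to a toy one-dimensional problem via the reparameterization trick. The names (`mu`, `rho`, the linear toy model) and the standard-normal prior are illustrative assumptions for this sketch only; the paper's actual implementation in the CANNs repository may differ in framework and details.

```python
# Minimal sketch of variational Bayesian weight learning ("Bayes by Backprop" style).
# Assumptions: a single weight with posterior q(w) = N(mu, sigma^2), a standard-normal
# prior, and a toy linear model y = w * x. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.distributions as dist

class BayesianWeight(nn.Module):
    """One network weight modeled as a Normal posterior instead of a point value."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(1))            # posterior mean
        self.rho = nn.Parameter(torch.full((1,), -3.0))   # sigma = softplus(rho)

    def sample(self):
        sigma = torch.nn.functional.softplus(self.rho)
        eps = torch.randn_like(sigma)                     # reparameterization trick
        return self.mu + sigma * eps, dist.Normal(self.mu, sigma)

prior = dist.Normal(0.0, 1.0)                             # assumed prior over the weight
weight = BayesianWeight()
opt = torch.optim.Adam(weight.parameters(), lr=1e-2)

# Toy synthetic data: y = 2*x plus noise (aleatoric scatter).
x = torch.linspace(0.0, 1.0, 20)
y = 2.0 * x + 0.05 * torch.randn_like(x)

for step in range(2000):
    w, q = weight.sample()
    nll = 0.5 * ((y - w * x) ** 2).sum() / 0.05 ** 2      # data misfit term
    kl = dist.kl_divergence(q, prior).sum()               # divergence of posterior to prior
    loss = nll + kl                                       # negative evidence lower bound
    opt.zero_grad()
    loss.backward()
    opt.step()

sigma = torch.nn.functional.softplus(weight.rho)
print(f"posterior weight: {weight.mu.item():.3f} +/- {sigma.item():.3f}")
```

In this sketch, the learned standard deviation plays the role of the epistemic uncertainty on the weight, while the fixed noise level in the data term represents the aleatoric part; in the paper this idea is applied to every weight of a constitutive neural network rather than to a single toy parameter.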
Publisher: Cold Spring Harbor Laboratory