Abstract
In the quest to improve the productivity and efficiency of manufacturing processes, Artificial Intelligence (AI) is being used extensively for response prediction, model dimensionality reduction, process optimization, and monitoring. Despite their superior accuracy, AI predictions are often unintelligible to end users and stakeholders because of the models' opaqueness. Building interpretable and inclusive machine learning (ML) models is therefore a vital part of the smart manufacturing paradigm, establishing traceability and repeatability. This study addresses this fundamental limitation of AI-driven manufacturing processes by introducing a novel Explainable AI (XAI) approach to develop interpretable process and product fingerprints. Explainability is implemented in two stages: by developing interpretable representations of the fingerprints, and through post-hoc explanations. In addition, for the first time, the concept of process fingerprints is extended to develop an interpretable probabilistic model of bottleneck events during manufacturing processes. The approach is demonstrated on two datasets: nanosecond pulsed laser ablation for producing superhydrophobic surfaces, and real-time wire EDM monitoring during the machining of Inconel 718. Fingerprint identification is performed using a global optimization tool for Lipschitz functions (MaxLIPO), and a stacked ensemble model is used for response prediction. The proposed interpretable fingerprint approach is robust to changes in process and handles both continuous and categorical responses. Implementing XAI not only provided useful insights into the process physics but also revealed the decision-making logic behind local predictions.
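The pipeline outlined in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: it assumes dlib's find_max_global (its MaxLIPO+TR solver) for the global search, scikit-learn's StackingRegressor for the stacked ensemble, and uses synthetic data with placeholder process parameters and a hypothetical weighted-feature fingerprint objective.

```python
# Minimal sketch (assumptions noted above): a stacked ensemble for response
# prediction plus a MaxLIPO-style global search over candidate fingerprint
# weights. Data, feature names, and the objective are illustrative only.
import numpy as np
import dlib
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))  # placeholder process parameters (e.g. power, speed, spacing)
y = X @ np.array([0.6, 0.3, 0.1]) + 0.05 * rng.normal(size=200)  # synthetic response

# Stacked ensemble: tree-based base learners with a linear meta-learner
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=RidgeCV(),
)

def fingerprint_score(w1, w2, w3):
    """Cross-validated R^2 of the ensemble when a weighted feature combination
    (a stand-in for a candidate process fingerprint) is appended to the inputs."""
    fp = (X * np.array([w1, w2, w3])).sum(axis=1, keepdims=True)
    return cross_val_score(stack, np.hstack([X, fp]), y, cv=3, scoring="r2").mean()

# MaxLIPO+TR global search over the fingerprint weights
best_w, best_r2 = dlib.find_max_global(fingerprint_score,
                                        [0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 20)
print("best weights:", best_w, "CV R^2:", round(best_r2, 3))
```

Post-hoc explanations of the fitted ensemble (e.g. with a SHAP-style attribution tool) would then be applied on top of this sketch to expose the local decision logic the abstract refers to.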
Funder
Engineering and Physical Sciences Research Council
Publisher
Springer Science and Business Media LLC
Cited by
1 article.