INN: An Interpretable Neural Network for AI Incubation in Manufacturing

Authors:

Xiaoyu Chen (1), Yingyan Zeng (2), Sungku Kang (3), Ran Jin (2)

Affiliations:

1. University of Louisville, Louisville, KY

2. Virginia Tech, Blacksburg, VA

3. Northeastern University, Boston, MA

Abstract

Both artificial intelligence (AI) and domain knowledge from human experts play an important role in manufacturing decision making. Smart manufacturing emphasizes fully automated, data-driven decision making; however, the AI incubation process involves human experts who enhance AI systems by integrating domain knowledge into modeling, data collection and annotation, and feature extraction. Such an AI incubation process not only enhances domain knowledge discovery but also improves the interpretability and trustworthiness of AI methods. In this article, we focus on transferring knowledge from human experts to a supervised learning problem by learning domain knowledge as interpretable features and rules, which can be used to construct rule-based systems that support manufacturing decision making, such as process modeling and quality inspection. Although many advanced statistical and machine learning methods have shown promising modeling accuracy and efficiency, rule-based systems are still highly preferred and widely adopted because human experts can comprehend them. However, most existing rule-based systems are constructed from deterministic human-crafted rules whose parameters, such as the thresholds of decision rules, are suboptimal. Machine learning methods, such as tree models or neural networks, can learn decision-rule-based structures from data, but without much interpretability or agreement with domain knowledge. Therefore, traditional machine learning models cannot directly improve human experts' domain knowledge by learning from data. In this research, we propose an interpretable neural network (INN) model with a center-adjustable sigmoid activation function to efficiently optimize rule-based systems. Using the rule-based system derived from domain knowledge to regulate the INN architecture not only improves prediction accuracy through optimized parameters but also preserves interpretability by adopting the interpretable rule-based structure from domain knowledge. The proposed INN is effective for supervised learning problems when rule-based systems are available. The merits of the INN model are demonstrated via a simulation study and a real case study on quality modeling of a semiconductor manufacturing process. The source code of this work is hosted at https://github.com/XiaoyuChenUofL/Interpretable-Neural-Network.
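
As a rough illustration of the center-adjustable sigmoid idea described in the abstract, the sketch below (a minimal PyTorch example, not the authors' released code; the class and parameter names CenterAdjustableSigmoid, init_center, and slope are illustrative assumptions) encodes a threshold rule such as "feature > c" as a soft, differentiable unit whose threshold c can be optimized by gradient descent while the rule structure stays interpretable.

```python
# Minimal sketch (assumed, not the authors' implementation): a center-adjustable
# sigmoid unit that soft-encodes a threshold rule "x > center" so the threshold
# can be learned from data by gradient descent.
import torch
import torch.nn as nn

class CenterAdjustableSigmoid(nn.Module):
    """Soft indicator for a rule 'feature > center', with a learnable center."""
    def __init__(self, init_center: float = 0.0, slope: float = 10.0):
        super().__init__()
        self.center = nn.Parameter(torch.tensor(init_center))  # rule threshold
        self.slope = slope  # fixed steepness; larger -> closer to a hard rule

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # sigmoid(slope * (x - center)) is ~1 when x > center and ~0 otherwise
        return torch.sigmoid(self.slope * (x - self.center))

# Toy usage: a human-crafted rule "IF x1 > c1 AND x2 > c2 THEN high quality"
# can be composed from two such units; a product approximates the AND.
rule1, rule2 = CenterAdjustableSigmoid(0.5), CenterAdjustableSigmoid(-0.2)
x = torch.randn(8, 2)
firing = rule1(x[:, 0]) * rule2(x[:, 1])  # soft truth value of the rule
print(firing.shape)  # torch.Size([8])
```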

Publisher

Association for Computing Machinery (ACM)

Subject

Artificial Intelligence, Theoretical Computer Science

Cited by 4 articles.

1. Ensemble Active Learning by Contextual Bandits for AI Incubation in Manufacturing;ACM Transactions on Intelligent Systems and Technology;2023-12-19

2. Advancing Trustworthy Knowledge Engineering through User-Centered AI-based Systems: A Systematic Review;2023 IEEE Sixth International Conference on Artificial Intelligence and Knowledge Engineering (AIKE);2023-09-25

3. Verification of generality from weight analysis of Integration Neural Network approximators;Quarterly Journal of the Japan Welding Society;2023

4. Task-Driven Privacy-Preserving Data-Sharing Framework for the Industrial Internet;2022 IEEE International Conference on Big Data (Big Data);2022-12-17
