Artificial intelligence for materials research at extremes
-
Published: 2022-11
Issue: 11
Volume: 47
Pages: 1154-1164
-
ISSN: 0883-7694
-
Container-title: MRS Bulletin
-
Language: en
-
Short-container-title: MRS Bulletin
Author:
Maruyama B., Hattrick-Simpers J., Musinski W., Graham-Brady L., Li K., Hollenbach J., Singh A., Taheri M. L.
Abstract
Materials development is slow and expensive, taking decades from inception to fielding. For materials research at extremes, the situation is even more demanding, as the desired property combinations such as strength and oxidation resistance can have complex interactions. Here, we explore the role of AI and autonomous experimentation (AE) in the process of understanding and developing materials for extreme and coupled environments. AI is important in understanding materials under extremes due to the highly demanding and unique cases these environments represent. Materials are pushed to their limits in ways that, for example, equilibrium phase diagrams cannot describe. Often, multiple physical phenomena compete to determine the material response. Further, validation is often difficult or impossible. AI can help bridge these gaps, providing heuristic but valuable links between materials properties and performance under extreme conditions. We explore the potential advantages of AE along with decision strategies. In particular, we consider the problem of deciding between low-fidelity, inexpensive experiments and high-fidelity, expensive experiments. The cost of experiments is described in terms of the speed and throughput of automated experiments, contrasted with the human resources needed to execute manual experiments. We also consider the cost and benefits of modeling and simulation to further materials understanding, along with characterization of materials under extreme environments in the AE loop.
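The abstract raises the decision problem of choosing between low-fidelity, inexpensive experiments and high-fidelity, expensive ones. The sketch below is a minimal, hypothetical illustration of one way such a cost-aware choice could be framed with a Gaussian-process surrogate; the toy objective, the relative cost values, the assumed information yield of the low-fidelity run, and names such as low_fidelity and gain_per_cost are assumptions for illustration, not the authors' method.

```python
# Hypothetical cost-aware choice between a cheap low-fidelity experiment and an
# expensive high-fidelity one. All names, costs, and the toy objective are
# assumptions made for this sketch.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def high_fidelity(x):
    # "Ground truth" response, e.g., a validated high-cost test
    return np.sin(3 * x) + 0.1 * x

def low_fidelity(x):
    # Cheap proxy: biased and noisier; shown for contrast, not queried below
    return high_fidelity(x) + 0.3 * np.cos(5 * x) + rng.normal(0, 0.1, np.shape(x))

COST = {"low": 1.0, "high": 10.0}  # assumed relative costs (automated vs. manual)

# Seed the surrogate with a few high-fidelity runs
X = rng.uniform(0, 2, size=(4, 1))
y = high_fidelity(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3).fit(X, y)

# Surrogate uncertainty over candidate conditions
candidates = np.linspace(0, 2, 200).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)

# Cost-weighted acquisition: information per unit cost for each fidelity.
# Crude assumption: a low-fidelity run recovers ~50% of the information.
gain_per_cost = {
    "low": 0.5 * std / COST["low"],
    "high": 1.0 * std / COST["high"],
}
best = {f: candidates[np.argmax(g)].item() for f, g in gain_per_cost.items()}
fidelity = max(gain_per_cost, key=lambda f: gain_per_cost[f].max())
print(f"Next experiment: {fidelity}-fidelity at x = {best[fidelity]:.2f}")
```

Under these assumptions, the low-fidelity run is usually preferred until its cheap information is exhausted, after which the acquisition shifts toward the expensive, trusted measurement; the trade-off is governed entirely by the assumed cost ratio and information-yield factor.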
Graphical abstract
AI sequential decision-making methods for materials research: active learning focuses on exploration by sampling uncertain regions, whereas Bayesian optimization, bandit optimization, and reinforcement learning (RL) trade off exploration of uncertain regions against exploitation of the current optimum. Bayesian and bandit optimization seek the best value of the objective at each step or cumulatively over all steps, respectively, whereas RL considers the cumulative value of the labeling function, which can itself change depending on the state of the system (blue, orange, or green).
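The caption names several selection rules; the short sketch below contrasts the first two on a toy one-dimensional objective, assuming a scikit-learn Gaussian-process surrogate. The objective function, the kappa weight, and all data are assumptions for illustration only.

```python
# Hypothetical contrast of two selection rules from the graphical abstract:
# pure exploration (active learning / uncertainty sampling) versus an
# exploration-exploitation trade-off (UCB-style Bayesian optimization).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
f = lambda x: np.exp(-(x - 1.2) ** 2) + 0.1 * np.sin(8 * x)  # toy property

# A handful of measured compositions/conditions
X = rng.uniform(0, 2, size=(5, 1))
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4).fit(X, y)

grid = np.linspace(0, 2, 400).reshape(-1, 1)
mu, sigma = gp.predict(grid, return_std=True)

x_active = grid[np.argmax(sigma)].item()             # query where we know least
kappa = 2.0                                           # assumed trade-off weight
x_ucb = grid[np.argmax(mu + kappa * sigma)].item()    # query where value may be high

print(f"Active learning would query x = {x_active:.2f}")
print(f"Bayesian optimization (UCB) would query x = {x_ucb:.2f}")
```

The two rules typically pick different conditions: uncertainty sampling maps the whole space, while the UCB rule concentrates queries near promising regions as the surrogate mean sharpens.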
Funder
UES, Inc.; U.S. Department of Energy; Office of Naval Research; Air Force Research Laboratory
Publisher
Springer Science and Business Media LLC
Subject
Physical and Theoretical Chemistry, Condensed Matter Physics, General Materials Science
Cited by
5 articles.