Authors:
Haoyang Li, Javier Duarte, Avik Roy, Ruike Zhu, E. A. Huerta, Daniel Diaz, Philip Harris, Raghav Kansal, Daniel S. Katz, Ishaan H. Kavoori, Volodymyr V. Kindratenko, Farouk Mokhtar, Mark S. Neubauer, Sang Eon Park, Melissa Quinnan, Roger Rusack, Zhizhen Zhao
Abstract
The findable, accessible, interoperable, and reusable (FAIR) data principles provide a framework for examining, evaluating, and improving data sharing to advance scientific endeavors. There is an emerging trend to adapt these principles to machine learning models (algorithms that learn from data without being explicitly programmed) and, more generally, to AI models, owing to AI's rapidly growing impact on scientific and engineering fields. In this paper, we propose a practical definition of the FAIR principles for AI models and provide a template program for their adoption. We exemplify this strategy with an implementation from high-energy physics, in which a graph neural network is used to identify Higgs bosons decaying to two bottom quarks.