Affiliation:
1. Department of Atmospheric and Oceanic Sciences, University of California, Los Angeles, Los Angeles, CA, USA
2. Laboratory for Atmospheric and Space Physics, University of Colorado Boulder, Boulder, CO, USA
3. Center for Space Physics, Boston University, Boston, MA, USA
Abstract
Many Machine Learning (ML) systems, especially deep neural networks, are fundamentally regarded as black boxes since it is difficult to fully grasp how they function once they have been trained. Here, we tackle the issue of the interpretability of a high‐accuracy ML model created to model the flux of Earth's radiation belt electrons. The Outer RadIation belt Electron Neural net (ORIENT) model uses only solar wind conditions and geomagnetic indices as input features. Using the Deep SHAPley additive explanations (DeepSHAP) method, for the first time, we show that the “black box” ORIENT model can be successfully explained. Two significant electron flux enhancement events observed by Van Allen Probes during the storm interval of 17–18 March 2013 and non‐storm interval of 19–20 September 2013 are investigated using the DeepSHAP method. The results show that the feature importance calculated from the purely data‐driven ORIENT model identifies physically meaningful behavior consistent with current physical understanding. This work not only demonstrates that the physics of the radiation belt was captured in the training of our previous model, but also that this method can be applied generally to other similar models to better explain their results and to potentially discover new physical mechanisms.
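DeepSHAP approximates Shapley-value feature attributions for deep networks. As a minimal, self-contained sketch of the underlying idea (not the DeepSHAP algorithm itself), the following computes exact Shapley values for a toy linear "model" of electron flux with hypothetical driver names and weights; the feature names (`v_sw`, `dst`, `al`) and values are illustrative stand-ins, not taken from the ORIENT model.

```python
from itertools import combinations
from math import factorial

# Toy linear "model": hypothetical weights for three illustrative drivers
# (stand-ins for solar wind speed and geomagnetic indices).
WEIGHTS = {"v_sw": 0.5, "dst": -0.3, "al": 0.2}

def model(features):
    return sum(WEIGHTS[name] * value for name, value in features.items())

def shapley_values(x, baseline):
    """Exact Shapley attributions: average each feature's marginal
    contribution over all coalitions of the remaining features."""
    names = list(x)
    n = len(names)
    phi = {}
    for i in names:
        others = [f for f in names if f != i]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                # Features in `subset` take observed values; the rest
                # stay at the baseline.
                with_i = {f: x[f] if (f in subset or f == i) else baseline[f]
                          for f in names}
                without_i = {f: x[f] if f in subset else baseline[f]
                             for f in names}
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

x = {"v_sw": 600.0, "dst": -50.0, "al": -300.0}       # observed (illustrative)
base = {"v_sw": 400.0, "dst": 0.0, "al": 0.0}         # baseline (illustrative)
phi = shapley_values(x, base)
# Local accuracy: attributions sum to model(x) - model(baseline).
```

For a linear model, each attribution reduces to the weight times the feature's deviation from the baseline; DeepSHAP generalizes this kind of attribution to deep networks by propagating contributions layer by layer.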
Funder
Defense Advanced Research Projects Agency
Publisher
American Geophysical Union (AGU)
Cited by
13 articles.