Systematic and objective evaluation of Earth system models: PCMDI Metrics Package (PMP) version 3
Published: 2024-05-15
Journal: Geoscientific Model Development (Geosci. Model Dev.)
Volume: 17, Issue: 9, Pages: 3919–3948
ISSN: 1991-9603
Language: en
Authors:
Lee, Jiwoo; Gleckler, Peter J.; Ahn, Min-Seop; Ordonez, Ana; Ullrich, Paul A.; Sperber, Kenneth R.; Taylor, Karl E.; Planton, Yann Y.; Guilyardi, Eric; Durack, Paul; Bonfils, Celine; Zelinka, Mark D.; Chao, Li-Wei; Dong, Bo; Doutriaux, Charles; Zhang, Chengzhu; Vo, Tom; Boutte, Jason; Wehner, Michael F.; Pendergrass, Angeline G.; Kim, Daehyun; Xue, Zeyu; Wittenberg, Andrew T.; Krasting, John
Abstract
Systematic, routine, and comprehensive evaluation of Earth system models (ESMs) facilitates benchmarking improvement across model generations and identifying the strengths and weaknesses of different model configurations. By gauging the consistency between models and observations, this endeavor is becoming increasingly necessary to objectively synthesize the thousands of simulations contributed to the Coupled Model Intercomparison Project (CMIP) to date. The Program for Climate Model Diagnosis and Intercomparison (PCMDI) Metrics Package (PMP) is an open-source Python software package that provides quick-look objective comparisons of ESMs with one another and with observations. The comparisons include metrics of large- to global-scale climatologies, tropical inter-annual and intra-seasonal variability modes such as the El Niño–Southern Oscillation (ENSO) and Madden–Julian Oscillation (MJO), extratropical modes of variability, regional monsoons, cloud radiative feedbacks, and high-frequency characteristics of simulated precipitation, including its extremes. The PMP comparison results are produced using all model simulations contributed to CMIP6 and earlier CMIP phases. An important objective of the PMP is to document the performance of ESMs participating in the recent phases of CMIP, together with providing version-controlled information for all datasets, software packages, and analysis codes used in the evaluation process. Among other purposes, this also enables modeling groups to assess performance changes during the ESM development cycle in the context of the error distribution of the multi-model ensemble. Quantitative model evaluation provided by the PMP can assist modelers in their development priorities. In this paper, we provide an overview of the PMP, including its latest capabilities, and discuss its future direction.
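As the abstract notes, the PMP is an open-source Python package, and it is distributed through the conda-forge channel. A minimal installation sketch follows; the environment name `pmp_env` is illustrative, not prescribed by the paper:

```shell
# Create an isolated conda environment and install the PMP from conda-forge.
# "pmp_env" is an illustrative environment name.
conda create -n pmp_env -c conda-forge pcmdi_metrics
conda activate pmp_env
```

Installing into a dedicated environment keeps the PMP's scientific-stack dependencies separate from other Python installations.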
Publisher: Copernicus GmbH
Cited by: 3 articles.