A framework for benchmarking land models
Authors:
Luo Y. Q., Randerson J., Abramowitz G., Bacour C., Blyth E., Carvalhais N., Ciais P., Dalmonech D., Fisher J., Fisher R., Friedlingstein P., Hibbard K., Hoffman F., Huntzinger D., Jones C. D., Koven C., Lawrence D., Li D. J., Mahecha M., Niu S. L., Norby R., Piao S. L., Qi X., Peylin P., Prentice I. C., Riley W., Reichstein M., Schwalm C., Wang Y. P., Xia J. Y., Zaehle S., Zhou X. H.
Abstract
Land models, which have been developed by the modeling community over the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure for measuring and evaluating model performance against a set of defined standards. This paper proposes a benchmarking framework for the evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references against which to test model performance; (3) metrics to measure and compare performance among models so as to identify model strengths and deficiencies; and (4) model improvement. Component (4) may or may not be part of a benchmark analysis, but it is the ultimate goal of modeling research. Land models are required to simulate the exchange of water, energy, carbon, and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics for measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system that combines data-model mismatches for various processes at different temporal and spatial scales. Benchmark analyses should identify clues to weak model performance that guide future improvement. Iteration between model evaluation and improvement via benchmarking should demonstrate the progress of land modeling and help establish confidence in land models' predictions of future states of ecosystems and climate.
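The metric component described above, normalizing per-variable data-model mismatches so that different quantities are comparable and then combining them into a single score, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's actual scoring system: the NRMSE normalization, the exp(-NRMSE) skill mapping, the variable names (GPP, LE), and the synthetic data are all hypothetical choices.

import numpy as np

def nrmse(model, obs):
    # Root-mean-square error normalized by the observed standard deviation,
    # so mismatches in different variables (e.g. carbon vs. energy fluxes)
    # are on a comparable, dimensionless scale.
    return np.sqrt(np.mean((model - obs) ** 2)) / np.std(obs)

def benchmark_score(pairs, weights=None):
    # pairs: {variable: (model_series, obs_series)}. Each mismatch is mapped
    # to a skill value in (0, 1] via exp(-NRMSE) (an illustrative choice) and
    # combined as a weighted mean, giving one score to compare across models.
    names = list(pairs)
    w = np.ones(len(names)) if weights is None else np.array([weights[n] for n in names])
    skills = np.array([np.exp(-nrmse(np.asarray(m), np.asarray(o)))
                       for m, o in (pairs[n] for n in names)])
    return float(np.sum(w * skills) / np.sum(w))

# Usage with synthetic "observations" standing in for real benchmarks:
rng = np.random.default_rng(0)
obs_gpp = rng.normal(5.0, 1.0, 120)             # hypothetical monthly GPP benchmark
mod_gpp = obs_gpp + rng.normal(0.0, 0.5, 120)   # model output with random error
obs_le = rng.normal(80.0, 15.0, 120)            # hypothetical latent-heat benchmark
mod_le = obs_le + rng.normal(5.0, 10.0, 120)    # model output with bias and noise
print(benchmark_score({"GPP": (mod_gpp, obs_gpp), "LE": (mod_le, obs_le)}))

An a priori acceptance threshold (e.g. flagging models that score below some cutoff) would then implement the first metric component, with the combined score serving the second.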
Funder
European Commission
Publisher
Copernicus GmbH
Cited by
8 articles.