A framework for benchmarking land models
Published: 2012-10-09
Volume: 9
Issue: 10
Pages: 3857–3874
ISSN: 1726-4189
Container-title: Biogeosciences
Language: en
Authors:
Luo Y. Q.,Randerson J. T.,Abramowitz G.,Bacour C.,Blyth E.,Carvalhais N.,Ciais P.,Dalmonech D.,Fisher J. B.,Fisher R.,Friedlingstein P.,Hibbard K.,Hoffman F.,Huntzinger D.,Jones C. D.,Koven C.,Lawrence D.,Li D. J.,Mahecha M.,Niu S. L.,Norby R.,Piao S. L.,Qi X.,Peylin P.,Prentice I. C.,Riley W.,Reichstein M.,Schwalm C.,Wang Y. P.,Xia J. Y.,Zaehle S.,Zhou X. H.
Abstract
Land models, which the modeling community has developed over the past few decades to predict future states of ecosystems and climate, must be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure for measuring model performance against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references against which to test model performance, (3) metrics to measure and compare performance among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon, and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. One major challenge, therefore, is to select and define a limited number of benchmarks that effectively evaluate land model performance. A second challenge is to develop metrics for measuring mismatches between models and benchmarks. These metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system that combines data–model mismatches for various processes at different temporal and spatial scales. Benchmark analyses should identify the causes of weak model performance to guide future development, thereby enabling improved predictions of future states of ecosystems and climate.
Near-term research effort should focus on developing a set of widely accepted benchmarks that can objectively, effectively, and reliably evaluate fundamental properties of land models and thereby improve their predictive skill.
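The two metric components described in the abstract — an a priori threshold of acceptable mismatch per variable, and a scoring system that combines mismatches across processes — can be sketched as follows. This is a minimal illustration, not the paper's specification: the choice of mean-normalized RMSE as the mismatch measure, and all variable names, threshold values, and weights, are assumptions made for the example.

```python
import math

def nrmse(model, obs):
    """Root-mean-square error normalized by the observed mean (illustrative choice)."""
    mse = sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)
    mean_obs = sum(obs) / len(obs)
    return math.sqrt(mse) / abs(mean_obs)

def benchmark_score(results, thresholds, weights):
    """Combine per-variable data-model mismatches into one score in [0, 1].

    results:    {variable: (model_series, obs_series)}
    thresholds: {variable: a priori acceptable NRMSE}  -- hypothetical values
    weights:    {variable: relative importance}        -- hypothetical values

    A variable scores 1.0 when its NRMSE is 0 and 0.0 at or above its
    acceptability threshold; the weighted average gives the overall score.
    """
    total_w = sum(weights.values())
    score = 0.0
    for var, (model, obs) in results.items():
        mismatch = nrmse(model, obs)
        var_score = max(0.0, 1.0 - mismatch / thresholds[var])
        score += weights[var] * var_score / total_w
    return score

# Hypothetical toy data: three time steps of gross primary production
# and latent heat flux from a model run versus observations.
results = {
    "gpp": ([2.0, 3.1, 4.2], [2.2, 3.0, 4.0]),
    "le":  ([80.0, 95.0, 110.0], [85.0, 100.0, 105.0]),
}
thresholds = {"gpp": 0.3, "le": 0.3}  # assumed acceptable NRMSE per variable
weights = {"gpp": 0.5, "le": 0.5}     # assumed equal importance

print(round(benchmark_score(results, thresholds, weights), 3))
```

A real framework would replace the equal weights and single threshold with values negotiated by the community per process and per temporal/spatial scale, which is exactly the challenge the abstract identifies.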
Funder
European Commission
Publisher
Copernicus GmbH
Subjects:
Earth-Surface Processes; Ecology, Evolution, Behavior and Systematics
Cited by: 266 articles.