Abstract
Given the rapid reductions in human mortality observed over recent decades and the uncertainty associated with their future evolution, actuaries and demographers have proposed a large number of mortality projection models in recent years. Many of these, however, are overly complex and therefore produce spurious forecasts, particularly over long horizons and for small, noisy data sets. In this paper, we exploit statistical learning tools, namely group regularisation and cross-validation, to provide a robust framework for constructing discrete-time mortality models that automatically selects the most appropriate functions to describe and forecast a particular data set. Most importantly, this approach produces bespoke models via a trade-off between complexity (to draw as much insight as possible from limited data sets) and parsimony (to prevent over-fitting to noise), with the trade-off calibrated to the forecasting horizon of interest. The framework is illustrated on both empirical data from the Human Mortality Database and simulated data, using code made available within the user-friendly open-source R package StMoMo.
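As a point of orientation, the sketch below shows the basic StMoMo workflow of fitting and forecasting a standard stochastic mortality model; it uses only the publicly documented StMoMo API and its bundled England and Wales male data set (EWMaleData), and does not reproduce the paper's group-regularisation and cross-validation selection step, which the authors' released code builds on top of this workflow.

```r
# Minimal sketch: fit and forecast a Lee-Carter model with StMoMo.
# Assumes the CRAN release of StMoMo; the regularised model-selection
# procedure described in the paper is not shown here.
library(StMoMo)

# Lee-Carter model under a log link (Poisson errors for central rates)
LC <- lc(link = "log")

# Fit to ages 55-89 of the package's example data set
LCfit <- fit(LC, data = EWMaleData, ages.fit = 55:89)

# Forecast mortality rates 50 years ahead
LCfor <- forecast(LCfit, h = 50)

# Inspect fitted parameters and forecast fan charts
plot(LCfit)
plot(LCfor)
```

In the paper's framework, the set of age-period-cohort terms making up the predictor would not be fixed a priori as in this Lee-Carter example; instead, candidate terms are selected automatically by group regularisation, with the penalty tuned by cross-validation over the forecasting horizon of interest.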
Publisher
Cambridge University Press (CUP)
Subject
Economics and Econometrics, Finance, Accounting
Cited by
6 articles.