Affiliation:
1. University of Washington Department of Biostatistics, Hans Rosling Center for Population Health, Box 351617, Seattle, WA 98195, USA
Abstract
In many biomedical applications, the outcome is measured as a “time-to-event” (e.g., disease progression or death). To assess the association between patient features and this outcome, it is common to assume a proportional hazards model and fit a proportional hazards (or Cox) regression. This model is fit by maximizing a log-concave objective function known as the “partial likelihood.” For moderate-sized data sets, an efficient Newton–Raphson algorithm that leverages the structure of the objective function can be employed. In large data sets, however, this approach has two issues: (i) the computational tricks that leverage structure can also lead to computational instability; and (ii) the objective function does not naturally decouple, so if the data set does not fit in memory, the model can be computationally expensive to fit. This also means that the objective is not directly amenable to stochastic gradient-based optimization methods. To overcome these issues, we propose a simple, new framing of proportional hazards regression that results in an objective function amenable to stochastic gradient descent. We show that this simple modification allows us to efficiently fit survival models to very large data sets, and that it facilitates training complex models, for example neural networks, on survival data.
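The abstract does not spell out the decoupled objective, but one plausible instantiation, suggested by the reference to incomplete U-statistics below, is to average the Cox partial likelihood over small random groups of subjects and take stochastic gradient steps on those group-level gradients, so that each update touches only a handful of records. The sketch below is an illustrative assumption, not the paper's code; all names, the group size `s`, and the learning-rate schedule are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated survival data under a proportional hazards model ---
n, p = 2000, 3
beta_true = np.array([1.0, -0.5, 0.25])
X = rng.normal(size=(n, p))
T = rng.exponential(scale=1.0 / np.exp(X @ beta_true))  # event times
C = rng.exponential(scale=2.0, size=n)                  # censoring times
time = np.minimum(T, C)
event = (T <= C).astype(float)                          # 1 = event observed

def group_grad(beta, Xg, tg, dg):
    """Gradient of the negative Cox log partial likelihood computed
    within one small random group of subjects (Breslow-style, no ties)."""
    order = np.argsort(tg)                 # sort group by observed time
    Xg, dg = Xg[order], dg[order]
    w = np.exp(Xg @ beta)
    # Risk-set sums: after sorting, the risk set for subject i is {j >= i},
    # so reversed cumulative sums give sum_{j >= i} w_j and w_j * x_j.
    rs_w = np.cumsum(w[::-1])[::-1]
    rs_xw = np.cumsum((Xg * w[:, None])[::-1], axis=0)[::-1]
    g = -(dg[:, None] * (Xg - rs_xw / rs_w[:, None])).sum(axis=0)
    return g / max(dg.sum(), 1.0)          # average over events in the group

# --- SGD over random groups: each step touches only s subjects ---
s = 20
beta = np.zeros(p)
for step in range(4000):
    idx = rng.choice(n, size=s, replace=False)
    lr = 0.5 / (1.0 + step / 500.0)        # decaying step size
    beta -= lr * group_grad(beta, X[idx], time[idx], event[idx])

print(np.round(beta, 2))  # should land near beta_true
```

Because each gradient uses only `s` subjects, the data never needs to fit in memory at once, and the same per-group gradient could back-propagate through a neural network producing the linear predictor in place of `X @ beta`.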
Funder
National Institutes of Health
Publisher
Oxford University Press (OUP)
Subject
Statistics, Probability and Uncertainty; General Medicine; Statistics and Probability
References (48 articles):
1. Austin (2012). Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable. BMC Medical Research Methodology.
2. Bender (2005). Generating survival times to simulate Cox proportional hazards models. Statistics in Medicine.
3. Blom (1976). Some properties of incomplete U-statistics. Biometrika.
4. Bottou (2010). Large-scale machine learning with stochastic gradient descent. Proceedings of COMPSTAT’2010.
5. Boyd (2010). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning.
Cited by: 2 articles.