Abstract
Machine learning (ML) has the potential to transform patient care and outcomes. However, there are important differences between measuring the performance of ML models in silico and their usefulness at the point of care. One lens for evaluating models during early development is actionability, which is currently undervalued. We propose a metric for actionability intended to be used before the evaluation of calibration and, ultimately, decision curve analysis and calculation of net benefit. Our metric should be viewed as part of an overarching effort to increase the number of pragmatic tools that identify a model's possible clinical impacts.
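The abstract names decision curve analysis and net benefit as downstream evaluation steps that follow the proposed actionability assessment, but does not define them. For orientation only, the sketch below computes the standard net benefit quantity at a single risk threshold (Vickers and Elkin's formulation: NB = TP/N − FP/N × p_t/(1 − p_t)); the actionability metric proposed in the paper is not shown, and the data and variable names are illustrative assumptions.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit at a given risk threshold, as used in decision curve analysis:
    NB = TP/N - FP/N * (threshold / (1 - threshold))."""
    y_true = np.asarray(y_true)
    preds = np.asarray(y_prob) >= threshold      # treat patients at or above the threshold
    n = len(y_true)
    tp = np.sum(preds & (y_true == 1))           # true positives: treated and had the outcome
    fp = np.sum(preds & (y_true == 0))           # false positives: treated unnecessarily
    return tp / n - fp / n * (threshold / (1 - threshold))

# Hypothetical usage: sweep a few thresholds to trace points on a decision curve
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=200)                              # simulated outcomes
    p = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 200), 0.0, 1.0)    # simulated model risks
    for t in (0.1, 0.2, 0.3):
        print(f"threshold {t:.1f}: net benefit {net_benefit(y, p, t):.3f}")
```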
Funder
Center for Research on Computation and Society (CRCS) at the Harvard John A. Paulson School of Engineering and Applied Sciences
William G. Williams Directorship at the Hospital for Sick Children
Publisher
Springer Science and Business Media LLC
Subject
Health Information Management, Health Informatics, Computer Science Applications, Medicine (miscellaneous)
Cited by
18 articles.