MLcps: machine learning cumulative performance score for classification problems

Author:

Akshay Akshay (1,2); Masoud Abedi (3); Navid Shekarchizadeh (3,4); Fiona C. Burkhard (1,5); Mitali Katoch (6); Alex Bigger-Allen (7,8,9,10); Rosalyn M. Adam (8,9,10); Katia Monastyrskaya (1,5); Ali Hashemi Gheinani (1,5,8,9,10)

Affiliation:

1. Functional Urology Research Group, Department for BioMedical Research (DBMR), University of Bern, 3008 Bern, Switzerland

2. Graduate School for Cellular and Biomedical Sciences, University of Bern, 3012 Bern, Switzerland

3. Department of Medical Data Science, Leipzig University Medical Centre, 04107 Leipzig, Germany

4. Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, 04105 Leipzig, Germany

5. Department of Urology, Inselspital University Hospital, 3010 Bern, Switzerland

6. Institute of Neuropathology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91054 Erlangen, Germany

7. Biological & Biomedical Sciences Program, Division of Medical Sciences, Harvard Medical School, 02115 Boston, MA, USA

8. Urological Diseases Research Center, Boston Children's Hospital, 02115 Boston, MA, USA

9. Department of Surgery, Harvard Medical School, 02115 Boston, MA, USA

10. Broad Institute of MIT and Harvard, 02142 Cambridge, MA, USA

Abstract

Background: Assessing the performance of machine learning (ML) models requires careful consideration of the evaluation metrics used. It is often necessary to use multiple metrics to gain a comprehensive understanding of a trained model’s performance, as each metric focuses on a specific aspect. However, comparing the scores of these individual metrics across models to determine the best-performing model can be time-consuming and susceptible to subjective user preferences, potentially introducing bias.

Results: We propose the Machine Learning Cumulative Performance Score (MLcps), a novel evaluation metric for classification problems. MLcps integrates several precomputed evaluation metrics into a unified score, enabling a comprehensive assessment of the trained model’s strengths and weaknesses. We tested MLcps on 4 publicly available datasets, and the results demonstrate that MLcps provides a holistic evaluation of the model’s robustness, ensuring a thorough understanding of its overall performance.

Conclusions: By using MLcps, researchers and practitioners no longer need to individually examine and compare multiple metrics to identify the best-performing models. Instead, they can rely on a single MLcps value to assess the overall performance of their ML models. This streamlined evaluation process saves valuable time and effort, enhancing the efficiency of model evaluation. MLcps is available as a Python package at https://pypi.org/project/MLcps/.
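
To make the aggregation idea concrete, below is a minimal, hypothetical sketch of folding several precomputed classification metrics into one cumulative score. It is not the MLcps package's actual implementation or API (see https://pypi.org/project/MLcps/ for that); the aggregation rule used here (treating the metrics as spokes of a radar plot and normalizing the enclosed polygon area) and the particular metrics chosen are illustrative assumptions.

```python
# Hypothetical sketch, NOT the MLcps API: combine several precomputed
# classification metrics into a single cumulative score.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, balanced_accuracy_score, f1_score,
                             matthews_corrcoef, precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def cumulative_score(values):
    """Place the metric values as spokes of a radar plot and return the polygon
    area, normalized by the area obtained when every metric equals 1.

    Assumes all values lie in [0, 1]; metrics such as MCC, which can be
    negative, would need rescaling in a real application.
    """
    v = np.asarray(values, dtype=float)
    n = len(v)
    wedge = 0.5 * np.sin(2 * np.pi / n)  # area factor for each adjacent pair of spokes
    return (wedge * np.sum(v * np.roll(v, -1))) / (wedge * n)


# Train a simple classifier and collect several precomputed metrics.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
y_pred = model.fit(X_train, y_train).predict(X_test)

metrics = {
    "accuracy": accuracy_score(y_test, y_pred),
    "balanced accuracy": balanced_accuracy_score(y_test, y_pred),
    "precision": precision_score(y_test, y_pred),
    "recall": recall_score(y_test, y_pred),
    "F1": f1_score(y_test, y_pred),
    "MCC": matthews_corrcoef(y_test, y_pred),
}
print({name: round(score, 3) for name, score in metrics.items()})
print("cumulative score:", round(cumulative_score(list(metrics.values())), 3))
```

A score computed this way rewards models that perform consistently well across all chosen metrics rather than excelling on one while failing on another, which is the kind of single-number comparison the abstract describes.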

Funder

National Science Foundation

Publisher

Oxford University Press (OUP)

Subject

Computer Science Applications, Health Informatics

京公网安备11010802033243号  京ICP备18003416号-3