Leveraging Simulation Data to Understand Bias in Predictive Models of Infectious Disease Spread

Authors:

Andreas Züfle (1), Flora Salim (2), Taylor Anderson (3), Matthew Scotch (4), Li Xiong (5), Kacper Sokol (6), Hao Xue (7), Ruochen Kong (1), David Heslop (7), Hye-Young Paik (7), C. Raina MacIntyre (7)

Affiliations:

1. Computer Science, Emory University, Atlanta, United States

2. School of Computer Science and Engineering, University of New South Wales, Sydney, Australia

3. George Mason University, Fairfax, United States

4. Arizona State University, Tempe, United States

5. Emory University, Atlanta, United States

6. ETH Zurich, Zurich, Switzerland

7. University of New South Wales, Sydney, Australia

Abstract

The spread of infectious diseases is a highly complex spatiotemporal process that is difficult to understand, predict, and effectively respond to. Machine learning and artificial intelligence (AI) have achieved impressive results in other learning and prediction tasks; however, while many AI solutions are developed for disease prediction, only a few of them are adopted by decision-makers to support policy interventions. Among several issues preventing their uptake, AI methods are known to amplify the bias in the data they are trained on. This is especially problematic for infectious disease models, which typically leverage large, open, and inherently biased spatiotemporal data. These biases may propagate through the modeling pipeline to decision-making, resulting in inequitable policy interventions. Therefore, there is a need to understand how the AI disease modeling pipeline can mitigate bias in input data, in models during training, and in outputs. Specifically, our vision is to develop a large-scale micro-simulation of individuals from which human mobility, population, and disease ground-truth data can be obtained. From this complete dataset—which may not reflect the real world—we can sample and inject different types of bias. By using the sampled data in which the bias is known (as it is given by the simulation parameters), we can explore how existing solutions for fairness in AI can mitigate and correct these biases and investigate novel AI fairness solutions. Achieving this vision would result in improved trust in such models for informing fair and equitable policy interventions.
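The bias-injection idea described in the abstract—sampling from a complete simulated ground truth with a known, controllable bias—can be sketched as follows. This is a hypothetical minimal example, not the authors' implementation: the population attributes, infection rate, and region-dependent reporting rates are all assumptions chosen for illustration.

```python
import random

random.seed(0)

# Hypothetical complete "ground truth" population from a micro-simulation:
# every individual has a region and a true infection status.
population = [
    {"region": region, "infected": random.random() < 0.10}
    for region in ["urban"] * 5000 + ["rural"] * 5000
]

# Inject a known observation bias: infections in the rural region are
# observed (e.g., tested and reported) at a much lower rate. Because
# these rates are simulation parameters, the bias is known exactly.
report_rate = {"urban": 0.8, "rural": 0.3}

observed = [
    person for person in population
    if person["infected"] and random.random() < report_rate[person["region"]]
]

def true_rate(region):
    """True infection rate per region, from the complete simulated data."""
    group = [p for p in population if p["region"] == region]
    return sum(p["infected"] for p in group) / len(group)

def observed_rate(region):
    """Apparent infection rate per region, from the biased sample."""
    cases = sum(p["region"] == region for p in observed)
    size = sum(p["region"] == region for p in population)
    return cases / size

for region in ("urban", "rural"):
    print(region, round(true_rate(region), 3), round(observed_rate(region), 3))
```

Because the reporting rates are known simulation parameters, a fairness method can be evaluated directly: for instance, dividing each region's observed rate by its reporting rate should approximately recover the true rate, giving a known-answer benchmark that real-world surveillance data cannot provide.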

Funder

United States National Science Foundation

Australian Commonwealth Scientific and Industrial Research Organisation

Publisher

Association for Computing Machinery (ACM)

References: 125 articles.
