Abstract
Living in the era of Big Data, one may argue that generating additional synthetic data is redundant. To judge whether that argument holds, however, one must focus on the meaning and quality of data rather than on their quantity. In some domains, such as the biomedical and translational sciences, data privacy still takes precedence over data sharing, which by default limits access to valuable research data. Intensive discussion, agreements, and conventions among the various players in medical research, together with effective techniques and regulations for data anonymization, have already gone a long way toward simplifying data sharing. However, data on rare diseases or on the outcomes of novel treatments can still be obtained only through costly and risky clinical trials, and their availability would therefore greatly benefit from smart data generation. Clinical trials and animal testing initiate a cyclic procedure that may involve multiple redesigns and retests; it typically takes two to three years for medical devices and up to eight years for novel medicines, and costs between 10 and 20 million euros. The US Food and Drug Administration (FDA) acknowledges that, for many novel devices, practical limitations preclude large, randomized studies and require alternative approaches, such as computer modeling and engineering tests. In this article, we give an overview of global initiatives advocating for computer simulations in support of the 3R principles (Replacement, Reduction, and Refinement) in humane experimentation. We also present several research works that have developed methodologies for the smart and comprehensive generation of synthetic biomedical data, such as virtual patient cohorts, in support of In Silico Clinical Trials (ISCT), and discuss their common ground.
Subject
Artificial Intelligence, Information Systems, Computer Science (miscellaneous)
Cited by
1 article.