Abstract
Modern medical imaging methods allow for both qualitative and quantitative evaluation of tumors and the tissues surrounding them. Advances in computer science and big data processing, particularly the use of machine learning in medical image analysis, make it possible to transform any radiological study into an analyzable dataset. Within these datasets, statistically significant correlations with clinical events can be sought and then assessed for their ability to predict a particular clinical outcome. This concept, known as radiomics, was first described in 2012. It is particularly important in oncology, where each type of tumor can be subdivided into many molecular genetic subtypes and simple visual characteristics are no longer sufficient. Moreover, as an entirely noninvasive method, radiomics can provide the radiologist with information that would otherwise be unavailable without histological examination of biopsy material. However, as with any methodology based on big data, the quality of the input data becomes critical, because it directly affects the outcome of the analysis and can lead to incorrect diagnostic information.
In this literature review, we examine potential approaches to ensuring quality at all stages of a radiomics study, from technical quality control of the diagnostic equipment to the extraction of imaging markers in oncology and the calculation of their correlation with clinical data.
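As a purely illustrative sketch (not taken from the reviewed article), the final step of the pipeline, testing an extracted imaging feature for a statistically significant association with a clinical event, could look as follows. All data, feature names, and group sizes here are synthetic and hypothetical.

```python
"""Illustrative sketch: testing one radiomic feature against a binary
clinical outcome. All values are synthetic; names are hypothetical."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical radiomic feature (e.g., tumor texture entropy) for 60 patients
feature = np.concatenate([rng.normal(4.0, 0.5, 30),   # outcome-negative group
                          rng.normal(4.6, 0.5, 30)])  # outcome-positive group
# Binary clinical outcome (e.g., molecular subtype absent / present)
outcome = np.concatenate([np.zeros(30), np.ones(30)])

# Rank-based correlation between the feature and the clinical event
rho, p_value = stats.spearmanr(feature, outcome)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")

# A significant correlation is only a screening step: predictive value
# must still be confirmed on independent data (e.g., cross-validation).
```

The quality-control concerns discussed in the review apply upstream of this step: if acquisition protocols or feature extraction are inconsistent, such correlations can be spurious regardless of their statistical significance.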