Abstract
The advent of biobanks with vast quantities of medical imaging and paired genetic measurements creates huge opportunities for a new generation of genotype-phenotype association studies. However, disentangling biological signals from the many sources of bias and artifacts remains difficult. Using diverse types of medical imaging (i.e., MRIs, ECGs, and DXAs), we develop registered and cross-modal generative models. In all cases, we show how registration, both spatial and temporal, guided by domain knowledge or learned de novo, uncovers rich biological information. Remarkably, our findings demonstrate that even extremely lossy transformations, such as registering images onto a single 1D curve (e.g., a circle), can yield robust signals. Conversely, we demonstrate that increasing data dimensionality by integrating multiple modalities can also result in richer representations. Through genome- and phenome-wide association studies (GWAS and PheWAS) of learned embeddings, we uncover significantly more associations with registered and fused modalities than with equivalently trained and sized representations learned from native coordinate spaces. Our findings systematically reveal the crucial role registration plays in enhancing the characterization of physiological states across a broad range of medical imaging data types.
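The abstract does not specify how images are registered onto a 1D curve; as a purely illustrative sketch, the snippet below shows one naive way such a lossy transformation could work: resampling a 2D image's intensities along a fixed circular contour to obtain a 1D profile. The function name, center, radius, and sampling density are all assumptions for illustration, not the authors' method (which is domain-guided or learned de novo).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_onto_circle(image, center=None, radius=None, n_points=256):
    """Resample a 2D image along a circle, yielding a 1D intensity profile.

    Toy stand-in for 'registering images onto a single 1D curve'; a fixed
    circle is used here, whereas the paper's registration is learned or
    guided by domain knowledge.
    """
    h, w = image.shape
    if center is None:
        center = (h / 2.0, w / 2.0)          # assume the structure is centered
    if radius is None:
        radius = 0.4 * min(h, w)             # arbitrary illustrative radius
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    rows = center[0] + radius * np.sin(theta)
    cols = center[1] + radius * np.cos(theta)
    # Bilinear interpolation of intensities at the circle's (row, col) points.
    return map_coordinates(image, np.vstack([rows, cols]), order=1)

# Usage with a synthetic image standing in for a real MRI slice:
img = np.random.rand(128, 128).astype(np.float32)
profile = sample_onto_circle(img)  # shape: (256,), a 1D representation
```

Downstream, such 1D profiles (or embeddings learned from them) would be the phenotypes tested in the GWAS and PheWAS described above.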
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.