Abstract
Advances in spatial transcriptomics (ST) technologies have provided unprecedented opportunities to depict transcriptomic and histological landscapes in the spatial context. Multi-modal ST data provide abundant and comprehensive information about cellular status, function, and organization. However, existing algorithms struggle to effectively fuse the multi-modal information contained within ST data. Here, we propose a graph contrastive learning-based cross-modality fusion model named stGCL that accurately and robustly integrates gene expression, spatial information, and histological profiles simultaneously. stGCL adopts a novel histology-based Vision Transformer (H-ViT) method to effectively encode histological features and combines a multi-modal graph attention auto-encoder (GATE) with contrastive learning to fuse cross-modality features. In addition, stGCL introduces a pioneering spatial coordinate correction and registration strategy for tissue slice integration, which can reduce batch effects and identify cross-sectional domains precisely. Compared with state-of-the-art methods on spatial transcriptomics data across platforms and resolutions, stGCL achieves superior clustering performance and is more robust in unraveling spatial patterns of biological significance. Additionally, stGCL successfully reconstructed three-dimensional (3D) brain tissue structures by integrating vertical and horizontal slices, respectively. Application of stGCL to human bronchiolar adenoma (BA) data reveals intratumor spatial heterogeneity and identifies candidate gene biomarkers. In summary, stGCL enables the fusion of various spatial modality data and is a powerful tool for analytical tasks such as spatial domain identification and multi-slice integration.
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.