Affiliation:
1. Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge CB2 0RE, UK
2. Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, UK
Abstract
Mesoscopic photoacoustic imaging (PAI) enables label-free visualization of vascular networks in tissue with high contrast and resolution. Segmenting these networks from 3D PAI data and interpreting their physiological and pathological significance is crucial yet challenging, because current methods are time-consuming and error-prone. Deep learning offers a potential solution; however, supervised analysis frameworks typically require human-annotated ground-truth labels. To address this, an unsupervised image-to-image translation deep learning model is introduced: the Vessel Segmentation Generative Adversarial Network (VAN-GAN). VAN-GAN integrates synthetic blood vessel networks that closely resemble real anatomy into its training process and replicates the underlying physics of the PAI system in order to learn to segment vasculature from 3D photoacoustic images. Applied to a diverse range of in silico, in vitro, and in vivo data, including patient-derived breast cancer xenograft models and 3D clinical angiograms, VAN-GAN demonstrates accurate and unbiased segmentation of 3D vascular networks. By leveraging synthetic data, VAN-GAN reduces reliance on manual labeling, lowering the barrier to entry for high-quality blood vessel segmentation (F1 score: VAN-GAN vs. U-Net = 0.84 vs. 0.87) and enhancing preclinical and clinical research into vascular structure and function.
Funder
Cancer Research UK
Wellcome Trust
Cambridge Trust
National Physical Laboratory
Engineering and Physical Sciences Research Council