Affiliations:
1. Houston Methodist Cancer Center
2. Houston Methodist Hospital
3. Houston Methodist Academic Institute
Abstract
Translating images generated by label-free microscopy, such as Coherent Anti-Stokes Raman Scattering (CARS), into the more familiar presentation of clinical histopathology images will help the adoption of real-time, spectrally resolved label-free imaging in clinical diagnosis. Generative adversarial networks (GANs) have made great progress in image generation and translation, but they have been criticized for lacking precision; in particular, GANs often misinterpret image information and assign incorrect content categories when translating microscopy scans. To alleviate this problem, we developed a new Pix2pix GAN model that simultaneously learns to classify the contents of images from a segmentation dataset during image-translation training. Our model integrates UNet+ with seg-cGAN, a conditional generative adversarial network with partial regularization of segmentation. The technical innovations of the UNet+/seg-cGAN model are: (1) replacing UNet with UNet+ as the Pix2pix cGAN's generator to enhance pattern extraction and the richness of gradients, and (2) applying a partial regularization strategy that trains part of the generator network as a segmentation sub-model on a separate segmentation dataset, thus enabling the model to identify the correct content categories during image translation. The quality of histopathology-like images generated from label-free CARS images is thereby significantly improved.
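To illustrate the partial-regularization idea described in innovation (2), the following is a minimal PyTorch sketch, not the authors' published code: the encoder module, layer sizes, loss weights, and toy tensors are all hypothetical stand-ins. Part of the generator is shared with a segmentation head and trained on a separate segmentation dataset, alternating with the usual Pix2pix adversarial and L1 objectives.

```python
# Hypothetical sketch of partial regularization in a Pix2pix-style cGAN.
# The SharedEncoder stands in for the part of the UNet+ generator that is
# also trained as a segmentation sub-model; all details are illustrative.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):  # sub-network shared by G and the seg head
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):  # CARS image -> histopathology-like RGB image
    def __init__(self, enc):
        super().__init__()
        self.enc = enc
        self.dec = nn.Sequential(nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.dec(self.enc(x))

class SegHead(nn.Module):  # segmentation sub-model built on the shared encoder
    def __init__(self, enc, n_classes=4):
        super().__init__()
        self.enc = enc
        self.head = nn.Conv2d(64, n_classes, 1)
    def forward(self, x):
        return self.head(self.enc(x))

class PatchDiscriminator(nn.Module):  # scores (input, output) pairs locally
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

enc = SharedEncoder()
G, S, D = Generator(enc), SegHead(enc), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_s = torch.optim.Adam(S.parameters(), lr=2e-4)  # includes the shared encoder
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1, ce = nn.BCEWithLogitsLoss(), nn.L1Loss(), nn.CrossEntropyLoss()

# Toy stand-in batches; real training would use paired CARS/H&E images plus
# a separate labeled segmentation dataset.
cars, he = torch.randn(2, 1, 64, 64), torch.randn(2, 3, 64, 64)
seg_img, seg_mask = torch.randn(2, 1, 64, 64), torch.randint(0, 4, (2, 64, 64))

# (1) Standard Pix2pix translation step: adversarial loss + L1 reconstruction.
fake = G(cars)
pred = D(cars, fake)
loss_g = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, he)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

real_p, fake_p = D(cars, he), D(cars, fake.detach())
loss_d = bce(real_p, torch.ones_like(real_p)) + bce(fake_p, torch.zeros_like(fake_p))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# (2) Partial-regularization step: the shared encoder (part of G) is also
# trained as a segmentation model, anchoring it to correct content categories.
loss_s = ce(S(seg_img), seg_mask)
opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```

In this sketch the regularization is "partial" because only the shared encoder portion of the generator receives segmentation gradients; the decoder is trained solely by the translation objectives.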
Funder
John S. Dunn Foundation
Ting Tsung and Wei Fong Chao Family Foundation
Subject
Atomic and Molecular Physics, and Optics; Biotechnology
Cited by
12 articles.