Author:
Junbong Jang, Chuangqi Wang, Xitong Zhang, Hee June Choi, Xiang Pan, Bolun Lin, Yudong Yu, Carly Whittle, Madison Ryan, Yenyu Chen, Kwonmoo Lee
Abstract
Quantitative studies of cellular morphodynamics rely on extracting leading-edge velocity time series based on accurate cell segmentation from live cell imaging. However, live cell imaging presents numerous challenges for accurate edge localization. Here, we develop a deep learning-based pipeline, termed MARS-Net (Multiple-microscopy-type-based Accurate and Robust Segmentation Network), that utilizes transfer learning and datasets from multiple types of microscopy to localize cell edges with high accuracy, allowing quantitative profiling of cellular morphodynamics. For effective training with datasets from multiple types of live cell microscopy, we integrated the pretrained VGG-19 encoder with a U-Net decoder and added dropout layers. Using this structure, we were able to train a single neural network model that can accurately segment various live cell movies from phase contrast, spinning disk confocal, and total internal reflection fluorescence microscopes. Intriguingly, MARS-Net produced more accurate edge localization than neural network models trained with single-microscopy-type datasets, whereas the standard U-Net could not increase the overall accuracy. We expect that MARS-Net can accelerate studies of cellular morphodynamics by providing accurate segmentation of challenging live cell images.
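The encoder-decoder design described above can be sketched in PyTorch. This is a minimal illustrative sketch, not the authors' released implementation: MARS-Net uses ImageNet-pretrained VGG-19 encoder weights, whereas this sketch builds VGG-style double-conv blocks from scratch, and the channel widths, dropout placement, and class name `VGGUNet` are assumptions for illustration.

```python
import torch
import torch.nn as nn


class VGGBlock(nn.Module):
    """Two 3x3 conv + ReLU layers, in the style of a VGG stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class VGGUNet(nn.Module):
    """VGG-style encoder + U-Net decoder with dropout, per the abstract's description."""
    def __init__(self, in_ch=1, n_classes=1, p_drop=0.5):
        super().__init__()
        chs = [64, 128, 256, 512]          # assumed stage widths
        self.encoders = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.encoders.append(VGGBlock(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = VGGBlock(chs[-1], chs[-1] * 2)
        self.dropout = nn.Dropout2d(p_drop)  # dropout added for multi-microscopy training
        self.ups = nn.ModuleList()
        self.decoders = nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.decoders.append(VGGBlock(c * 2, c))  # c from upsampling + c from skip
            prev = c
        self.head = nn.Conv2d(prev, n_classes, 1)    # per-pixel cell/background score

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)          # save feature map for the U-Net skip connection
            x = self.pool(x)
        x = self.dropout(self.bottleneck(x))
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = up(x)                                     # upsample decoder features
            x = dec(torch.cat([x, self.dropout(skip)], dim=1))
        return torch.sigmoid(self.head(x))               # probability of "cell" per pixel


model = VGGUNet()
out = model(torch.randn(1, 1, 64, 64))   # one single-channel 64x64 live-cell crop
print(tuple(out.shape))                  # (1, 1, 64, 64): per-pixel segmentation map
```

The segmentation output has the same spatial size as the input, so the cell boundary can be read off directly as the probability-map contour; transfer learning would correspond to initializing the encoder stages from pretrained VGG-19 weights and fine-tuning on the microscopy datasets.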
Publisher
Cold Spring Harbor Laboratory
Cited by
5 articles.