Authors:
Hossein Najafiaghdam, Rozhan Rabbani, Asmaysinh Gharia, Efthymios P. Papageorgiou, Mekhail Anwar
Abstract
Millimeter-scale multi-cellular level imagers enable various applications, ranging from intraoperative surgical navigation to implantable sensors. However, the tradeoffs required for miniaturization compromise resolution, making it challenging to extract 3D cell locations, which are critical for tumor margin assessment and therapy monitoring. This work presents three machine-learning-based modules that extract spatial information from single image acquisitions using custom-made millimeter-scale imagers. The neural networks were trained on cell images synthetically generated using Perlin noise. The first module is a convolutional neural network that estimates the depth of a single layer of cells; the second is a deblurring module that corrects for the point spread function (PSF). The final module extracts spatial information from a single image acquisition of a 3D specimen and reconstructs cross-sections by providing a layered "map" of cell locations. The maximum depth error of the first module is 100 µm, with 87% test accuracy. The second module's PSF correction achieves a least-square error of only 4%. The third module generates a binary "cell" or "no cell" label for each pixel, with an accuracy ranging from 89% to 85%. This work demonstrates the synergy between ultra-small silicon-based imagers, which enable in vivo imaging at the cost of spatial resolution, and the processing power of neural networks, achieving enhancements beyond conventional linear optimization techniques.
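The sketch below is an illustrative reconstruction of the pipeline the abstract describes, not the authors' code: it synthesizes "cell" images from thresholded Perlin noise, blurs them with a depth-dependent Gaussian as a stand-in for the imager's true PSF, and defines a small CNN that classifies the depth of a single cell layer. The image size, noise resolution, depth-to-blur mapping, and network layout are all assumptions made for illustration.

```python
"""Minimal sketch: Perlin-noise cell images + depth-classifying CNN (assumed parameters)."""
import numpy as np
from scipy.ndimage import gaussian_filter
import torch
import torch.nn as nn


def perlin2d(shape, res, rng):
    """Classic 2D Perlin (gradient) noise; `shape` must be divisible by `res`."""
    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    grid = np.mgrid[0:res[0]:delta[0], 0:res[1]:delta[1]].transpose(1, 2, 0) % 1
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    n00 = np.sum(np.dstack((grid[..., 0],     grid[..., 1]))     * g00, 2)
    n10 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1]))     * g10, 2)
    n01 = np.sum(np.dstack((grid[..., 0],     grid[..., 1] - 1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1] - 1)) * g11, 2)
    t = 6 * grid**5 - 15 * grid**4 + 10 * grid**3  # smoothstep interpolation
    n0 = n00 * (1 - t[..., 0]) + t[..., 0] * n10
    n1 = n01 * (1 - t[..., 0]) + t[..., 0] * n11
    return np.sqrt(2) * ((1 - t[..., 1]) * n0 + t[..., 1] * n1)


def synth_cell_layer(size, depth_class, rng):
    """One training sample: thresholded Perlin noise ("cells") blurred by a
    Gaussian PSF whose width grows with depth (hypothetical blur model)."""
    cells = (perlin2d((size, size), (8, 8), rng) > 0.25).astype(np.float32)
    sigma = 1.0 + 1.5 * depth_class  # assumed depth-to-blur mapping
    return gaussian_filter(cells, sigma=sigma), depth_class


class DepthCNN(nn.Module):
    """Tiny CNN mapping a blurred image to one of `n_depths` depth classes."""
    def __init__(self, n_depths=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_depths))

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs, labels = zip(*[synth_cell_layer(64, d % 4, rng) for d in range(32)])
    x = torch.tensor(np.stack(imgs)).unsqueeze(1)  # (N, 1, 64, 64)
    y = torch.tensor(labels)
    model = DepthCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):  # a few demonstration steps only
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    print("demo loss:", float(loss))
```

The deblurring and cross-section modules described in the abstract would follow the same pattern, with the network output being a corrected image or a per-pixel "cell"/"no cell" map rather than a depth class.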
Funder
National Institute of Dental and Craniofacial Research
National Institute of Biomedical Imaging and Bioengineering
Publisher
Springer Science and Business Media LLC
Cited by
1 article.