Author:
Yen Sheng-Yang, Huang Hao-En, Lien Gi-Shih, Liu Chih-Wen, Chu Chia-Feng, Huang Wei-Ming, Suk Fat-Moon
Abstract
We developed a magnetic-assisted capsule colonoscope system that integrates computer vision-based object detection with an alignment control scheme. Two convolutional neural network models, A and B, for lumen identification were trained on an endoscopic dataset of 9080 images. For the lumen alignment experiment, models C and D were trained on a simulated dataset of 8414 images. The models were evaluated with recall (R), precision (P), mean average precision (mAP), and F1 score, and predictive performance was assessed by the area under the P-R curve. Adjustments of pitch and yaw angles and alignment control time were analyzed in the alignment experiment. Model D had the best predictive performance: its R, P, mAP, and F1 score were 0.964, 0.961, 0.961, and 0.963, respectively, at an intersection-over-union (area of overlap/area of union) threshold of 0.3. In the lumen alignment experiment, the mean adjustments of yaw and pitch over 160 trials were 21.70° and 13.78°, respectively, and the mean alignment control time was 0.902 s. Finally, we compared cecal intubation time between semi-automated and manual navigation in 20 trials; the average cecal intubation times for manual and semi-automated navigation were 9 min 28.41 s and 7 min 23.61 s, respectively. The automatic lumen detection model, trained with a deep learning algorithm, demonstrated high performance on every validation index.
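To make the evaluation criteria concrete, the following is a minimal sketch (not the authors' code) of how the detection metrics named in the abstract can be computed: intersection over union (area of overlap/area of union), precision, recall, and F1 score at the 0.3 threshold. The box format, function names, and matching rule are illustrative assumptions.

```python
def iou(box_a, box_b):
    """Area of overlap / area of union for two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def precision_recall_f1(predictions, ground_truths, iou_threshold=0.3):
    """Count a predicted box as a true positive if it overlaps an unmatched
    ground-truth lumen box with IoU >= iou_threshold (0.3 in the abstract)."""
    matched = set()
    tp = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    fp = len(predictions) - tp   # predictions with no matching lumen annotation
    fn = len(ground_truths) - tp  # annotated lumens the model missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Example: one predicted lumen box scored against one annotated box.
print(precision_recall_f1([(10, 10, 50, 50)], [(12, 14, 48, 52)]))
```

Averaging precision over recall levels per image set would then yield the mAP figure reported for model D; the sketch above only covers the per-image counting step.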
Funder
Ministry of Health and Welfare, Taiwan, Republic of China.
Publisher
Springer Science and Business Media LLC
Cited by
13 articles.