Authors:
Peter Brößner, Benjamin Hohlmann, Klaus Radermacher
Abstract
The automated and robust segmentation of bone surfaces in ultrasound (US) images can open up new fields of application for US imaging in computer-assisted orthopedic surgery, e.g. the patient-specific planning process in computer-assisted knee replacement. For the automated, deep-learning-based segmentation of medical images, CNN-based methods have been the state of the art in recent years, while Transformer-based methods are on the rise in computer vision. To compare these approaches for US image segmentation, in this paper the recent Transformer-based Swin-UNet is benchmarked against the commonly used CNN-based nnUNet on the task of in-vivo 2D US knee segmentation. Trained and tested on our own dataset of 8166 annotated images (split into 7155 and 1011 images, respectively), both the nnUNet and the pre-trained Swin-UNet achieve a Dice coefficient of 0.78 during testing. For distances between skeletonized labels and predictions, the nnUNet yields a symmetric Hausdorff distance of 44.69 pixels and a symmetric surface distance of 5.77 pixels, compared to 42.78 pixels and 5.68 pixels, respectively, for the Swin-UNet. Based on qualitative assessment, the Transformer-based Swin-UNet appears to benefit from its capability of learning global relationships, while the CNN-based nnUNet shows more consistent and smooth predictions on a local level, presumably due to the nature of the convolution operation. Moreover, the Swin-UNet requires generalized pre-training to be competitive. Since both architectures are equally suited to the task at hand, for our future work, hybrid architectures combining the characteristic advantages of Transformer-based and CNN-based methods seem promising for US image segmentation.
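The evaluation metrics named in the abstract (Dice coefficient, symmetric Hausdorff distance, and symmetric surface distance between skeletonized labels and predictions) can be sketched as follows with NumPy and SciPy. This is only an illustrative sketch: the paper's exact implementation details (skeletonization method, handling of empty masks, pixel spacing) are not given in the abstract, so the functions below assume binary masks that have already been skeletonized and measure distances in pixels.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred, label):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, label).sum()
    return 2.0 * inter / (pred.sum() + label.sum())

def surface_distances(a, b):
    """Distance from each foreground pixel of a to the nearest foreground pixel of b."""
    # distance_transform_edt measures distance to the nearest zero,
    # so invert b to get distance to b's foreground.
    dt = distance_transform_edt(~b)
    return dt[a]

def symmetric_hausdorff(a, b):
    """Maximum of the two directed Hausdorff distances (in pixels)."""
    return max(surface_distances(a, b).max(), surface_distances(b, a).max())

def symmetric_surface_distance(a, b):
    """Average of all pixel-to-surface distances in both directions."""
    d_ab = surface_distances(a, b)
    d_ba = surface_distances(b, a)
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)

# Toy example: two 3-pixel line segments shifted by one pixel.
pred = np.zeros((5, 5), dtype=bool)
label = np.zeros((5, 5), dtype=bool)
pred[2, 1:4] = True
label[2, 2:5] = True
print(dice(pred, label))                       # 2 overlapping of 3+3 pixels -> 0.666...
print(symmetric_hausdorff(pred, label))        # worst mismatch is one pixel -> 1.0
print(symmetric_surface_distance(pred, label)) # (1+0+0+0+0+1)/6 -> 0.333...
```

The skeleton-based variant reported in the paper would apply these distance functions after thinning both masks to one-pixel-wide curves (e.g. with `skimage.morphology.skeletonize`), which avoids penalizing differences in predicted bone-surface thickness.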
Cited by
2 articles.