Abstract
Automated amygdala segmentation is one of the most common tasks in human neuroscience research. However, because the human amygdala is small, especially in developing brains, the precision and consistency of segmentation results are often degraded by individual differences and inconsistent data distributions. To address these challenges, we propose a boundary contrastive learning algorithm, trained on 427 manually traced amygdalae from children and adolescents, that produces a transformer, AmygdalaGo-BOLT3D, for automatic segmentation of the human amygdala. By focusing on the boundary, the method counters the false positives and inaccurate edges caused by the amygdala's small volume. First, AmygdalaGo-BOLT3D develops a basic architecture for a multi-granularity adaptive cooperation network. Second, it builds a self-attention-based consistency module to address generalizability problems arising from individual differences and inconsistent data distributions. Third, it adapts the original sample-mask model to the amygdala setting, using three components, namely a lightweight volumetric feature encoder, a 3D cue encoder, and a volume mask decoder, to improve the model's generalized segmentation. Finally, it implements a boundary contrastive learning framework that uses an interaction mechanism between a prior cue and the embedded magnetic resonance images to integrate the two effectively. Experimental results demonstrate markedly improved precision for both the overall structure and the boundaries of the human amygdala, with stable performance across multiple age groups and imaging centers, verifying the stability and generalizability of the algorithm across tasks. AmygdalaGo-BOLT3D has been released to the community (GITHUB LINK) to provide an open science foundation for applications in population neuroscience.
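For readers who want a concrete picture of the three-part design (lightweight volumetric feature encoder, 3D cue encoder, volume mask decoder) and the boundary-focused objective sketched in the abstract, the following is a minimal illustrative PyTorch sketch. It is not the released AmygdalaGo-BOLT3D implementation (see the repository referenced above): all class names, dimensions, and the toy hinge-style boundary term are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VolumetricFeatureEncoder(nn.Module):
    """Lightweight 3D CNN embedding an MRI patch into a coarse feature grid."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, dim // 2, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv3d(dim // 2, dim, 3, stride=2, padding=1), nn.GELU(),
        )

    def forward(self, x):
        return self.net(x)


class CueEncoder3D(nn.Module):
    """Embeds a volumetric prior cue (e.g. a rough location map) onto the same grid."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, dim // 2, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv3d(dim // 2, dim, 3, stride=2, padding=1), nn.GELU(),
        )

    def forward(self, cue):
        return self.net(cue)


class VolumeMaskDecoder(nn.Module):
    """Cross-attends image features to cue features, then decodes voxel-wise logits."""
    def __init__(self, dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Sequential(
            nn.Conv3d(dim, dim // 2, 3, padding=1), nn.GELU(),
            nn.Conv3d(dim // 2, 1, 1),
        )

    def forward(self, img_feat, cue_feat):
        b, c, d, h, w = img_feat.shape
        q = img_feat.flatten(2).transpose(1, 2)   # image tokens as queries
        kv = cue_feat.flatten(2).transpose(1, 2)  # cue tokens as keys/values
        fused, _ = self.attn(q, kv, kv)           # cue-image interaction mechanism
        fused = fused.transpose(1, 2).reshape(b, c, d, h, w)
        logits = self.head(fused)
        # Two stride-2 convs in the encoders => upsample by a factor of 4.
        return F.interpolate(logits, scale_factor=4, mode="trilinear",
                             align_corners=False)


def boundary_contrastive_loss(logits, target, margin=1.0):
    """Toy boundary-focused hinge term, applied only in a band around the label edge.

    The band is where a 3x3x3 dilation and erosion of the label disagree; inside it,
    foreground logits are pushed above +margin and background logits below -margin.
    """
    dilated = F.max_pool3d(target, 3, stride=1, padding=1)
    eroded = 1.0 - F.max_pool3d(1.0 - target, 3, stride=1, padding=1)
    band = (dilated != eroded).float()
    signed = torch.where(target > 0.5, margin - logits, logits + margin)
    return (F.relu(signed) * band).sum() / band.sum().clamp(min=1.0)


if __name__ == "__main__":
    mri = torch.randn(1, 1, 32, 32, 32)                  # toy T1 patch
    cue = torch.rand(1, 1, 32, 32, 32)                   # toy prior cue volume
    label = (torch.rand(1, 1, 32, 32, 32) > 0.9).float() # toy manual trace
    logits = VolumeMaskDecoder()(VolumetricFeatureEncoder()(mri), CueEncoder3D()(cue))
    print(logits.shape, boundary_contrastive_loss(logits, label).item())
```

The cross-attention from image tokens to cue tokens mirrors the prior-cue interaction the abstract describes, and the loss acts only in a narrow band around the label boundary, the region where the abstract says false positives and edge errors concentrate.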