UNesT: Local spatial representation learning with hierarchical transformer for efficient medical segmentation.
Publication date: August 25, 2023
Authors:
Xin Yu, Qi Yang, Yinchi Zhou, Leon Y Cai, Riqiang Gao, Ho Hin Lee, Thomas Li, Shunxing Bao, Zhoubing Xu, Thomas A Lasko, Richard G Abramson, Zizhao Zhang, Yuankai Huo, Bennett A Landman, Yucheng Tang
Source:
MEDICAL IMAGE ANALYSIS
Abstract:
Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and losing it can lead to sub-optimal performance when dealing with large numbers of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are neither robust nor efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address these challenges, and inspired by the nested hierarchical structure of vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets spanning multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model completes the whole-brain segmentation task covering 133 tissue classes within a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively. Code, pre-trained models, and the use-case pipeline are available at: https://github.com/MASILab/UNesT. Copyright © 2023 Elsevier B.V. All rights reserved.
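The key architectural idea described in the abstract is that self-attention is restricted to blocks of spatially adjacent patch sequences, and adjacent blocks are then aggregated hierarchically so the receptive field grows with depth while local positional structure is preserved. The following is a minimal PyTorch sketch of that blocked-attention-plus-aggregation pattern, not the authors' implementation; all names, helper functions, and hyper-parameters (BlockSelfAttention, blockify, the Conv3d downsampling step, the block sizes) are illustrative assumptions. The actual code is in the linked repository.

```python
# Minimal sketch (not the authors' code) of the hierarchical idea in the abstract:
# self-attention runs only within blocks of spatially adjacent patch tokens, and
# adjacent blocks are then aggregated so the next level covers a larger region.
# All names and hyper-parameters are illustrative assumptions; see
# https://github.com/MASILab/UNesT for the actual implementation.
import torch
import torch.nn as nn


class BlockSelfAttention(nn.Module):
    """Multi-head self-attention applied independently inside each block."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * num_blocks, tokens_per_block, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out  # residual connection


def blockify(tokens: torch.Tensor, grid: int, block: int) -> torch.Tensor:
    """Group a (B, grid^3, C) token map into non-overlapping 3D blocks of
    block^3 spatially adjacent tokens -> (B * num_blocks, block^3, C)."""
    b, _, c = tokens.shape
    g = grid // block
    x = tokens.view(b, g, block, g, block, g, block, c)
    return x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(b * g ** 3, block ** 3, c)


def unblockify(blocks_: torch.Tensor, batch: int, grid: int, block: int) -> torch.Tensor:
    """Inverse of blockify: (B * num_blocks, block^3, C) -> (B, grid^3, C)."""
    g = grid // block
    c = blocks_.shape[-1]
    x = blocks_.view(batch, g, g, g, block, block, block, c)
    return x.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(batch, grid ** 3, c)


if __name__ == "__main__":
    B, grid, block, dim = 2, 8, 4, 32          # 8^3 patch tokens, blocks of 4^3
    tokens = torch.randn(B, grid ** 3, dim)    # patch embeddings of a 3D volume
    attn = BlockSelfAttention(dim)

    x = blockify(tokens, grid, block)          # local attention within each block
    x = attn(x)
    x = unblockify(x, B, grid, block)

    # Hierarchical aggregation: downsample the token map so that, at the next
    # level, each block spans a larger spatial extent of the original volume.
    pool = nn.Conv3d(dim, dim, kernel_size=2, stride=2)
    x3d = x.transpose(1, 2).reshape(B, dim, grid, grid, grid)
    x3d = pool(x3d)                            # grid 8 -> 4: adjacent blocks merge
    print(x3d.shape)                           # torch.Size([2, 32, 4, 4, 4])
```

Stacking several such block-attention/aggregation levels yields a hierarchical encoder whose multi-scale features can then feed a segmentation decoder, which is the role the encoder plays in UNesT according to the abstract.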