Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

Segment anything model for medical image analysis: An experimental study

Published: 2023 Aug 02
Authors: Maciej A Mazurowski, Haoyu Dong, Hanxue Gu, Jichen Yang, Nicholas Konz, Yixin Zhang
Source: MEDICAL IMAGE ANALYSIS

Abstract:

Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies widely depending on the dataset and the task, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as the segmentation of organs in computed tomography, and poorer in various other scenarios, such as the segmentation of brain tumors. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while the other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others.
SAM has the potential to make a significant impact on automated medical image segmentation, but appropriate care needs to be taken when using it. Code for evaluating SAM is publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation. Copyright © 2023 Elsevier B.V. All rights reserved.
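The abstract's evaluation hinges on two simple mechanics: deriving a point or box prompt from a ground-truth mask (to simulate an interactive click), and scoring the predicted mask against the ground truth with IoU. The sketch below illustrates both with plain NumPy; it is a minimal illustration of the general idea, not the authors' exact prompt-generation method, and the function names (`iou`, `point_prompt`, `box_prompt`) are our own.

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two binary masks (0.0 if both are empty)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 0.0

def point_prompt(mask: np.ndarray) -> np.ndarray:
    """Simulate a single positive click: the foreground pixel nearest the
    mask centroid (the centroid itself can fall outside a concave mask)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    i = np.argmin((ys - cy) ** 2 + (xs - cx) ** 2)
    return np.array([xs[i], ys[i]])  # (x, y) ordering, as SAM expects

def box_prompt(mask: np.ndarray) -> np.ndarray:
    """Tight bounding box around the mask, in XYXY format."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])
```

In the segment-anything library these prompts would be fed to `SamPredictor.predict` via its `point_coords`/`point_labels` and `box` arguments, and `iou` then compares the returned mask to the annotation; box prompts carry more spatial information than a single click, which is consistent with finding (3) above.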