Research Updates
The articles below are published ahead of final publication in an issue. Please cite them in the following format: authors, (year), title, journal, DOI.

SegCoFusion: An Integrative Multimodal Volumetric Segmentation Cooperating with Fusion Pipeline to Enhance Lesion Awareness.

Published: 2023 Sep 22
Authors: Yuanjie Gu, Yinghan Guan, Zekuan Yu, Biqin Dong
Journal: IEEE Journal of Biomedical and Health Informatics

Abstract:

Multimodal volumetric segmentation and fusion are two valuable techniques for surgical treatment planning, image-guided interventions, tumor growth detection, radiotherapy map generation, and related applications. In recent years, deep learning has demonstrated excellent capability in both tasks, yet these methods inevitably face bottlenecks. On the one hand, recent segmentation studies, especially the U-Net-style series, have reached a performance ceiling on segmentation tasks. On the other hand, it is almost impossible to capture a fusion ground truth in multimodal imaging, owing to differences in the physical principles of the imaging modalities. Hence, most existing studies of multimodal medical image fusion, which fuse only two modalities at a time with hand-crafted proportions, are subjective and task-specific. To address these concerns, this work proposes an integration of multimodal segmentation and fusion, named SegCoFusion, which consists of a novel feature-frequency-dividing network named FDNet and a segmentation part that uses a dual-single-path feature-supplementing strategy to optimize the segmentation inputs and couple with the fusion part. Focusing on multimodal brain tumor volumetric fusion and segmentation, qualitative and quantitative results demonstrate that SegCoFusion can break the performance ceiling of both segmentation and fusion methods. Moreover, the effectiveness of the proposed framework is also shown by comparison with state-of-the-art fusion methods on 2D two-modality fusion tasks, where our method achieves better fusion performance. Therefore, the proposed SegCoFusion offers a novel perspective: improving volumetric fusion performance by cooperating with segmentation, and thereby enhancing lesion awareness.
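
The abstract does not specify how FDNet divides features by frequency, so the following is a minimal, purely illustrative PyTorch sketch of one common way such a split is realized: a low-pass branch (average pooling followed by upsampling) and a high-pass residual. The class name FrequencyDividingBlock, the pooling-based decomposition, and all layer choices are assumptions for illustration, not the authors' published architecture.

```python
# Illustrative sketch only: the paper's FDNet is not reproduced here.
# Assumption: "frequency dividing" is approximated by a low-pass branch
# (average pooling + trilinear upsampling) and a high-pass residual.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyDividingBlock(nn.Module):
    """Splits a volumetric feature map into low- and high-frequency parts (hypothetical)."""
    def __init__(self, channels: int, pool: int = 2):
        super().__init__()
        self.pool = pool
        # Separate 3x3x3 convolutions refine each frequency band (assumed design).
        self.low_conv = nn.Conv3d(channels, channels, 3, padding=1)
        self.high_conv = nn.Conv3d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor):
        # Low-frequency path: blur by downsampling, then restore resolution.
        low = F.avg_pool3d(x, self.pool)
        low = F.interpolate(low, size=x.shape[2:], mode="trilinear",
                            align_corners=False)
        # High-frequency path: the residual detail removed by the low-pass branch.
        high = x - low
        return self.low_conv(low), self.high_conv(high)

if __name__ == "__main__":
    # Toy volumetric feature map, e.g. features from one MRI modality.
    feats = torch.randn(1, 16, 32, 32, 32)
    block = FrequencyDividingBlock(channels=16)
    low, high = block(feats)
    print(low.shape, high.shape)  # both torch.Size([1, 16, 32, 32, 32])
```

In a cooperation scheme like the one the abstract describes, the two bands could then be fused per modality and supplemented into the segmentation inputs; the exact dual-single-path wiring is specific to the paper and is not reconstructed here.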