Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

MFA-Net: Multiple Feature Association Network for medical image segmentation.

Publication date: 2023 Mar 28
Authors: Zhixun Li, Nan Zhang, Huiling Gong, Ruiyun Qiu, Wei Zhang
Source: COMPUTERS IN BIOLOGY AND MEDICINE

Abstract:

Medical image segmentation plays a crucial role in computer-aided diagnosis. However, due to the large variability of medical images, accurate segmentation is a highly challenging task. In this paper, we present a novel medical image segmentation network named the Multiple Feature Association Network (MFA-Net), which is based on deep learning techniques. The MFA-Net uses an encoder-decoder architecture with skip connections as its backbone, and a parallel dilated convolutions arrangement (PDCA) module is integrated between the encoder and the decoder to capture more representative deep features. Furthermore, a multi-scale feature restructuring module (MFRM) is introduced to restructure and fuse the deep features of the encoder. To enhance global attention perception, the proposed global attention stacking (GAS) modules are cascaded on the decoder. The proposed MFA-Net leverages novel global attention mechanisms to improve segmentation performance at different feature scales. We evaluated our MFA-Net on four segmentation tasks, covering intestinal polyps, liver tumors, prostate cancer, and skin lesions. Our experimental results and ablation study demonstrate that the proposed MFA-Net outperforms state-of-the-art methods in terms of global positioning and local edge recognition. Copyright © 2023 Elsevier Ltd. All rights reserved.
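
To make the described architecture more concrete, below is a minimal PyTorch sketch of an encoder-decoder segmentation network with skip connections, a parallel-dilated-convolution bottleneck, and a global-attention gate on the decoder, loosely following the modules named in the abstract. This is not the authors' code: the internals of PDCA and GAS, all channel counts and dilation rates are assumptions for illustration, and the MFRM is omitted.

```python
# Minimal sketch (assumed design, not the paper's implementation):
# encoder-decoder with a skip connection, a parallel-dilated-convolution
# bottleneck (stand-in for PDCA), and a global channel-attention gate on
# the decoder (stand-in for GAS).
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    """Two 3x3 conv + BN + ReLU layers, a common encoder/decoder building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )


class PDCA(nn.Module):
    """Hypothetical parallel dilated convolutions: branches with different
    dilation rates are concatenated and fused by a 1x1 convolution."""
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates]
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class GAS(nn.Module):
    """Hypothetical global attention: channel-wise gating from global pooling,
    standing in for the paper's global attention stacking module."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid()
        )

    def forward(self, x):
        return x * self.gate(x)


class MiniSegNet(nn.Module):
    """Toy two-level encoder-decoder for single-class segmentation."""
    def __init__(self, in_ch=3, num_classes=1, base=32):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, base), conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = PDCA(base * 2)          # placed between encoder and decoder
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)    # skip connection is concatenated here
        self.gas = GAS(base)                      # attention applied on the decoder
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                         # full-resolution encoder features
        s2 = self.enc2(self.pool(s1))             # downsampled encoder features
        b = self.bottleneck(s2)                   # multi-dilation context
        d = self.dec1(torch.cat([self.up(b), s1], dim=1))
        return self.head(self.gas(d))             # per-pixel segmentation logits


# Smoke test with a dummy image tensor.
print(MiniSegNet()(torch.randn(1, 3, 128, 128)).shape)  # -> torch.Size([1, 1, 128, 128])
```

The key design idea the abstract points to is reflected here: parallel dilated branches enlarge the receptive field at the bottleneck (helping global localization), while attention gating on decoder features re-weights channels before the segmentation head (helping local boundary recognition). The actual MFA-Net stacks such attention modules at multiple decoder scales and adds the MFRM, which this sketch does not attempt to reproduce.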