Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

DRI-Net: segmentation of polyp in colonoscopy images using dense residual-inception network.

Publication date: 2023
Authors: Xiaoke Lan, Honghuan Chen, Wenbing Jin
Source: Frontiers in Physiology

Abstract:

Colorectal cancer is a common malignant tumor of the gastrointestinal tract that usually evolves from adenomatous polyps. However, because polyps are similar in color to the surrounding tissue in colonoscopy images and vary widely in size, shape, and texture, intelligent diagnosis still faces great challenges. For this reason, we present a novel dense residual-inception network (DRI-Net), which uses U-Net as its backbone. First, to increase the width of the network, a modified residual-inception block is designed to replace traditional convolutions, improving the network's capacity and expressiveness. Moreover, a dense connection scheme is adopted to increase network depth so that more complex feature inputs can be fitted. Finally, an improved down-sampling module is built to reduce the loss of image feature information. For a fair comparison, we validated all methods on the Kvasir-SEG dataset using three popular evaluation metrics. Experimental results consistently show that DRI-Net attains 77.72%, 85.94%, and 86.51% on IoU, Mcc, and Dice, respectively, which are 1.41%, 0.66%, and 0.75% higher than the second-best model. Ablation studies likewise demonstrate the effectiveness of our approach for colorectal semantic segmentation. Copyright © 2023 Lan, Chen and Jin.
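For readers unfamiliar with the three reported metrics: IoU, Dice, and the Matthews correlation coefficient (Mcc) are all computed from the pixel-level confusion matrix of a predicted mask against the ground truth. The sketch below is a minimal NumPy illustration of these standard formulas, not the authors' evaluation code; the function name and the assumption of binary masks are ours.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """IoU, Dice, and Mcc for a pair of binary segmentation masks.

    pred, target: arrays of the same shape, nonzero = polyp pixel.
    Assumes at least one positive pixel in pred or target.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)

    # Pixel-level confusion matrix.
    tp = float(np.sum(pred & target))
    tn = float(np.sum(~pred & ~target))
    fp = float(np.sum(pred & ~target))
    fn = float(np.sum(~pred & target))

    iou = tp / (tp + fp + fn)                 # intersection over union
    dice = 2.0 * tp / (2.0 * tp + fp + fn)    # Dice / F1 coefficient
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0

    return {"IoU": iou, "Dice": dice, "Mcc": mcc}
```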