Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.


A CNN-based approach for joint segmentation and quantification of nuclei and NORs in AgNOR-stained images.

Published: 2023 Sep 07
Authors: Maikel M Rönnau, Tatiana W Lepper, Luara N Amaral, Pantelis V Rados, Manuel M Oliveira
Source: Comput Meth Prog Bio

Abstract:

Oral cancer is the sixth most common type of human cancer. Brush cytology for counting Argyrophilic Nucleolar Organizer Regions (AgNORs) can help detect mouth cancer early, lowering patient mortality. However, the manual counting of AgNORs still in use today is time-consuming, labor-intensive, and error-prone. The goal of our work is to address these shortcomings by proposing a convolutional neural network (CNN) based method to automatically segment individual nuclei and AgNORs in microscope slide images and count the number of AgNORs within each nucleus. We systematically defined, trained, and tested 102 CNNs in the search for a high-performing solution. This included the evaluation of 51 network architectures combining 17 encoders with 3 decoders and 2 loss functions. These CNNs were trained and evaluated on a new AgNOR-stained image dataset of epithelial cells from the oral mucosa containing 1,171 images from 48 patients, with ground truth annotated by specialists. The annotations were greatly facilitated by a semi-automatic procedure developed in our project. Overlapping nuclei, which tend to hide AgNORs and thus affect their true count, were discarded using an automatic solution also developed in our project. Besides the evaluation on the test dataset, the robustness of the best-performing model was evaluated against the results produced by a group of human experts on a second dataset. The best-performing CNN model on the test dataset consisted of DenseNet-169 + LinkNet with Focal Loss (DenseNet-169 as encoder and LinkNet as decoder). It obtained a Dice score of 0.90 and an intersection over union (IoU) of 0.84. The counting of nuclei and AgNORs achieved precision and recall of 0.94 and 0.90 for nuclei, and 0.82 and 0.74 for AgNORs, respectively.
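The Dice score and IoU reported above have standard definitions for binary segmentation masks. The sketch below is illustrative only (the function names `dice_score` and `iou_score` are ours, not from the paper's code) and assumes predictions and ground truth are given as binary NumPy arrays:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

def iou_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union between two binary masks: |A∩B| / |A∪B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Tiny example: predicted mask covers two pixels, ground truth one of them.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(dice_score(pred, truth))  # 2*1 / (2+1) ≈ 0.667
print(iou_score(pred, truth))   # 1 / 2 = 0.5
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is consistent with the reported 0.90 Dice versus 0.84 IoU.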
Our solution achieved a performance similar to that of human experts on a set of 291 images from 6 new patients, obtaining an Intraclass Correlation Coefficient (ICC) of 0.91 for nuclei and 0.81 for AgNORs, with 95% confidence intervals of [0.89, 0.93] and [0.77, 0.84], respectively, and p-values < 0.001, confirming its statistical significance. Our AgNOR-stained image dataset is the most diverse publicly available AgNOR-stained image dataset in terms of number of patients and the first for oral cells. CNN-based joint segmentation and quantification of nuclei and NORs in AgNOR-stained images achieves expert-like performance levels while being orders of magnitude faster than the latter. Our solution demonstrated this by showing strong agreement with the results produced by a group of specialists, highlighting its potential to accelerate diagnostic workflows. Our trained model, code, and dataset are available and can stimulate new research in early oral cancer detection. Copyright © 2023 Elsevier B.V. All rights reserved.