The role of unpaired image-to-image translation for stain color normalization in colorectal cancer histology classification.
Publication date: 2023 Mar 26
Authors:
Nicola Altini, Tommaso Maria Marvulli, Francesco Alfredo Zito, Mariapia Caputo, Stefania Tommasi, Amalia Azzariti, Antonio Brunetti, Berardino Prencipe, Eliseo Mattioli, Simona De Summa, Vitoantonio Bevilacqua
Source:
Comput Methods Programs Biomed
Abstract:
Histological assessment of colorectal cancer (CRC) tissue is a crucial and demanding task for pathologists. Unfortunately, manual annotation by trained specialists is a burdensome operation that suffers from problems such as intra- and inter-pathologist variability. Computational models are revolutionizing the Digital Pathology field, offering reliable and fast approaches to challenges like tissue segmentation and classification. In this respect, an important obstacle to overcome is the stain color variation among different laboratories, which can degrade the performance of classifiers. In this work, we investigated the role of Unpaired Image-to-Image Translation (UI2IT) models for stain color normalization in CRC histology and compared them with classical normalization techniques for Hematoxylin-Eosin (H&E) images.

Five Deep Learning normalization models based on Generative Adversarial Networks (GANs), all belonging to the UI2IT paradigm, were thoroughly compared in order to build a robust stain color normalization pipeline. To avoid training a style-transfer GAN between every pair of data domains, we introduce the concept of training on a meta-domain, which contains data coming from a wide variety of laboratories. The proposed framework yields a large saving in training time, since only a single image normalization model needs to be trained for a given target laboratory. To prove the applicability of the proposed workflow in clinical practice, we conceived a novel perceptive quality measure, which we define as Pathologist Perceptive Quality (PPQ). The second stage involved the classification of tissue types in CRC histology, where deep features extracted from Convolutional Neural Networks were exploited to realize a Computer-Aided Diagnosis system based on a Support Vector Machine (SVM). To prove the reliability of the system on new data, an external validation set of N = 15,857 tiles was collected at IRCCS Istituto Tumori "Giovanni Paolo II".

Exploiting the meta-domain made it possible to train normalization models that achieved better classification results than normalization models explicitly trained on the source domain. The PPQ metric was found to correlate with the quality of the distributions (Fréchet Inception Distance, FID) and with the similarity of the transformed image to the original one (Learned Perceptual Image Patch Similarity, LPIPS), showing that GAN quality measures introduced for natural image processing tasks can be linked to pathologist evaluation of H&E images. Furthermore, FID was found to correlate with the accuracy of the downstream classifiers. The SVM trained on DenseNet201 features obtained the highest classification results in all configurations. The normalization method based on the fast variant of CUT (Contrastive Unpaired Translation), FastCUT, trained with the meta-domain paradigm, achieved the best classification result for the downstream task and, correspondingly, showed the highest FID on the classification dataset.

Stain color normalization is a difficult but fundamental problem in the histopathological setting. Several measures should be considered to properly assess normalization methods, so that they can be introduced into clinical practice. UI2IT frameworks offer a powerful and effective way to perform the normalization process, providing realistic images with proper colorization, unlike traditional normalization methods, which introduce color artifacts. By adopting the proposed meta-domain framework, the training time can be reduced and the accuracy of downstream classifiers can be increased.

Copyright © 2023 The Author(s). Published by Elsevier B.V. All rights reserved.
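As an illustration of the meta-domain idea summarized in the abstract, the sketch below pools H&E tiles from several source laboratories into a single dataset and applies one already-trained UI2IT generator that maps this meta-domain to the target laboratory's stain style. The folder layout, the TorchScript checkpoint name, and the preprocessing are assumptions made for the example, not the authors' released code.

```python
# Sketch: pool tiles from several laboratories into a single "meta-domain"
# and apply one trained UI2IT generator that renders them in the target
# laboratory's stain style. Paths, checkpoint name, and tile size are
# illustrative assumptions.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                      # scale to [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3)  # scale to [-1, 1], as most GAN generators expect
])

# Meta-domain: H&E tiles collected from several source laboratories (hypothetical layout).
lab_folders = ["tiles/lab_A", "tiles/lab_B", "tiles/lab_C"]
meta_domain = ConcatDataset([datasets.ImageFolder(f, transform=tfm) for f in lab_folders])
loader = DataLoader(meta_domain, batch_size=16, shuffle=False)

# Hypothetical generator trained once (e.g. FastCUT-style) to translate
# meta-domain tiles into the target laboratory's stain appearance.
G = torch.jit.load("fastcut_meta2target.pt").eval()

with torch.no_grad():
    for tiles, _ in loader:
        normalized = G(tiles)                    # tiles in the target stain style
        normalized = (normalized + 1.0) / 2.0    # back to [0, 1] for saving / classification
```

Training one meta-domain → target model in this fashion replaces the pairwise source → target models that would otherwise be needed for every laboratory combination, which is where the reported saving in training time comes from.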
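The abstract relates the PPQ score to FID and LPIPS. The following sketch shows how such quantities could be computed and correlated with pathologist ratings, assuming torchmetrics (with the image extras), lpips, and scipy are installed; the image tensors and the PPQ-style ratings are placeholders, not the study's data.

```python
# Sketch: compute FID and LPIPS for normalized tiles and correlate an image
# metric with (placeholder) pathologist ratings, in the spirit of the PPQ
# analysis described in the abstract.
import torch
import lpips
from torchmetrics.image.fid import FrechetInceptionDistance
from scipy.stats import spearmanr

# FID between real target-lab tiles and GAN-normalized tiles (uint8, NCHW).
fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 256, (32, 3, 224, 224), dtype=torch.uint8)  # placeholder batch
fake = torch.randint(0, 256, (32, 3, 224, 224), dtype=torch.uint8)  # placeholder batch
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())

# LPIPS between each original tile and its normalized counterpart (inputs in [-1, 1]).
lpips_fn = lpips.LPIPS(net="alex")
orig = torch.rand(8, 3, 224, 224) * 2 - 1
norm = torch.rand(8, 3, 224, 224) * 2 - 1
lpips_scores = lpips_fn(orig, norm).flatten().tolist()

# Correlate the perceptual metric with hypothetical PPQ-style pathologist ratings.
ppq_scores = [4, 3, 5, 2, 4, 3, 5, 4]  # placeholder ratings
rho, pval = spearmanr(lpips_scores, ppq_scores)
print(f"Spearman rho={rho:.2f}, p={pval:.3f}")
```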
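For the downstream classifier, a minimal sketch of the DenseNet201-features-plus-SVM design mentioned in the abstract is given below; the SVM kernel, the feature pooling, and the placeholder tiles and labels are illustrative assumptions rather than the exact configuration used in the paper.

```python
# Sketch: DenseNet201 feature extractor feeding an SVM, mirroring the
# downstream CRC tissue classifier described in the abstract. The 1920-dim
# descriptor is the DenseNet201 feature-map depth after global average pooling.
import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

backbone = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def deep_features(batch: torch.Tensor) -> torch.Tensor:
    """Return one 1920-dim descriptor per tile (NCHW; real tiles should be ImageNet-normalized)."""
    fmap = backbone.features(batch)                   # (N, 1920, H', W')
    fmap = F.relu(fmap)
    return F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (N, 1920)

# Placeholder tiles and tissue-type labels standing in for a real training set.
tiles = torch.rand(64, 3, 224, 224)
labels = torch.randint(0, 9, (64,)).numpy()           # e.g. 9 CRC tissue classes

X = deep_features(tiles).numpy()
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```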