Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

A new xAI framework with feature explainability for tumors decision-making in Ultrasound data: comparing with Grad-CAM.

Publication date: 2023 Apr 07
Authors: Di Song, Jincao Yao, Yitao Jiang, Siyuan Shi, Chen Cui, Liping Wang, Lijing Wang, Huaiyu Wu, Hongtian Tian, Xiuqin Ye, Di Ou, Wei Li, Na Feng, Weiyun Pan, Mei Song, Jinfeng Xu, Dong Xu, Linghu Wu, Fajin Dong
Source: Comput Meth Prog Bio

Abstract:

The value of implementing artificial intelligence (AI) on ultrasound screening for thyroid cancer has been acknowledged, with numerous early studies confirming AI might help physicians acquire more accurate diagnoses. However, the black box nature of AI's decision-making process makes it difficult for users to grasp the foundation of AI's predictions. Furthermore, explainability is not only related to AI performance, but also responsibility and risk in medical diagnosis. In this paper, we offer Explainer, an intrinsically explainable framework that can categorize images and create heatmaps highlighting the regions on which its prediction is based. A dataset of 19,341 thyroid ultrasound images with pathological results and physician-annotated TI-RADS features is used to train and test the robustness of the proposed framework. Then we conducted a benign-malignant classification study to determine whether physicians perform better with the assistance of the Explainer than they do alone or with Gradient-weighted Class Activation Mapping (Grad-CAM). Reader studies show that the Explainer can achieve a more accurate diagnosis while explaining heatmaps, and that physicians' performances are improved when assisted by the Explainer. Case study results confirm that the Explainer is capable of locating more reasonable and feature-related regions than Grad-CAM. The Explainer offers physicians a tool to understand the basis of AI predictions and evaluate their reliability, which has the potential to unbox the "black box" of medical imaging AI. Copyright © 2023 Elsevier B.V. All rights reserved.
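
For context, the sketch below illustrates how the Grad-CAM baseline that the authors compare against typically produces a heatmap for a CNN classifier: gradients of the target class score are global-average-pooled into per-channel weights, which weight the feature maps before a ReLU and upsampling to the input resolution. This is a minimal illustrative sketch of standard Grad-CAM only, not the paper's Explainer framework; the backbone network, target layer, and input size are assumptions for demonstration.

```python
# Minimal Grad-CAM sketch (illustrative assumptions: ResNet-18 backbone,
# last block of layer4 as the target layer, 224x224 input).
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a Grad-CAM heatmap of shape (H, W) for an image tensor (1, 3, H, W)."""
    activations, gradients = [], []

    # Hooks capture the target layer's feature maps and their gradients.
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()          # backprop the chosen class score
    fwd.remove(); bwd.remove()

    acts, grads = activations[0], gradients[0]              # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)          # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True)) # weighted sum of feature maps
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam[0, 0].detach()

# Example usage with an untrained generic backbone (assumed, not the paper's network).
model = models.resnet18(weights=None).eval()
heatmap = grad_cam(model, torch.randn(1, 3, 224, 224), model.layer4[-1])
```

Because Grad-CAM weights feature maps only by pooled gradients of a single layer, its heatmaps can be coarse and loosely tied to clinically meaningful features, which is the limitation the intrinsically explainable Explainer framework is reported to address.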