Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

Interactive Explainable Deep Learning Model Informs Prostate Cancer Diagnosis at MRI.

Published: 2023 Apr 11
Authors: Charlie A Hamm, Georg L Baumgärtner, Felix Biessmann, Nick L Beetz, Alexander Hartenstein, Lynn J Savic, Konrad Froböse, Franziska Dräger, Simon Schallenberg, Madhuri Rudolph, Alexander D J Baur, Bernd Hamm, Matthias Haas, Sebastian Hofbauer, Hannes Cash, Tobias Penzkofer
Source: RADIOLOGY

Abstract:

Background Clinically significant prostate cancer (PCa) diagnosis at MRI requires accurate and efficient radiologic interpretation. Although artificial intelligence may assist in this task, lack of transparency has limited clinical translation.

Purpose To develop an explainable artificial intelligence (XAI) model for clinically significant PCa diagnosis at biparametric MRI using Prostate Imaging Reporting and Data System (PI-RADS) features for classification justification.

Materials and Methods This retrospective study included consecutive patients with histopathologic analysis-proven prostatic lesions who underwent biparametric MRI and biopsy between January 2012 and December 2017. After image annotation by two radiologists, a deep learning model was trained to detect the index lesion; classify PCa, clinically significant PCa (Gleason score ≥ 7), and benign lesions (eg, prostatitis); and justify classifications using PI-RADS features. Lesion- and patient-based performance were assessed using fivefold cross-validation and areas under the receiver operating characteristic curve. Clinical feasibility was tested in a multireader study and by using the external PROSTATEx data set. Statistical evaluation of the multireader study included the Mann-Whitney U and exact Fisher-Yates tests.

Results Overall, 1224 men (median age, 67 years; IQR, 62-73 years) had 3260 prostatic lesions (372 lesions with Gleason score of 6; 743 lesions with Gleason score of ≥ 7; 2145 benign lesions). XAI reliably detected clinically significant PCa in internal (area under the receiver operating characteristic curve, 0.89) and external test sets (area under the receiver operating characteristic curve, 0.87) with a sensitivity of 93% (95% CI: 87, 98) and an average of one false-positive finding per patient. Accuracy of the visual and textual explanations of XAI classifications was 80% (1080 of 1352), confirmed by experts. XAI-assisted readings improved the confidence (4.1 vs 3.4 on a five-point Likert scale; P = .007) of nonexperts in assessing PI-RADS 3 lesions, reducing reading time by 58 seconds (P = .009).

Conclusion The explainable AI model reliably detected and classified clinically significant prostate cancer and improved the confidence and reading time of nonexperts while providing visual and textual explanations using well-established imaging features. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Chapiro in this issue.
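The evaluation recipe named in the Materials and Methods — fivefold cross-validation scored by the area under the ROC curve, with the Mann-Whitney U statistic also appearing in the reader study — can be sketched in a few lines. Everything below is illustrative, not the authors' code: the synthetic data, the absence of a training step, and the `roc_auc` helper are assumptions; the study itself trained a deep learning model on biparametric MRI.

```python
# Minimal sketch of fivefold held-out AUC evaluation on synthetic data.
# The rank-sum identity AUC = U / (n1 * n0) links the ROC metric to the
# Mann-Whitney U statistic mentioned in the abstract.
import random

def roc_auc(labels, scores):
    """Area under the ROC curve via average ranks (ties handled)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):                      # average 1-based ranks over ties
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j + 2) / 2  # mean of ranks i+1 .. j+1
        i = j + 1
    n1 = sum(labels)                           # positives
    n0 = len(labels) - n1                      # negatives
    u = sum(r for r, y in zip(ranks, labels) if y == 1) - n1 * (n1 + 1) / 2
    return u / (n1 * n0)

# Synthetic lesion-level data: positives score higher on average, standing
# in for a trained model's output probabilities.
random.seed(0)
data = [(y, random.gauss(1.0 if y else 0.0, 1.0))
        for y in (int(random.random() < 0.4) for _ in range(500))]

# Fivefold evaluation loop. No training step is shown; the point is the
# per-fold held-out scoring, as in a lesion-based cross-validated analysis.
random.shuffle(data)
folds = [data[k::5] for k in range(5)]
aucs = [roc_auc([y for y, _ in f], [s for _, s in f]) for f in folds]
mean_auc = sum(aucs) / len(aucs)
print(f"per-fold AUCs: {[round(a, 2) for a in aucs]}, mean: {mean_auc:.2f}")
```

Reporting the mean over the five held-out folds, as above, is what yields a single summary AUC of the kind quoted in the Results (0.89 internal, 0.87 external).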