Potential of ChatGPT and GPT-4 for Data Mining of Free-Text CT Reports on Lung Cancer.
Published: 2023 Sep
Authors:
Matthias A Fink, Arved Bischoff, Christoph A Fink, Martin Moll, Jonas Kroschke, Luca Dulz, Claus Peter Heußel, Hans-Ulrich Kauczor, Tim F Weber
Source:
RADIOLOGY
Abstract:
Background: The latest large language models (LLMs) solve unseen problems via user-defined text prompts without the need for retraining, offering potentially more efficient information extraction from free-text medical records than manual annotation.

Purpose: To compare the performance of the LLMs ChatGPT and GPT-4 in data mining and labeling oncologic phenotypes from free-text CT reports on lung cancer by using user-defined prompts.

Materials and Methods: This retrospective study included patients who underwent lung cancer follow-up CT between September 2021 and March 2023. A subset of 25 reports was reserved for prompt engineering to instruct the LLMs in extracting lesion diameters, labeling metastatic disease, and assessing oncologic progression. This output was fed into a rule-based natural language processing pipeline to match ground truth annotations from four radiologists and derive performance metrics. The oncologic reasoning of the LLMs was rated on a five-point Likert scale for factual correctness and accuracy. The occurrence of confabulations was recorded. Statistical analyses included Wilcoxon signed-rank and McNemar tests.

Results: On 424 CT reports from 424 patients (mean age, 65 years ± 11 [SD]; 265 male), GPT-4 outperformed ChatGPT in extracting lesion parameters (98.6% vs 84.0%, P < .001), resulting in 96% correctly mined reports (vs 67% for ChatGPT, P < .001). GPT-4 achieved higher accuracy in identification of metastatic disease (98.1% [95% CI: 97.7, 98.5] vs 90.3% [95% CI: 89.4, 91.0]) and higher performance in generating correct labels for oncologic progression (F1 score, 0.96 [95% CI: 0.94, 0.98] vs 0.91 [95% CI: 0.89, 0.94]) (both P < .001). In oncologic reasoning, GPT-4 had higher Likert scale scores than ChatGPT for factual correctness (4.3 vs 3.9) and accuracy (4.4 vs 3.3), with a lower rate of confabulation (1.7% vs 13.7%) (all P < .001).

Conclusion: When using user-defined prompts, GPT-4 outperformed ChatGPT in extracting oncologic phenotypes from free-text CT reports on lung cancer and demonstrated better oncologic reasoning with fewer confabulations.

© RSNA, 2023. Supplemental material is available for this article. See also the editorial by Hafezi-Nejad and Trivedi in this issue.
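
The abstract describes the extraction step only at a high level; the study's actual prompts and pipeline code are not reproduced here. Below is a minimal Python sketch of that general workflow, assuming the OpenAI chat completions API. The prompt wording, JSON schema, and helper name mine_report are illustrative assumptions, not the authors' materials.

```python
# Hypothetical sketch of prompt-based extraction from a free-text CT report.
# The study's exact prompts are not published; this schema is an assumption.
# Requires: pip install openai (and OPENAI_API_KEY in the environment).
import json
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a radiology data-mining assistant. From the CT report, "
    "extract every target lesion with its diameter in mm, state whether "
    "metastatic disease is described, and classify oncologic progression "
    "as one of: progression, stable, response. Answer with JSON only: "
    '{"lesions": [{"site": str, "diameter_mm": float}], '
    '"metastatic_disease": bool, "progression": str}'
)

def mine_report(report_text: str, model: str = "gpt-4") -> dict:
    """Run one report through the user-defined prompt and parse the answer."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output eases rule-based post-processing
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
    )
    # In the study, a rule-based NLP pipeline matched this output against
    # radiologist annotations; here we only parse the (assumed) JSON reply.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    demo = ("Follow-up chest CT: the left upper lobe mass now measures "
            "34 x 28 mm (previously 27 x 22 mm). New hepatic lesions "
            "suspicious for metastases. Overall: tumor progression.")
    print(mine_report(demo))
```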
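The Results section reports accuracy, F1 scores with 95% CIs, and McNemar tests on the paired model outputs. As a hedged illustration of how such a comparison could be computed (the study's statistical code is not given; the library choices and the clearly labeled synthetic placeholder labels below are assumptions):

```python
# Sketch: comparing two models' binary labels against a reference standard.
# Synthetic placeholder data only; the study used radiologist ground truth.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Stand-ins for per-report labels (e.g., metastatic disease yes/no) on 424 CTs.
y_true = rng.integers(0, 2, 424)
y_gpt4 = y_true.copy()
y_chatgpt = y_true.copy()
y_gpt4[rng.random(424) < 0.02] ^= 1     # ~2% errors, mimicking 98.1% accuracy
y_chatgpt[rng.random(424) < 0.10] ^= 1  # ~10% errors, mimicking 90.3% accuracy

print("GPT-4 accuracy:  ", accuracy_score(y_true, y_gpt4))
print("ChatGPT accuracy:", accuracy_score(y_true, y_chatgpt))
print("GPT-4 F1:        ", f1_score(y_true, y_gpt4))

# McNemar test on paired correctness: the off-diagonal cells count reports
# that exactly one of the two models labeled correctly.
ok4 = y_gpt4 == y_true
ok35 = y_chatgpt == y_true
table = [[int(np.sum(ok4 & ok35)), int(np.sum(ok4 & ~ok35))],
         [int(np.sum(~ok4 & ok35)), int(np.sum(~ok4 & ~ok35))]]
print("McNemar P value: ", mcnemar(table, exact=True).pvalue)

# Percentile-bootstrap 95% CI for F1, mirroring the abstract's intervals.
boot = []
for _ in range(2000):
    idx = rng.integers(0, 424, 424)  # resample reports with replacement
    boot.append(f1_score(y_true[idx], y_gpt4[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"GPT-4 F1 95% CI:  [{lo:.2f}, {hi:.2f}]")
```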