A multi-label CNN model for the automatic detection and segmentation of gliomas using [18F]FET PET imaging.
Published: 2023 Mar 18
Authors:
Masoomeh Rahimpour, Ronald Boellaard, Sander Jentjens, Wies Deckers, Karolien Goffin, Michel Koole
Journal:
Eur J Nucl Med Mol Imaging
Abstract:
The aim of this study was to develop a convolutional neural network (CNN) for the automatic detection and segmentation of gliomas using [18F]fluoroethyl-L-tyrosine ([18F]FET) PET.

Ninety-three patients (84 in-house/7 external) who underwent a 20-40-min static [18F]FET PET scan were retrospectively included. Lesions and background regions were defined by two nuclear medicine physicians using the MIM software, such that delineations by one expert reader served as ground truth for training and testing the CNN models, while delineations by the second expert reader were used to evaluate inter-reader agreement. A multi-label CNN was developed to segment both the lesion and the background region, while a single-label CNN was implemented for lesion-only segmentation. Lesion detectability was evaluated by classifying [18F]FET PET scans as negative when no tumor was segmented and as positive otherwise, while segmentation performance was assessed using the Dice similarity coefficient (DSC) and the segmented tumor volume. Quantitative accuracy was evaluated using the maximal and mean tumor-to-mean-background uptake ratios (TBRmax/TBRmean). CNN models were trained and tested by threefold cross-validation (CV) using the in-house data, while the external data were used for an independent evaluation of the generalizability of the two CNN models.

Based on the threefold CV, the multi-label CNN model achieved 88.9% sensitivity and 96.5% precision for discriminating between positive and negative [18F]FET PET scans, compared to 35.3% sensitivity and 83.1% precision obtained with the single-label CNN model. In addition, the multi-label CNN allowed an accurate estimation of the maximal/mean lesion uptake and the mean background uptake, resulting in an accurate TBRmax/TBRmean estimation compared to a semi-automatic approach.
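The evaluation metrics used above (DSC and TBRmax/TBRmean) follow standard definitions; the sketch below is an illustrative reconstruction from those definitions, not the authors' implementation, and all array names are hypothetical:

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def tbr(pet: np.ndarray, lesion: np.ndarray, background: np.ndarray):
    """TBRmax and TBRmean: maximal and mean lesion uptake divided by
    the mean background uptake, as in [18F]FET PET quantification."""
    bg_mean = pet[background.astype(bool)].mean()
    lesion_vals = pet[lesion.astype(bool)]
    return lesion_vals.max() / bg_mean, lesion_vals.mean() / bg_mean
```

With the multi-label model, both the lesion and background masks come from the network output, which is what removes the manual background delineation step from the TBR computation.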
In terms of lesion segmentation, the multi-label CNN model (DSC = 74.6 ± 23.1%) performed on par with the single-label CNN model (DSC = 73.7 ± 23.2%), with tumor volumes estimated by the single-label and multi-label models (22.9 ± 23.6 ml and 23.1 ± 24.3 ml, respectively) closely approximating those estimated by the expert reader (24.1 ± 24.4 ml). The DSCs of both CNN models were in line with the DSC of the second expert reader relative to the lesion segmentations by the first expert reader, while the detection and segmentation performance of both CNN models as determined with the in-house data was confirmed by the independent evaluation using the external data.

The proposed multi-label CNN model detected positive [18F]FET PET scans with high sensitivity and precision. Once a lesion was detected, accurate tumor segmentation and estimation of background activity were achieved, resulting in an automatic and accurate TBRmax/TBRmean estimation, such that user interaction and potential inter-reader variability can be minimized.

© 2023. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
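The scan-level detection metrics reported above (sensitivity and precision, with a scan counted as predicted positive whenever the model segments any tumor voxels) follow the standard confusion-matrix definitions; a minimal sketch with hypothetical inputs:

```python
def detection_metrics(pred_positive, true_positive):
    """Scan-level sensitivity and precision from per-scan labels.

    pred_positive: per-scan booleans, True if the model segmented any tumor.
    true_positive: per-scan booleans, True if the expert reader found a tumor.
    """
    tp = sum(p and t for p, t in zip(pred_positive, true_positive))
    fp = sum(p and not t for p, t in zip(pred_positive, true_positive))
    fn = sum(t and not p for p, t in zip(pred_positive, true_positive))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return sensitivity, precision
```

Under this scheme, the single-label model's low sensitivity (35.3%) corresponds to many positive scans in which it segmented no tumor voxels at all, while the multi-label model's joint lesion/background output largely avoids those empty segmentations.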