Research Updates
The articles below are published ahead of final publication in an issue. Please cite them in the following format: authors (year), title, journal, DOI.

Deep learning model fusion improves lung tumor segmentation accuracy across variable training-to-test dataset ratios.

Published: 2023 Aug 07
Authors: Yunhao Cui, Hidetaka Arimura, Tadamasa Yoshitake, Yoshiyuki Shioyama, Hidetake Yabuuchi
Source: Physical and Engineering Sciences in Medicine

Abstract:

This study aimed to investigate the robustness of a deep learning (DL) fusion model for low training-to-test ratio (TTR) datasets in the segmentation of gross tumor volumes (GTVs) in three-dimensional planning computed tomography (CT) images for lung cancer stereotactic body radiotherapy (SBRT). A total of 192 patients with lung cancer (solid tumor, 118; part-solid tumor, 53; ground-glass opacity, 21) who underwent SBRT were included in this study. Regions of interest in the GTVs were cropped based on GTV centroids from planning CT images. Three DL models, 3D U-Net, V-Net, and dense V-Net, were trained to segment the GTV regions. Nine fusion models were constructed with logical AND, logical OR, and voting of the outputs of two or three of the DL models. TTR was defined as the ratio of the number of cases in the training dataset to that in the test dataset. The Dice similarity coefficients (DSCs) and Hausdorff distances (HDs) of the 12 models were assessed with TTRs of 1.00 (training data : validation data : test data = 40:20:40), 0.791 (35:20:45), 0.531 (31:10:59), 0.291 (20:10:70), and 0.116 (10:5:85). The voting fusion model achieved the highest DSCs among the 12 models, 0.829 to 0.798 across all TTRs, whereas the other models showed DSCs of 0.818 to 0.804 for a TTR of 1.00 and 0.788 to 0.742 for a TTR of 0.116; its HDs of 5.40 ± 3.00 to 6.07 ± 3.26 mm were likewise better than those of any single DL model. The findings suggest that the proposed voting fusion model is a robust approach for low TTR datasets in segmenting GTVs in planning CT images of lung cancer SBRT. © 2023. Australasian College of Physical Scientists and Engineers in Medicine.
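The fusion operators and the DSC metric described in the abstract are simple to express in code. Below is a minimal NumPy sketch of logical AND, logical OR, and majority-voting fusion of binary segmentation masks, together with the Dice similarity coefficient; the function names and the random stand-in masks are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def fuse_masks(masks, method="vote"):
    """Fuse binary segmentation masks from several models.

    masks  : list of 3D boolean (or 0/1) arrays of identical shape
    method : "and" (logical AND), "or" (logical OR), or
             "vote" (strict majority vote across models)
    """
    stack = np.stack([m.astype(bool) for m in masks])  # shape: (n_models, z, y, x)
    if method == "and":
        return np.logical_and.reduce(stack)
    if method == "or":
        return np.logical_or.reduce(stack)
    if method == "vote":
        # A voxel is foreground if more than half of the models agree.
        return stack.sum(axis=0) > len(masks) / 2
    raise ValueError(f"unknown fusion method: {method}")

def dice_similarity(pred, truth):
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Illustrative usage with random masks standing in for the
# 3D U-Net, V-Net, and dense V-Net outputs (hypothetical data).
rng = np.random.default_rng(0)
outputs = [rng.random((32, 32, 32)) > 0.5 for _ in range(3)]
fused = fuse_masks(outputs, method="vote")
print(dice_similarity(fused, outputs[0]))
```

Note that under this strict-majority rule, voting over two models reduces to logical AND, so majority voting is only a distinct operator for the three-model combination; the paper does not spell out its pairwise tie-handling, so this is one plausible reading.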