Research Updates
The articles below are published ahead of final publication in an issue. Please cite them in the following format: authors (year), title, journal, DOI.

Cross-parametric generative adversarial network-based magnetic resonance image feature synthesis for breast lesion classification.

Published: 2023 Sep 01
Authors: Ming Fan, Guangyao Huang, Junhong Lou, Xin Gao, Tieyong Zeng, Lihua Li
Source: IEEE Journal of Biomedical and Health Informatics

Abstract:

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) contains information on tumor morphology and physiology for breast cancer diagnosis and treatment. However, this technology requires contrast agent injection and more acquisition time than other parametric images, such as T2-weighted imaging (T2WI). Current image synthesis methods attempt to map image data from one domain to another, whereas mapping images of a single sequence into images with multiple sequences is challenging or even infeasible. Here, we propose a new approach of cross-parametric generative adversarial network (GAN)-based feature synthesis (CPGANFS) to generate discriminative DCE-MRI features from T2WI, with applications in breast cancer diagnosis. The proposed approach decodes the T2W images into latent cross-parameter features to reconstruct the DCE-MRI and T2WI features by balancing the information shared between the two. A Wasserstein GAN with a gradient penalty is employed to differentiate the T2WI-generated features from ground-truth features extracted from DCE-MRI. The synthesized DCE-MRI feature-based model achieved significantly (p = 0.036) higher prediction performance (AUC = 0.866) in breast cancer diagnosis than the model based on T2WI (AUC = 0.815). Visualization of the model shows that our CPGANFS method enhances predictive power by elevating attention to the lesion and the surrounding parenchyma, driven by the interparametric information learned from T2WI and DCE-MRI. Our proposed CPGANFS provides a framework for cross-parametric MR image feature generation from a single-sequence image, guided by an information-rich, time-series image with kinetic information. Extensive experimental results demonstrate its effectiveness, with high interpretability and improved performance in breast cancer diagnosis.
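The adversarial component named in the abstract is a Wasserstein GAN with a gradient penalty (WGAN-GP), whose critic is trained to separate T2WI-generated features from ground-truth DCE-MRI features. The following is a minimal NumPy sketch of the WGAN-GP critic loss only, not the authors' implementation: the feature dimension, the toy random "real"/"fake" feature vectors, and the linear critic (chosen so the input gradient is analytic) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16  # hypothetical radiomic-feature dimension

# Toy stand-ins for feature vectors; in the paper these would be
# ground-truth DCE-MRI features and T2WI-synthesized features.
real_feats = rng.normal(1.0, 0.5, size=(64, dim))
fake_feats = rng.normal(0.0, 0.5, size=(64, dim))

# A linear critic D(x) = x @ w + b keeps the input gradient analytic:
# dD/dx = w for every input x.
w = rng.normal(size=dim)
b = 0.0

def critic(x):
    return x @ w + b

def wgan_gp_critic_loss(real, fake, lam=10.0):
    """WGAN-GP critic loss: E[D(fake)] - E[D(real)] + lam * gradient penalty."""
    # Random interpolates between real and fake samples.
    eps = rng.uniform(size=(real.shape[0], 1))
    interp = eps * real + (1.0 - eps) * fake
    # For the linear critic, the gradient at every interpolate is exactly w,
    # so the gradient norm is constant; a neural critic would need autodiff.
    grad_norm = np.full(interp.shape[0], np.linalg.norm(w))
    gp = np.mean((grad_norm - 1.0) ** 2)  # penalize norms far from 1
    return critic(fake).mean() - critic(real).mean() + lam * gp
```

In practice the critic is a neural network, the interpolate gradients come from automatic differentiation, and the penalty pushes the critic toward the 1-Lipschitz constraint that makes the Wasserstein objective valid.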