Tackling heterogeneity in medical federated learning via aligning vision transformers.
Published: 2024 Jul 25
Authors:
Erfan Darzi, Yiqing Shen, Yangming Ou, Nanna M Sijtsema, P M A van Ooijen
Source:
ARTIFICIAL INTELLIGENCE IN MEDICINE
Abstract:
Federated learning enables training models on distributed, privacy-sensitive medical imaging data. However, data heterogeneity across participating institutions leads to reduced model performance and fairness issues, especially for underrepresented datasets. To address these challenges, we propose leveraging the multi-head attention mechanism in Vision Transformers to align the representations of heterogeneous data across clients. By focusing on the attention mechanism as the alignment objective, our approach aims to improve both the accuracy and fairness of federated learning models in medical imaging applications. We evaluate our method on the IQ-OTH/NCCD Lung Cancer dataset, simulating various levels of data heterogeneity using Latent Dirichlet Allocation (LDA). Our results demonstrate that our approach achieves competitive performance compared to state-of-the-art federated learning methods across different heterogeneity levels and improves the performance of models for underrepresented clients, promoting fairness in the federated learning setting. These findings highlight the potential of leveraging the multi-head attention mechanism to address the challenges of data heterogeneity in medical federated learning. Copyright © 2024 Elsevier B.V. All rights reserved.
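The two technical ingredients the abstract mentions can be illustrated with a minimal sketch. The helpers below (`dirichlet_partition`, `attention_alignment_penalty`) are hypothetical names, not the authors' code: the first shows the common Dirichlet-based label-skew simulation used to create non-IID client splits (the abstract's "LDA" setup, under that assumption), and the second shows one simple way an attention-map alignment objective could be scored, assuming mean squared distance between a client's attention maps and aggregated global maps.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients using per-class Dirichlet(alpha)
    proportions; smaller alpha yields more heterogeneous (non-IID) clients.
    Hypothetical helper, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        # Shuffle this class's indices, then cut them into n_clients chunks
        # whose sizes follow a Dirichlet(alpha) draw.
        idx = rng.permutation(np.flatnonzero(labels == c))
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

def attention_alignment_penalty(local_attn, global_attn):
    """Mean squared distance between a client's multi-head attention maps
    and aggregated (global) maps, shape (heads, tokens, tokens).
    Illustrative alignment score only; the paper's exact objective may differ."""
    local_attn = np.asarray(local_attn)
    global_attn = np.asarray(global_attn)
    return float(np.mean((local_attn - global_attn) ** 2))
```

With `alpha` near 0, each client ends up dominated by a few classes; with large `alpha`, splits approach a uniform (IID) partition, which is how heterogeneity "levels" can be swept in evaluation.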