Federated Explainable Multimodal Deep Learning for Anaemia Subtype Prediction and Fairness

Authors
Shreya Sharma and Deepak Dudeja
Published in
Vol 1, Issue 2, 2025

Abstract

Diagnosing anaemia accurately remains challenging, particularly in differentiating subtypes (e.g., iron deficiency vs. thalassaemia) in decentralized settings. Current Deep Learning (DL) models are limited by unimodal inputs (images only), poor generalizability across diverse populations, and a lack of transparency regarding inherent biases. This paper proposes a Federated Explainable Multimodal Deep Learning (FEM-DL) framework for anaemia subtype prediction. We fuse non-invasive biometric images (e.g., conjunctiva, retina) with tabular clinical data (demographics, blood history) using a Transformer-based fusion network. Training is conducted via Federated Learning (FL) across multiple centers to preserve data privacy and enhance cross-domain robustness. Finally, we integrate explainable AI (XAI) methods (SHAP and Grad-CAM) to audit model fairness across protected subgroups (e.g., ethnicity, gender) and to provide interpretable feature attributions, establishing a new standard for ethical and globally scalable AI diagnostics.
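
To make the fusion step concrete, the sketch below encodes each modality into a shared embedding and lets a Transformer encoder attend across the two modality tokens before classification. This is a minimal PyTorch sketch under our own assumptions (the ResNet backbone, two-token layout, pooling, and all dimensions are illustrative choices, not the paper's released architecture).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FusionNet(nn.Module):
    """Illustrative Transformer-based fusion of image and tabular inputs."""

    def __init__(self, num_tabular: int, num_classes: int, d_model: int = 128):
        super().__init__()
        backbone = resnet18(weights=None)          # image branch (conjunctiva/retina)
        backbone.fc = nn.Linear(backbone.fc.in_features, d_model)
        self.image_encoder = backbone
        self.tabular_encoder = nn.Sequential(      # clinical/demographic branch
            nn.Linear(num_tabular, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)  # anaemia subtype logits

    def forward(self, image, tabular):
        # One token per modality: shape (batch, 2, d_model)
        tokens = torch.stack(
            [self.image_encoder(image), self.tabular_encoder(tabular)], dim=1
        )
        fused = self.fusion(tokens).mean(dim=1)    # pool across modality tokens
        return self.head(fused)
```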
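For the federated training step, a standard baseline is Federated Averaging (FedAvg); the abstract specifies multi-center FL but not the aggregation rule, so FedAvg here is an assumption. The sketch shows one aggregation round over parameter dictionaries returned by locally trained client models.

```python
import copy

def fedavg(global_model, client_states, client_sizes):
    """One FedAvg round: sample-size-weighted average of client state_dicts.

    client_states: list of state_dicts after local training at each center;
    client_sizes: number of local samples per center (weights the average).
    """
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        # .float() handles integer buffers (e.g., BatchNorm counters); they
        # are cast back to their original dtype by load_state_dict.
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    global_model.load_state_dict(avg)
    return global_model
```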
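Finally, the fairness audit can be grounded in simple subgroup metrics computed alongside the SHAP and Grad-CAM attributions. The helper below is a hypothetical illustration, not the paper's stated protocol: it reports macro recall per protected subgroup, and large gaps between subgroups flag candidates for closer XAI inspection.

```python
import numpy as np
from sklearn.metrics import recall_score

def subgroup_recall(y_true, y_pred, groups):
    """Macro recall per protected subgroup (e.g., ethnicity or gender).

    All inputs are 1-D NumPy arrays of equal length; a large recall gap
    between subgroups signals potential bias worth auditing with XAI.
    """
    return {
        g: recall_score(y_true[groups == g], y_pred[groups == g], average="macro")
        for g in np.unique(groups)
    }
```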