Federated learning (FL) enables collaborative model training without centralizing
data, which is particularly relevant in environments with sensitive data,
such as healthcare. In this thesis, we compare the performance of centralized
learning and FL on several medical datasets and systematically examine the
impact of the choice of federated strategy and of the data distribution across
clients. In addition, we evaluate the added value of FL by comparing models
trained solely on the local data of individual clients with models that
participate in FL. The comparison is conducted on linear models, neural networks, and
XGBoost. The evaluation includes classification and regression tasks with
appropriate performance measures. The results show that, with proper configuration,
FL can match the performance of centralized learning while also
outperforming models trained only on clients’ local data. This confirms
that FL is a practical alternative in scenarios where sharing raw data is
not feasible, particularly in the medical context.
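The federated strategies compared above all follow the same round-based pattern; as an illustrative sketch (not the thesis's actual experimental code), the most common strategy, federated averaging (FedAvg), can be written for a simple linear least-squares model as follows. The function names, learning rate, and synthetic two-client data are assumptions for the example only.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    linear least-squares objective (stand-in for a client model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(client_data, rounds=30, dim=2):
    """FedAvg: each round, every client trains locally starting from the
    shared global model; the server then averages the local models,
    weighted by each client's dataset size. Raw data never leaves the client."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_update(w_global, X, y))
            sizes.append(len(y))
        w_global = np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
    return w_global

# Synthetic example: two clients drawn from the same linear relation.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 100):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = fedavg(clients)
```

With identically distributed (IID) clients, the averaged model converges to the same solution centralized training would find; under non-IID splits, the gap between the two is exactly what the data-distribution experiments in the thesis quantify.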