Federated Learning With Uncertainty For Different Tasks
Date: 2024-04-29
Author: Özer, Yekta Olgun
Open access
Abstract
Classical machine learning paradigms typically centralize all data on a single server for training, which raises concerns about data confidentiality, storage costs, and the bandwidth required for data transfer. Federated Learning emerged to address these challenges. Although it mitigates these issues, it also introduces drawbacks of its own. In traditional machine learning, data is preprocessed before model training so that the resulting model yields reliable and consistent outcomes. In Federated Learning, however, direct access to input data is unavailable, which hinders successful model training and results in lower accuracy than traditional methods, particularly for inputs with inherent noise.
This thesis proposes a new method for computing loss, called 'Uncertainty Loss Calculation for Federated Learning.' The method was originally proposed for multi-task learning and has been presented theoretically, but it lacks empirical validation. The loss computation integrates both regression and classification learning methodologies. In this study, the equation is deconstructed into two parts, allowing two distinct models to be constructed, trained, and tested independently.
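The abstract does not reproduce the equation itself. A minimal sketch of what such an uncertainty-weighted combination of a regression and a classification loss could look like, assuming the homoscedastic-uncertainty formulation common in the multi-task learning literature (one learned log-variance per task; the function name and parameters here are illustrative, not taken from the thesis):

```python
import math

def uncertainty_loss(mse_loss, ce_loss, log_var_reg, log_var_cls):
    """Combine a regression loss (MSE) and a classification loss
    (cross-entropy) using a learned log-variance per task.

    Each task loss is scaled by its precision exp(-log_var); the
    additive log-variance terms regularize the weights so the model
    cannot trivially down-weight both tasks to zero.
    """
    return (0.5 * math.exp(-log_var_reg) * mse_loss + 0.5 * log_var_reg
            + math.exp(-log_var_cls) * ce_loss + 0.5 * log_var_cls)
```

Splitting this expression at the task boundary yields the two independent parts described above: a regression term and a classification term, each with its own uncertainty weight.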
To assess the effectiveness of this new approach, we conducted computational comparisons using Mean Squared Error for regression and Cross-Entropy Loss for classification. Based on these results, our study aims to determine whether the method presented in prior literature has a practical application.
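For reference, the two baseline losses used in the comparison have standard definitions; a plain-Python sketch (the exact implementations and any framework used in the thesis are not specified in the abstract):

```python
import math

def mean_squared_error(y_true, y_pred):
    """MSE baseline for the regression task: mean of squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true_onehot, y_prob):
    """Cross-entropy baseline for the classification task, for one
    sample with one-hot targets and predicted class probabilities."""
    eps = 1e-12  # clamp probabilities to avoid log(0)
    return -sum(t * math.log(max(p, eps))
                for t, p in zip(y_true_onehot, y_prob))
```

A comparison against the uncertainty-based loss would then train otherwise identical models under each objective and contrast their test metrics.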