Training and Model Parameters to Defend against Tabular Leakage Attacks
Permanent link
https://hdl.handle.net/10037/33826
Date
2024-05-15
Type
Master thesis
Author
Balasubramanian, Pragatheeswaran
Abstract
Federated Learning (FL) is a privacy-preserving approach for training machine learning models on datasets distributed across different organizations. This is particularly beneficial in domains such as healthcare and finance, where user data is often sensitive and tabular (e.g., hospital records and financial transactions). However, recent research such as TabLeak has demonstrated attacks that exploit information leakage in model updates to reconstruct sensitive user data from tabular FL systems.
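The leakage mechanism behind such attacks can be illustrated with a minimal gradient-inversion sketch (not the thesis's or TabLeak's actual implementation): the attacker optimizes a dummy row so that its gradient matches the gradient shared by a client. The model architecture, feature dimension, and optimizer settings below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative model over one-hot/encoded tabular rows (dimensions assumed).
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

# Gradient observed from a client's update (simulated here from a "true" row).
true_x = torch.randn(1, 32)
true_y = torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(true_x), true_y), model.parameters())

# The attacker optimizes a dummy row so its gradient matches the observed one.
dummy_x = torch.randn(1, 32, requires_grad=True)
dummy_y = true_y  # labels are often recoverable separately; assumed known here
opt = torch.optim.Adam([dummy_x], lr=0.1)

for _ in range(500):
    opt.zero_grad()
    grads = torch.autograd.grad(loss_fn(model(dummy_x), dummy_y),
                                model.parameters(), create_graph=True)
    grad_diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    grad_diff.backward()
    opt.step()
# After convergence, dummy_x approximates the client's private row.
```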
This thesis addresses these vulnerabilities by investigating training and model parameters as defensive measures against leakage attacks on tabular data.
We conducted experiments to analyze how modifying these parameters within the Federated Learning training process impacts the attacker's ability to reconstruct data.
Our findings demonstrate that specific parameter configurations, including data encoding techniques, batch updates, epoch adjustments, and the use of sequential Peer-to-Peer (P2P) architectures, can significantly hinder reconstruction attacks on tabular data. These results contribute to the development of more robust, privacy-preserving FL systems, especially for applications relying on sensitive tabular data.
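A minimal sketch of why batch and epoch settings matter, under assumed names and hyperparameters (the function `local_update` and its defaults are illustrative, not the thesis's implementation): when a client shares only a weight delta accumulated over several local epochs and mini-batches rather than a single per-example gradient, the update blends many rows and optimization steps, which weakens per-row reconstruction.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def local_update(global_model, data, labels, batch_size=64, epochs=5, lr=0.01):
    """Hypothetical client-side update: only the aggregated weight delta is shared."""
    model = copy.deepcopy(global_model)
    loader = DataLoader(TensorDataset(data, labels), batch_size=batch_size, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                      # more local epochs ...
        for x, y in loader:                      # ... and larger batches mean the
            opt.zero_grad()                      # shared delta mixes gradients from
            loss_fn(model(x), y).backward()      # many rows and many SGD steps,
            opt.step()                           # obscuring any single row's signal
    # Weight delta sent to the server (or the next peer in a sequential P2P ring).
    return [p_new.detach() - p_old.detach()
            for p_new, p_old in zip(model.parameters(), global_model.parameters())]
```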
Publisher
UiT The Arctic University of Norway
Copyright 2024 The Author(s)