Defending Against Poisoning Attacks in Federated Learning with Blockchain
Permanent link: https://hdl.handle.net/10037/35512
Date: 2024-03-18
Type: Journal article
Peer reviewed
Author: Dong, Nanqing; Wang, Zhipeng; Sun, Jiahao; Kampffmeyer, Michael Christian; Knottenbelt, William; Xing, Eric

Abstract
In the era of deep learning, federated learning (FL) presents a promising approach that allows multi-institutional data owners, or clients, to collaboratively train machine learning models without compromising data privacy. However, most existing FL approaches rely on a centralized server for global model aggregation, leading to a single point of failure. This makes the system vulnerable to malicious attacks when dealing with dishonest clients. In this work, we address this problem by proposing a secure and reliable FL system based on blockchain and distributed ledger technology. Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, both powered by on-chain smart contracts, to detect and deter malicious behaviors. Both theoretical and empirical analyses are presented to demonstrate the effectiveness of the proposed approach, showing that our framework is robust against malicious client-side behaviors.
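The abstract's stake-based voting and reward-and-slash mechanism can be illustrated with a minimal sketch. This is a hypothetical, simplified model, not the paper's actual on-chain smart-contract implementation: names such as `Peer`, `vote_and_settle`, and the `reward`/`slash` parameters are assumptions introduced here for illustration. Peers cast accept/reject votes on a submitted model update; the stake-weighted majority decides, and peers who voted against the majority outcome are slashed.

```python
from dataclasses import dataclass


@dataclass
class Peer:
    """A voting client identified by name, holding a stake balance."""
    name: str
    stake: float


def vote_and_settle(peers, votes, reward=1.0, slash=2.0):
    """Settle a stake-weighted vote on one submitted model update.

    votes maps each peer's name to True (accept) or False (reject).
    Peers on the winning side gain `reward` stake; peers on the
    losing side lose `slash` stake (floored at zero). Returns the
    majority decision as a bool.
    """
    accept_stake = sum(p.stake for p in peers if votes[p.name])
    reject_stake = sum(p.stake for p in peers if not votes[p.name])
    decision = accept_stake >= reject_stake
    for p in peers:
        if votes[p.name] == decision:
            p.stake += reward
        else:
            p.stake = max(0.0, p.stake - slash)
    return decision


peers = [Peer("a", 10.0), Peer("b", 10.0), Peer("c", 1.0)]
votes = {"a": True, "b": True, "c": False}
accepted = vote_and_settle(peers, votes)  # True: "a" and "b" outvote "c"
```

Because the penalty exceeds the reward, a client that repeatedly votes against the honest majority loses stake faster than it can recover it, which is the deterrence idea the abstract describes.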
Impact Statement—FL is a promising solution for utilizing multisite data while preserving users' privacy. Despite the success of integrating blockchain with FL to decentralize global model aggregation, how to protect this integration from clients with malicious intent in federated scenarios remains unclear. This article presents the first formulation of this problem, and the proposed stake-based aggregation mechanism shows robustness in detecting malicious behaviors. The results in this work not only open a new research direction in FL but can also benefit a wide variety of applications, such as finance and healthcare.
Publisher: IEEE
Citation: Dong, Wang, Sun, Kampffmeyer, Knottenbelt, Xing. Defending Against Poisoning Attacks in Federated Learning with Blockchain. IEEE Transactions on Artificial Intelligence (TAI). 2024;5(7):3743-3756
Copyright 2024 The Author(s)