Show simple item record

dc.contributor.author: Dong, Nanqing
dc.contributor.author: Wang, Zhipeng
dc.contributor.author: Sun, Jiahao
dc.contributor.author: Kampffmeyer, Michael Christian
dc.contributor.author: Knottenbelt, William
dc.contributor.author: Xing, Eric
dc.date.accessioned: 2024-11-07T09:49:50Z
dc.date.available: 2024-11-07T09:49:50Z
dc.date.issued: 2024-03-18
dc.description.abstract: In the era of deep learning, federated learning (FL) presents a promising approach that allows multi-institutional data owners, or clients, to collaboratively train machine learning models without compromising data privacy. However, most existing FL approaches rely on a centralized server for global model aggregation, creating a single point of failure. This makes the system vulnerable to malicious attacks when dealing with dishonest clients. In this work, we address this problem by proposing a secure and reliable FL system based on blockchain and distributed ledger technology. Our system incorporates a peer-to-peer voting mechanism and a reward-and-slash mechanism, both powered by on-chain smart contracts, to detect and deter malicious behaviors. Both theoretical and empirical analyses are presented to demonstrate the effectiveness of the proposed approach, showing that our framework is robust against malicious client-side behaviors. Impact Statement: FL has been a promising solution for utilizing multisite data while preserving users' privacy. Despite the success of integrating blockchain with FL to decentralize global model aggregation, how to protect this integration from clients with malicious intent in federated scenarios remains unclear. This article presents the first formulation of this problem, and the proposed stake-based aggregation mechanism shows robustness in detecting malicious behaviors. The results in this work not only pose a new research direction in FL but can also benefit a wide variety of applications such as finance and healthcare.
dc.identifier.citation: Dong, Wang, Sun, Kampffmeyer, Knottenbelt, Xing. Defending Against Poisoning Attacks in Federated Learning with Blockchain. IEEE Transactions on Artificial Intelligence (TAI). 2024;5(7):3743-3756
dc.identifier.cristinID: FRIDAID 2287985
dc.identifier.doi: 10.1109/TAI.2024.3376651
dc.identifier.issn: 2691-4581
dc.identifier.uri: https://hdl.handle.net/10037/35512
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.journal: IEEE Transactions on Artificial Intelligence (TAI)
dc.rights.accessRights: openAccess
dc.rights.holder: Copyright 2024 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.title: Defending Against Poisoning Attacks in Federated Learning with Blockchain
dc.type.version: publishedVersion
dc.type: Journal article
dc.type: Tidsskriftartikkel (Norwegian: Journal article)
dc.type: Peer reviewed


