Model poisoning in federated learning: Collusive and individual attacks

Date

2023-05

Publisher

The Ohio State University

Abstract

Federated learning is a distributed learning paradigm that enables many clients to jointly train a shared model without sharing their local training data, thereby increasing user privacy and reducing network communication costs. However, because clients retain full control over their local training processes, federated learning is especially vulnerable to so-called poisoning attacks, which aim to degrade the learned model's performance. In this work, we propose two new untargeted model poisoning attacks on federated learning. In the first attack, the attackers operate independently; in the second, the attackers collude to make the attack more effective. In our experiments, the non-collusive attack significantly reduced the learned model's accuracy compared with a no-attack scenario. The collusive attack was even more successful, leaving the model's accuracy only barely above that expected of a random guess. We tested two existing poisoning defenses, static norm-clipping and dynamic norm-clipping, to see how well they mitigated our proposed attacks, and we also measured whether the defenses reduced model performance in the no-attack scenario. We found that either defense increased model accuracy under both of our proposed attacks, although accuracy with the defenses remained lower than in the no-attack scenario. The dynamic norm-clipping defense was slightly more effective than the static one, and both defenses lowered model accuracy in the no-attack scenario only very slightly.
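
To illustrate the class of defense evaluated in the abstract, below is a minimal sketch of norm-clipped federated averaging. It assumes NumPy arrays as client model updates; the threshold name tau, the helper functions, and the median-based dynamic variant are illustrative assumptions, not the thesis's exact implementation.

import numpy as np

def clip_update(update: np.ndarray, tau: float) -> np.ndarray:
    """Scale a client's update so its L2 norm is at most tau."""
    norm = np.linalg.norm(update)
    if norm > tau:
        update = update * (tau / norm)
    return update

def aggregate_static(updates: list[np.ndarray], tau: float) -> np.ndarray:
    """Static norm-clipping: clip every update to a fixed tau, then average."""
    return np.mean([clip_update(u, tau) for u in updates], axis=0)

def aggregate_dynamic(updates: list[np.ndarray]) -> np.ndarray:
    """A plausible dynamic variant: set tau each round from the median
    update norm instead of a fixed constant (assumed, for illustration)."""
    tau = float(np.median([np.linalg.norm(u) for u in updates]))
    return np.mean([clip_update(u, tau) for u in updates], axis=0)

# Example: a poisoned update with an oversized norm is scaled down
# before averaging, limiting one attacker's influence on the global model.
honest = [np.random.normal(0, 0.1, size=10) for _ in range(9)]
poisoned = np.random.normal(0, 100.0, size=10)  # attacker's update
global_update = aggregate_static(honest + [poisoned], tau=1.0)

The design intuition matches the abstract's findings: clipping bounds how far any single update can pull the average, which blunts oversized malicious updates but cannot fully neutralize colluding attackers who keep their updates within the clipping threshold.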

Keywords

federated learning, model poisoning, poisoning attack, collusive attack, norm clipping, untargeted attack
