FLPoison

FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning

Install / Use

/learn @vio1etus/FLPoison

README

Welcome to FLPoison

<!-- ![Repo Size](https://img.shields.io/github/repo-size/vio1etus/FLPoison) -->

Check the Wiki to get started with the project.

Features

A PyTorch implementation of poisoning attacks and defenses in federated learning.

| Category | Details |
| :-------------------: | :------------------------------------------------------------: |
| FL Algorithms | FedAvg, FedSGD, FedOpt (see fl/algorithms) |
| Data Distribution | Balanced IID, Class-imbalanced IID, Quantity-imbalanced Dirichlet Non-IID, Quantity-balanced/-imbalanced Pathological Non-IID (see data_utils.py) |
| Datasets | MNIST, FashionMNIST, EMNIST, CIFAR10, CINIC10, CIFAR100, CHMNIST, TinyImageNet (see dataset_config.yaml) |
| Models | Logistic Regression, SimpleCNN, LeNet5, ResNet series, VGG series |
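As a rough illustration of the Dirichlet non-IID distribution listed above, here is a minimal sketch of label-skewed partitioning: for each class, per-client proportions are drawn from a Dirichlet prior, with smaller `alpha` producing more skew. The function name and signature are illustrative, not the framework's actual `data_utils.py` API.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices among clients with per-class Dirichlet proportions.

    Smaller alpha -> more skewed (non-IID) class mixtures per client.
    Illustrative sketch; not the framework's actual data_utils.py API.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw per-client proportions for this class, then cut the index
        # array at the corresponding cumulative positions.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

Every sample is assigned to exactly one client; only the class mixture per client changes with `alpha`.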

For supported dataset-model pairs, see datamodel.pdf.

Federated Learning Algorithms

<!-- prettier-ignore -->

| Name | Source File | Paper |
|--|--|--|
| FedSGD | fedsgd.py | Communication-Efficient Learning of Deep Networks from Decentralized Data - AISTATS '17 |
| FedAvg | fedavg.py | Communication-Efficient Learning of Deep Networks from Decentralized Data - AISTATS '17 |
| FedOpt | fedopt.py | Adaptive Federated Optimization - arXiv '20, ICLR '21 |
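The FedAvg server step in the table above can be sketched in a few lines: the server takes a sample-size-weighted average of client parameters. This is a minimal illustration with plain Python lists, not the framework's fedavg.py implementation.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg server aggregation: weighted average of client parameters.

    client_weights: list of dicts mapping parameter name -> list of floats.
    client_sizes: number of local samples per client (aggregation weights).
    Illustrative sketch, not the framework's fedavg.py.
    """
    total = sum(client_sizes)
    agg = {}
    for name in client_weights[0]:
        agg[name] = [
            sum(w[name][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0][name]))
        ]
    return agg
```

FedSGD differs only in that clients send gradients from a single local step rather than locally trained weights; FedOpt generalizes the server step with an adaptive optimizer.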

Attacks and Defenses

Applicable algorithms include the base algorithm used in the original paper, as well as others not explicitly mentioned there but applicable under the described principles. Brackets [ ] mark algorithms that require modifications for compatibility; those modifications are also implemented in this framework. In short, all attacks and defenses have been implemented and adapted to work with three commonly used FL algorithms: FedSGD, FedAvg, and FedOpt.

Data Poisoning Attacks (DPAs)

The data poisoning attacks here are mainly targeted attacks: they aim to embed backdoors or bias into the model, misleading it into producing the attacker's intended prediction.
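As a concrete example of such a backdoor, a BadNets-style data poisoner stamps a small trigger pattern onto an image and relabels it to the attacker's target class; the model then learns to associate the trigger with that class. A minimal sketch (function name and parameters are illustrative, not the framework's badnets.py API):

```python
def apply_trigger(image, target_label, trigger_value=1.0, size=3):
    """Stamp a square trigger in the bottom-right corner of a 2-D image
    and relabel the sample to the attacker's target class.

    Illustrative BadNets-style poisoner; not the framework's badnets.py.
    """
    poisoned = [row[:] for row in image]  # shallow copy of each row
    h, w = len(poisoned), len(poisoned[0])
    for r in range(h - size, h):
        for c in range(w - size, w):
            poisoned[r][c] = trigger_value
    return poisoned, target_label
```

At inference time, any input carrying the same trigger is pushed toward the target label, while clean inputs behave normally.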

<!-- | 3DFed | [threedfed.py](attackers/threedfed.py) | [3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning](https://ieeexplore.ieee.org/document/10179401) - S&P '23| || --> <!-- prettier-ignore -->

| Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
|:---:|:---:|:---:|:---:|:---:|
| Neurotoxin | neurotoxin.py | Neurotoxin: Durable Backdoors in Federated Learning - ICML '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| Edge-case Backdoor | edgecase.py | Attack of the Tails: Yes, You Really Can Backdoor Federated Learning - NeurIPS '20 | FedOpt | FedSGD, FedOpt, [FedAvg] |
| Model Replacement Attack (Scaling Attack) | modelreplacement.py | How to Backdoor Federated Learning - AISTATS '20 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| Alternating Minimization | altermin.py | Analyzing Federated Learning Through an Adversarial Lens - ICML '19 | FedOpt | FedSGD, FedOpt, [FedAvg] |
| DBA | dba.py | DBA: Distributed Backdoor Attacks Against Federated Learning - ICLR '19 | FedOpt | FedSGD, FedOpt, [FedAvg] |
| BadNets | badnets.py | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain - NIPS-WS '17 | Centralized ML | [FedSGD, FedOpt, FedAvg] |
| Label Flipping Attack | labelflipping.py | Poisoning Attacks against Support Vector Machines - ICML '12 | Centralized ML | [FedSGD, FedOpt, FedAvg] |

Defenses Against DPAs

<!-- prettier-ignore -->

| Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
|:---:|:---:|:---:|:---:|:---:|
| FLAME | flame.py | FLAME: Taming Backdoors in Federated Learning - USENIX Security '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| DeepSight | deepsight.py | DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection - NDSS '22 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| CRFL | crfl.py | CRFL: Certifiably Robust Federated Learning against Backdoor Attacks - ICML '21 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| NormClipping | normclipping.py | Can You Really Backdoor Federated Learning - NeurIPS '20 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| FoolsGold | foolsgold.py | The Limitations of Federated Learning in Sybil Settings - RAID '20 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Auror | auror.py | Auror: Defending against poisoning attacks in collaborative deep learning systems - ACSAC '16 | FedSGD | FedSGD, [FedOpt, FedAvg] |
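Of the defenses above, norm clipping is the simplest to illustrate: the server rescales any client update whose L2 norm exceeds a threshold, bounding the influence a single (possibly malicious) client can exert on the aggregate. A minimal sketch with plain Python lists (not the framework's normclipping.py implementation):

```python
import math

def clip_update(update, threshold):
    """Scale a client update so its L2 norm is at most `threshold`.

    Illustrative norm-clipping step; not the framework's normclipping.py.
    """
    norm = math.sqrt(sum(v * v for v in update))
    if norm <= threshold:
        return update
    # Rescale: direction preserved, magnitude bounded.
    scale = threshold / norm
    return [v * scale for v in update]
```

In practice this is often combined with adding Gaussian noise to the aggregate, which is the combination studied in the NeurIPS '20 paper cited above.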

Model Poisoning Attacks (MPAs)

The model poisoning attacks here are mainly untargeted attacks: they aim to prevent the model from converging, thereby degrading its performance.
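The two simplest attacks in this family are easy to sketch: sign flipping negates (and optionally scales) the honest gradient, and the Gaussian attack replaces it with pure noise. These are minimal illustrations, not the framework's signflipping.py or gaussian.py implementations.

```python
import random

def sign_flipping(gradient, scale=1.0):
    """Return the negated (optionally scaled) gradient, pushing the
    aggregate away from the honest descent direction."""
    return [-scale * g for g in gradient]

def gaussian_attack(gradient, sigma=1.0, seed=None):
    """Replace the honest gradient with i.i.d. Gaussian noise of the
    same dimension."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, sigma) for _ in gradient]
```

The more sophisticated attacks in the table below (Min-Max, ALIE, IPM) instead craft updates that stay just inside what robust aggregators tolerate while still biasing the aggregate.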

<!-- prettier-ignore -->

| Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
|:---:|:---:|:---:|:---:|:---:|
| Mimic Attack | mimic.py | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing - ICLR '22 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Min-Max attack | min.py | Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning - NDSS '21 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Min-Sum attack | min.py | Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning - NDSS '21 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Fang attack (Adaptive attack) | fangattack.py | Local Model Poisoning Attacks to Byzantine-Robust Federated Learning - USENIX Security '20 | FedAvg | [FedSGD, FedOpt], FedAvg |
| IPM attack | ipm.py | Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation - UAI '20 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| ALIE attack | alie.py | A Little Is Enough: Circumventing Defenses For Distributed Learning - NeurIPS '19 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Sign flipping attack | signflipping.py | Asynchronous Byzantine Machine Learning (the case of SGD) - ICML '18 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Gaussian (noise) attack | gaussian.py | Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent - NeurIPS '17 | FedSGD | FedSGD, [FedOpt, FedAvg] |

Defenses Against MPAs

<!-- prettier-ignore -->

| Name | Source File | Paper | Base Algorithm | Applicable Algorithms |
|:---:|:---:|:---:|:---:|:---:|
| LASA | lasa.py | Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation - WACV '25 | FedOpt | FedOpt, [FedSGD, FedAvg] |
| FLDetector | fldetector.py | FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients - KDD '22 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| SignGuard | signguard.py | Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering - ICDCS '22 | FedSGD | FedSGD, [FedOpt, FedAvg] |
| Bucketing | bucketing.py | Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing - ICLR '22 | | |
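The bucketing defense in the last row is worth a quick sketch: client updates are randomly grouped into buckets and averaged within each bucket before a robust aggregator (e.g. a coordinate-wise median) runs, which mixes honest and Byzantine contributions and reduces heterogeneity. This is an illustrative sketch, not the framework's bucketing.py implementation.

```python
import random

def bucketing(updates, bucket_size, seed=0):
    """Randomly group client updates (lists of floats) into buckets and
    average each bucket, as a pre-step before robust aggregation.

    Illustrative sketch; not the framework's bucketing.py.
    """
    rng = random.Random(seed)
    order = list(range(len(updates)))
    rng.shuffle(order)
    bucketed = []
    for start in range(0, len(order), bucket_size):
        group = [updates[i] for i in order[start:start + bucket_size]]
        # Coordinate-wise average of the updates in this bucket.
        avg = [sum(vals) / len(group) for vals in zip(*group)]
        bucketed.append(avg)
    return bucketed
```

The bucket averages, not the raw client updates, are then fed to the robust aggregator.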
