FedMLB
Official implementation of "Multi-Level Branched Regularization for Federated Learning" in ICML 2022
Multi-Level Branched Regularization for Federated Learning
Jinkyu Kim*, Geeho Kim*, Bohyung Han (* equal contribution) in ICML 2022
:gear: Setup
Dependencies
This repository is implemented in PyTorch, and the required packages are specified in `environment.yaml`.
Environment
We tested the code in an Anaconda virtual environment on Ubuntu 16.04.
Create and activate the virtual environment with:
```
conda env create -f environment.yaml -n fedmlb
conda activate fedmlb
```
:computer: Training models from scratch
Non-IID data (Dirichlet 0.3) on CIFAR-100
100 clients, 5% participation rate, 1,000 communication rounds, 5 local epochs, and ResNet-18:
```
python3 federated_train.py --cuda_visible_device 0 --method FedMLB --arch=ResNet18_FedMLB --mode dirichlet --dirichlet_alpha 0.3 --global_epochs 1000 --local_epochs 5 --lr 0.1 --learning_rate_decay 0.998 --weight_decay 1e-3 --seed 0 --set CIFAR100 --workers 8 --participation_rate=0.05 --num_of_clients=100 --batch_size=50
```
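With `--mode dirichlet`, each class of CIFAR-100 is split across clients according to a Dirichlet prior; a smaller `--dirichlet_alpha` (e.g. 0.3) makes client label distributions more skewed. A minimal NumPy sketch of this style of partitioning (a hypothetical helper for illustration, not the repository's own code):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients with a per-class Dirichlet prior.

    For each class, a Dirichlet(alpha) vector decides what fraction of
    that class's samples each client receives; smaller alpha -> more skew.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportions over clients for this class.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        # Cumulative cut points into this class's shuffled samples.
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices
```

Every sample is assigned to exactly one client, so the union of the client index lists recovers the full dataset while per-client label histograms remain heterogeneous.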
:label: Citation
If you use our code in your work, please cite our paper:
```bibtex
@InProceedings{Kim2022Multi,
  author       = {Kim, Jinkyu and Kim, Geeho and Han, Bohyung},
  title        = {Multi-Level Branched Regularization for Federated Learning},
  booktitle    = {International Conference on Machine Learning},
  year         = {2022},
  organization = {PMLR}
}
```
Acknowledgement
Part of our code is borrowed from Federated-Learning-PyTorch.
