FEDL
FEDL: Federated Learning algorithm using TensorFlow (IEEE/ACM Transactions on Networking, 2021)
Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation (Accepted by IEEE/ACM Transactions on Networking (TON))
This repository is for the Experiment Section of the paper: "Federated Learning over Wireless Networks: Convergence Analysis and Resource Allocation"
Authors: Canh T. Dinh, Nguyen H. Tran, Minh N. H. Nguyen, Choong Seon Hong, Wei Bao, Albert Zomaya, Vincent Gramoli
Paper Link: https://arxiv.org/abs/1910.13067
This version is developed based on the code from: https://github.com/litian96/FedProx
We develop the new version using Pytorch at: https://github.com/CharlieDinh/FEDL_pytorch
Software requirements:
- numpy, scipy, tensorflow, Pillow, matplotlib.
- To download the dependencies: pip3 install -r requirements.txt
- The code can be run on any PC.
Dataset: We use 3 datasets: MNIST, FEMNIST, and Synthetic.
- To generate non-IID MNIST data: in folder data/mnist, run: "python3 generate_niid_mnist_100users.py"
- To generate FEMNIST data: first, in folder data/nist, run preprocess.sh to obtain all raw data (or download it from the link below), then run: "python3 generate_niid_femnist_100users.py"
- To generate non-IID linear synthetic data: in folder data/linear_synthetic, run: "python3 generate_linear_regession.py"
- The datasets are available to download at: https://drive.google.com/drive/folders/1Q91NCGcpHQjB3bXJTvtx5qZ-TrIZ9WzT?usp=sharing
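For intuition, a minimal sketch of how non-IID synthetic linear-regression data with a tunable conditioning parameter $\rho$ could be generated (this is an illustration, not the repo's generate_linear_regession.py; the function name, default sizes, and noise level are assumptions):

```python
import numpy as np

def generate_synthetic_linear(num_users=100, dim=40, samples_per_user=50,
                              rho=1.4, noise_std=0.1, seed=0):
    """Illustrative non-IID linear-regression data: per-feature variances
    are spread between 1 and rho (so rho acts like a condition number),
    and every user shares one ground-truth weight vector."""
    rng = np.random.default_rng(seed)
    scales = np.sqrt(np.linspace(1.0, rho, dim))  # per-feature std devs
    w_true = rng.normal(size=dim)
    data = {}
    for u in range(num_users):
        X = rng.normal(size=(samples_per_user, dim)) * scales
        y = X @ w_true + noise_std * rng.normal(size=samples_per_user)
        data[u] = (X, y)
    return data, w_true

users, w_true = generate_synthetic_linear()
```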
Produce figures in the paper:
- The main file "main.py" runs all experiments, and 3 files ("main_mnist.py", "main_nist.py", "main_linear.py") produce the figures for the 3 datasets. Note that each experiment is run at least 10 times and the results are averaged.
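The averaging step amounts to stacking the per-run curves and taking a per-round mean. A small sketch (the synthetic curves here stand in for loss histories that would in practice be loaded from the per-run result files):

```python
import numpy as np

# Hypothetical per-run training-loss curves: 10 runs x 200 rounds.
num_runs, num_rounds = 10, 200
rng = np.random.default_rng(0)
curves = [1.0 / np.arange(1, num_rounds + 1) + 0.01 * rng.normal(size=num_rounds)
          for _ in range(num_runs)]

# Average over the run axis to get one curve per experiment.
avg_curve = np.mean(np.stack(curves), axis=0)  # shape: (num_rounds,)
```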
- To produce the experiments for Linear Regression:
- In folder data/linear_synthetic, before generating the linear dataset, configure the value of $\rho$, for example rho = 1.4 (in the paper we use 3 different values of $\rho$: 1.4, 2, and 5), then run: "python3 generate_linear_regession_update.py" to generate data for each value of $\rho$.
- To find the optimal solution: in folder data/linear_synthetic, run: "python3 optimal_solution_finding_update.py" (the value of $\rho$ also needs to be configured to find the optimal solution).
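For linear regression, the global optimum can be computed in closed form by pooling all users' data and solving one least-squares problem. A self-contained sketch of that idea (not the repo's optimal_solution_finding_update.py; the data layout is an assumption):

```python
import numpy as np

def pooled_optimum(data):
    """Closed-form least-squares solution over the union of all users' data.
    `data` maps user id -> (X, y)."""
    X = np.vstack([Xu for Xu, _ in data.values()])
    y = np.concatenate([yu for _, yu in data.values()])
    w_star, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w_star

# Toy check with two noiseless "users" sharing the same true weights.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
data = {}
for u in range(2):
    X = rng.normal(size=(30, 2))
    data[u] = (X, X @ w_true)
assert np.allclose(pooled_optimum(data), w_true)
```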
- To generate results for the training process, run the commands below:
<pre><code>
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.01 --rho 1.4 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.03 --rho 1.4 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.05 --rho 1.4 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.07 --rho 1.4 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.01 --rho 2 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.03 --rho 2 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.05 --rho 2 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.07 --rho 2 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.01 --rho 5 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.03 --rho 5 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.05 --rho 5 --times 1
python3 -u main.py --dataset linear_synthetic --optimizer fedfedl --model linear.py --num_rounds 200 --clients_per_round 100 --batch_size 0 --num_epochs 20 --learning_rate 0.04 --hyper_learning_rate 0.07 --rho 5 --times 1
</code></pre>
- All the training loss, testing accuracy, and training accuracy results will be stored as h5py files in the folder "results".
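The stored curves can be inspected with h5py. A minimal round-trip sketch (the file name and dataset key below are illustrative assumptions, not the repo's actual result layout):

```python
import h5py
import numpy as np

# Write a dummy result file, then read the curve back.
# "rs_train_loss" is an assumed key name for illustration only.
train_loss = np.linspace(1.0, 0.1, 200)
with h5py.File("demo_result.h5", "w") as f:
    f.create_dataset("rs_train_loss", data=train_loss)

with h5py.File("demo_result.h5", "r") as f:
    loaded = f["rs_train_loss"][:]

assert np.allclose(loaded, train_loss)
```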
- To produce the figure for linear regression, run: <pre><code>python3 main_linear.py</code></pre>
- Note that all users are selected in the synthetic-data experiments, so each synthetic case only needs to be run once.
- For MNIST, run the commands below:
<pre><code>
python3 -u main.py --dataset mnist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 20 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 0.2 --rho 0 --times 10
python3 -u main.py --dataset mnist --optimizer fedsgd --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 20 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 0 --rho 0 --times 10
python3 -u main.py --dataset mnist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 40 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 0.2 --rho 0 --times 10
python3 -u main.py --dataset mnist --optimizer fedsgd --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 40 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 0 --rho 0 --times 10
python3 -u main.py --dataset mnist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 0 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 0.2 --rho 0 --times 10
python3 -u main.py --dataset mnist --optimizer fedsgd --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 0 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 0 --rho 0 --times 10
python3 -u main.py --dataset mnist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 0 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 2 --rho 0 --times 10
python3 -u main.py --dataset mnist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 0 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 4 --rho 0 --times 10
</code></pre>
- To produce the figure for the MNIST experiment, run: <pre><code>python3 main_mnist.py</code></pre>
- For FEMNIST, run the commands below:
<pre><code>
python3 -u main.py --dataset nist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 20 --num_epochs 10 --learning_rate 0.003 --hyper_learning_rate 0.2 --rho 0 --times 10
python3 -u main.py --dataset nist --optimizer fedsgd --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 20 --num_epochs 10 --learning_rate 0.003 --hyper_learning_rate 0 --rho 0 --times 10
python3 -u main.py --dataset nist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 0 --num_epochs 10 --learning_rate 0.015 --hyper_learning_rate 0.5 --rho 0 --times 10
python3 -u main.py --dataset nist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 20 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 0.2 --rho 0 --times 10
python3 -u main.py --dataset nist --optimizer fedsgd --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 20 --num_epochs 20 --learning_rate 0.003 --hyper_learning_rate 0 --rho 0 --times 10
python3 -u main.py --dataset nist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 0 --num_epochs 20 --learning_rate 0.015 --hyper_learning_rate 0.5 --rho 0 --times 10
python3 -u main.py --dataset nist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 20 --num_epochs 40 --learning_rate 0.003 --hyper_learning_rate 0.2 --rho 0 --times 10
python3 -u main.py --dataset nist --optimizer fedsgd --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 20 --num_epochs 40 --learning_rate 0.003 --hyper_learning_rate 0 --rho 0 --times 10
python3 -u main.py --dataset nist --optimizer fedfedl --model mclr.py --num_rounds 800 --clients_per_round 10 --batch_size 0 --num_epochs 40 --learning_rate 0.015 --hyper_learning_rate 0.5 --rho 0 --times 10
</code></pre>