This project is a fork of ashwinrj/federated-learning-pytorch.


License: MIT License


Federated-Learning (PyTorch)

Implementation of the vanilla federated learning paper: Communication-Efficient Learning of Deep Networks from Decentralized Data.

Experiments are produced on MNIST, Fashion MNIST and CIFAR-10 (both IID and non-IID). In the non-IID case, the data amongst the users can be split equally or unequally.
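For the non-IID split, the underlying paper sorts the training data by label, cuts it into shards, and deals a few shards to each user, so each user sees only a couple of classes. A minimal sketch of such a shard-based partition (the helper name and details are illustrative, not the repo's actual sampling code):

```python
import random

def noniid_shards(labels, num_users, shards_per_user=2):
    """Paper-style non-IID split: sort example indices by label,
    cut them into equal shards, and deal shards_per_user shards
    to each user. Illustrative sketch only."""
    idxs = sorted(range(len(labels)), key=lambda i: labels[i])
    num_shards = num_users * shards_per_user
    shard_size = len(idxs) // num_shards
    shards = [idxs[s * shard_size:(s + 1) * shard_size]
              for s in range(num_shards)]
    random.shuffle(shards)
    # Each user's index set is the concatenation of its dealt shards.
    return {u: sum(shards[u * shards_per_user:(u + 1) * shards_per_user], [])
            for u in range(num_users)}
```

Because indices are sorted by label before sharding, each user ends up with data from at most `shards_per_user` classes, which is what makes the split non-IID.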

Since the purpose of these experiments is to illustrate the effectiveness of the federated learning paradigm, only simple models such as MLP and CNN are used.

Requirements

Install all the packages from requirements.txt.

  • Python 3
  • PyTorch
  • Torchvision

Data

  • Download the train and test datasets manually, or they will be downloaded automatically from torchvision datasets.
  • Experiments are run on MNIST, Fashion MNIST and CIFAR-10.
  • To use your own dataset: move it to the data directory and write a wrapper over the PyTorch Dataset class.
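A wrapper only needs the two methods PyTorch's DataLoader relies on, `__len__` and `__getitem__`. A minimal duck-typed sketch (class name and fields are illustrative; in the project you would subclass torch.utils.data.Dataset):

```python
class CustomDataset:
    """Hypothetical wrapper pairing raw samples with labels.
    In the real project this would subclass torch.utils.data.Dataset,
    which requires exactly these two dunder methods."""

    def __init__(self, samples, labels, transform=None):
        assert len(samples) == len(labels)
        self.samples = samples
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        x = self.samples[idx]
        if self.transform is not None:
            x = self.transform(x)
        return x, self.labels[idx]
```

A DataLoader can then batch and shuffle instances of this class exactly as it does the built-in torchvision datasets.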

Running the experiments

The baseline experiment trains the model in the conventional way.

  • To run the baseline experiment with MNIST on MLP using CPU:
python src/baseline_main.py --model=mlp --dataset=mnist --epochs=10
  • Or to run it on GPU (e.g., if gpu:0 is available):
python src/baseline_main.py --model=mlp --dataset=mnist --gpu=0 --epochs=10

The federated experiment involves training a global model using many local models.

  • To run the federated experiment with CIFAR on CNN (IID):
python src/federated_main.py --model=cnn --dataset=cifar --gpu=0 --iid=1 --epochs=10
  • To run the same experiment under non-IID condition:
python src/federated_main.py --model=cnn --dataset=cifar --gpu=0 --iid=0 --epochs=10
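In each global round, the selected clients train locally and the server replaces the global weights with the average of the clients' weights (FedAvg, the paper's core algorithm). A sketch of the equal-weight average over plain Python "state dicts", assuming equally sized local datasets (the repo operates on torch tensors, but the arithmetic is the same):

```python
def average_weights(local_weights):
    """FedAvg aggregation: element-wise mean of the clients' weights.
    local_weights is a list of dicts mapping layer name -> list of
    floats. Sketch only; real code averages torch state_dict tensors."""
    avg = {}
    for key in local_weights[0]:
        # Zip the same position across all clients, then average.
        stacked = zip(*(w[key] for w in local_weights))
        avg[key] = [sum(vals) / len(local_weights) for vals in stacked]
    return avg
```

With unequal local dataset sizes, the paper weights each client's contribution by its share of the total data rather than averaging uniformly.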

You can change the default values of other parameters to simulate different conditions. Refer to the options section.

Options

The default values for the various parameters passed to the experiment are given in options.py. Details of some of those parameters:

  • --dataset: Default: 'mnist'. Options: 'mnist', 'fmnist', 'cifar'
  • --model: Default: 'mlp'. Options: 'mlp', 'cnn'
  • --gpu: Default: None (runs on CPU). Can also be set to the specific gpu id.
  • --epochs: Number of rounds of training.
  • --lr: Learning rate set to 0.01 by default.
  • --verbose: Detailed log outputs. Activated by default, set to 0 to deactivate.
  • --seed: Random Seed. Default set to 1.

Federated Parameters

  • --iid: Distribution of data amongst users. Default set to IID. Set to 0 for non-IID.
  • --num_users: Number of users. Default is 100.
  • --frac: Fraction of users to be used for federated updates. Default is 0.1.
  • --local_ep: Number of local training epochs on each client. Default is 10.
  • --local_bs: Batch size of local updates on each client. Default is 10.
  • --unequal: Used in non-iid setting. Option to split the data amongst users equally or unequally. Default set to 0 for equal splits. Set to 1 for unequal splits.
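Per global round, only a fraction --frac of the --num_users clients actually participate. A sketch of how such sampling is typically done (function name is hypothetical; the repo may sample differently):

```python
import random

def sample_clients(num_users=100, frac=0.1, seed=1):
    """Pick max(frac * num_users, 1) client ids for one global round,
    mirroring the --num_users / --frac / --seed options above.
    Illustrative sketch only."""
    rng = random.Random(seed)
    m = max(int(frac * num_users), 1)
    return rng.sample(range(num_users), m)
```

The max(..., 1) guard ensures at least one client trains per round even when frac * num_users rounds down to zero.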

Results on MNIST

Baseline Experiment:

The experiment involves training a single model in the conventional way.

Parameters:

  • Optimizer: SGD
  • Learning Rate: 0.01

Table 1: Test accuracy after training for 10 epochs:

Model | Test Acc
MLP   | 92.71%
CNN   | 98.42%

Federated Experiment:

The experiment involves training a global model in the federated setting.

Federated parameters (default values):

  • Fraction of users (C): 0.1
  • Local Batch size (B): 10
  • Local Epochs (E): 10
  • Optimizer: SGD
  • Learning Rate: 0.01

Table 2: Test accuracy after training for 10 global epochs with the above federated parameters:

Model | IID    | Non-IID (equal)
MLP   | 88.38% | 73.49%
CNN   | 97.28% | 75.94%

Further Readings

Papers:

Blog Posts:
