A PyTorch Implementation of Federated Learning http://doi.org/10.5281/zenodo.4321561
This is a partial reproduction of the paper Communication-Efficient Learning of Deep Networks from Decentralized Data. So far, only experiments on MNIST and CIFAR-10 (both IID and non-IID) are implemented.
Note: the scripts will be slow without a parallel-computing implementation.
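The non-IID partition used in the paper (and typically in reproductions of it) sorts the training data by label, cuts it into label-homogeneous shards, and deals a few shards to each client, so every client sees only a couple of classes. A minimal sketch of that idea, where the function name and shard counts are illustrative, not taken from this repository:

```python
import random

def noniid_shards(labels, num_clients, shards_per_client=2):
    """Shard-based non-IID split: sort sample indices by label,
    cut them into shards, and assign a few shards per client."""
    idx = sorted(range(len(labels)), key=lambda i: labels[i])
    num_shards = num_clients * shards_per_client
    shard_size = len(idx) // num_shards
    shards = [idx[i * shard_size:(i + 1) * shard_size]
              for i in range(num_shards)]
    random.shuffle(shards)  # random shard-to-client assignment
    return {c: sum(shards[c * shards_per_client:(c + 1) * shards_per_client], [])
            for c in range(num_clients)}

# Example: 20 samples, two classes, two clients -> 10 samples each,
# with each client's data dominated by at most two label blocks.
parts = noniid_shards([0] * 10 + [1] * 10, num_clients=2)
```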
The repository provides scripts for standalone training of the MLP and CNN models and for federated learning with MLP and CNN. See the available arguments in options.py. For example:
python main_fed.py --dataset mnist --iid --num_channels 1 --model cnn --epochs 50 --gpu 0
Use --all_clients for averaging over all client models.
NB: for CIFAR-10, num_channels must be 3.
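The aggregation step behind FedAvg (and the `--all_clients` flag) is an elementwise mean of the client models' weights. A minimal sketch using plain Python dicts in place of PyTorch state dicts; the function name is mine, and this unweighted mean matches equal-sized client datasets (the paper weights each client by its dataset size):

```python
import copy

def fed_avg(state_dicts):
    """Elementwise average of client model parameters (FedAvg step).
    Each state dict maps parameter names to values."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        for sd in state_dicts[1:]:
            avg[key] += sd[key]
        avg[key] /= len(state_dicts)
    return avg

# Two toy "clients", each with one weight and one bias:
clients = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}]
print(fed_avg(clients))  # -> {'w': 2.0, 'b': 1.0}
```

In the real script the same loop runs over `model.state_dict()` tensors, and the averaged dict is loaded back into the global model before the next communication round.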
Results are shown in Table 1 and Table 2, with parameters C=0.1 (fraction of clients sampled per round), B=10 (local batch size), and E=5 (local epochs).
Table 1. Results after 10 epochs of training with a learning rate of 0.01
| Model | Acc. of IID | Acc. of Non-IID |
| ----- | ----------- | --------------- |
| FedAVG-MLP | 94.57% | 70.44% |
| FedAVG-CNN | 96.59% | 77.72% |
Table 2. Results after 50 epochs of training with a learning rate of 0.01
| Model | Acc. of IID | Acc. of Non-IID |
| ----- | ----------- | --------------- |
| FedAVG-MLP | 97.21% | 93.03% |
| FedAVG-CNN | 98.60% | 93.81% |
Acknowledgements to youkaichao.
McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Artificial Intelligence and Statistics (AISTATS), 2017.
Shaoxiong Ji. (2018, March 30). A PyTorch Implementation of Federated Learning. Zenodo. http://doi.org/10.5281/zenodo.4321561