Last updated: May 22, 2020. This project is regularly maintained and updated, tracking various SOTA attack and defense models for Federated Learning. (Work in progress.)
- (Krum): P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent. In Advances in Neural Information Processing Systems (NIPS), 2017.
- (trimmed_mean): D. Yin, Y. Chen, K. Ramchandran, and P. Bartlett. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
- (bulyan): E. M. El Mhamdi, R. Guerraoui, and S. Rouault. The Hidden Vulnerability of Distributed Learning in Byzantium. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 3521–3530, 2018.
- (attack): M. Baruch, G. Baruch, and Y. Goldberg. A Little Is Enough: Circumventing Defenses for Distributed Learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
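To illustrate the flavor of the robust aggregation rules listed above, here is a minimal sketch of coordinate-wise trimmed mean (Yin et al., ICML 2018): for each model parameter, the server discards the β largest and β smallest client values and averages the rest. The function name and list-based interface are illustrative only and are not the API used in this repository.

```python
def trimmed_mean(updates, beta):
    # updates: list of per-client parameter vectors (lists of floats)
    # beta: number of extreme values trimmed from each end, per coordinate
    m = len(updates)
    assert m > 2 * beta, "need more clients than trimmed values"
    aggregated = []
    for coord in zip(*updates):              # iterate over coordinates across clients
        kept = sorted(coord)[beta:m - beta]  # drop beta smallest and beta largest
        aggregated.append(sum(kept) / len(kept))
    return aggregated
```

With β = 1, a single Byzantine client sending an extreme value (e.g. 100.0 below) is trimmed away before averaging:

```python
trimmed_mean([[0.0], [1.0], [2.0], [100.0]], beta=1)  # -> [1.5]
```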
Running the code:
```shell
mkdir logs
python main.py
```