Black-box adversarial attack experiments for *Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning* (FedZO). For detailed information and results, please refer to the paper.
The environment for this experiment is

- Python 3.8.5
- PyTorch

Please install the other required packages via

```shell
pip install -r requirements.txt
```
The algorithm is expected to work on multiple datasets, not limited to the standard datasets used here:

- CIFAR
- Fashion-MNIST
- MNIST
```
FedZO/
│   README.md
│   requirements.txt
│
└───blackbox_attack/
│   │   models/
│   │   save/
│   │   src/
│
└───dataset/
    │   cifar/
    │   fmnist/
    │   mnist/
```
- `blackbox_attack/`: the main folder
  - `models/`: the well-trained classification models
  - `save/`: the running results of the algorithms, with metrics including training loss and testing accuracy
  - `src/`: the algorithm code
- `dataset/`: the folder that stores all datasets
  - `cifar/`, `fmnist/`, `mnist/`: folders for the different datasets
`src/` contains multiple files:

- `alg_*.py`: the distributed zeroth-order optimization algorithms
- `attack_main.py`: the main function that performs the adversarial attack
- `load_file.py`: loads the dataset and model
- `models.py`: the PyTorch model structures
- `ObjFunc.py`: the black-box attack objective function
- `options.py`: defines the experiment arguments / hyperparameters of the algorithms
- `run_*.sh`: shell files that execute the experiments (contain the default argument settings)
- `train_model.py`: trains the DNN classification model from initialization
- `utils.py`: other utility functions
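For readers new to zeroth-order methods, the core primitive behind the `alg_*.py` algorithms is a gradient estimate built purely from function evaluations. Below is a minimal sketch of a two-point random-direction estimator; the repo's actual implementation may differ in sampling distribution, scaling, and mini-batching (`zo_gradient` and its parameters are illustrative, not names from the repo):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, seed=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages directional finite differences along random Gaussian
    directions: g = mean_i [ (f(x + mu*u_i) - f(x)) / mu * u_i ].
    This is a generic sketch; the repo's alg_*.py may use a different
    smoothing parameter, direction distribution, or averaging scheme.
    """
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x, dtype=float)
    fx = f(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        grad += (f(x + mu * u) - fx) / mu * u
    return grad / num_dirs

# Sanity check on a quadratic whose true gradient is x itself
f = lambda x: 0.5 * float(np.dot(x, x))
x = np.array([1.0, -2.0, 3.0])
g = zo_gradient(f, x, num_dirs=2000, seed=0)
print(g)  # should be close to [1, -2, 3]
```

With enough random directions the estimate concentrates around the true gradient, which is why such estimators can drive attacks against black-box models where backpropagation is unavailable.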
Once you have configured the environment, run an experiment via

```shell
bash run_[your_algorithm_name].sh
```

replacing `[your_algorithm_name]` with your desired experiment name.
If everything goes well, you will find the results in the `save/` folder. Each execution of any of the algorithms should yield two products:

- a `*.pkl` file, produced by the `pickle` package, containing the algorithm's per-iteration training loss and testing accuracy;
- a `*/` folder sharing the same prefix as the `.pkl` file, containing a visualization of the optimized adversarial image, plus a mini-batch of samples: the original images and the images perturbed by the adversarial image (to show that the perturbation is imperceptible to humans).
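The `*.pkl` file can be inspected with the standard `pickle` module. A minimal sketch, assuming the file holds a dictionary of per-iteration metrics (the field names below are illustrative; check the repo's saving code for the actual structure):

```python
import pickle

def load_results(pkl_path):
    """Load a metrics file produced by an attack run.

    Assumes the pickle holds per-iteration metrics such as training
    loss and testing accuracy; the keys used here are hypothetical.
    """
    with open(pkl_path, "rb") as f:
        return pickle.load(f)

# Demo with a synthetic file standing in for a real save/*.pkl
demo = {"train_loss": [2.3, 1.8, 1.2], "test_acc": [0.31, 0.52, 0.68]}
with open("demo_results.pkl", "wb") as f:
    pickle.dump(demo, f)

results = load_results("demo_results.pkl")
print("final training loss:", results["train_loss"][-1])
```

From there the metrics can be plotted with any tool you like (e.g. matplotlib) to reproduce loss/accuracy curves.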
If you encounter any problem, feel free to post it in the issues of this repository. You can also contact us via the following emails:

- Ziyi Yu
- Wenzhi Fang