RoFL: Attestable Robustness for Federated Learning

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Logging
  5. License
  6. Contact

About the Project

This framework is an end-to-end implementation of the protocol proposed in RoFL: Attestable Robustness for Secure Federated Learning. The protocol combines secure aggregation based on commitments with zero-knowledge proofs to prove constraints on client updates. One constraint that we show to be effective against some types of backdoor attacks is bounding the norm of client updates. We evaluate this constraint using the federated learning analysis framework, which can be used to run experiments analysing the effectiveness of various federated learning backdoor attacks and defenses.
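To illustrate the idea behind the norm-bound constraint, the following sketch checks a client update against a bound in plaintext. This helper is not part of the repository: in the actual protocol the check is enforced via zero-knowledge range proofs over committed updates, so the server never sees the plaintext values.

```python
def within_bound(update, bound, norm="linf"):
    """Illustrative plaintext check of the norm-bound constraint.

    In RoFL itself this property is proven in zero knowledge over
    commitments; this function only shows *what* is being bounded.
    """
    if norm == "linf":
        return max(abs(x) for x in update) <= bound
    if norm == "l2":
        return sum(x * x for x in update) ** 0.5 <= bound
    raise ValueError(f"unknown norm: {norm}")

honest = [0.01, -0.02, 0.005]
backdoored = [0.01, 5.0, 0.005]  # one coordinate scaled up by an attacker

print(within_bound(honest, 0.1))       # prints True
print(within_bound(backdoored, 0.1))   # prints False
```

A scaled-up malicious update (as used in model-replacement backdoor attacks) violates the bound and would fail verification.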

The current implementation of RoFL is an academic proof-of-concept prototype, designed to evaluate the overhead of zero-knowledge proofs on client updates on top of secure aggregation. It is not meant to be used directly in production.

RoFL components

This repository is structured as follows.

  • RoFL_Service: This directory contains the code for the federated learning server and client, written in Rust.
  • RoFL_Crypto: This directory contains the cryptographic library to generate and verify the zero-knowledge proof constraints used in RoFL.
  • RoFL_Train_Client: This directory contains a Python service that handles the training and inference operations for the machine learning model. The RoFL_Service interfaces with the RoFL_Train_Client via gRPC. This component acts as a wrapper around the FL Analysis framework, which is used for machine learning training.

Utilities

  • ansible: The Ansible setup used for the evaluation of the framework. For more information on how to use this, see ansible/README.md.

  • plots: This directory contains code used to generate plots for the paper.

End-to-end implementation

The end-to-end setup consists of two components. First, the secure federated learning with constraints implementation (written in Rust) handles the communication between the server and the clients. Second, on both the client and server side, this component offloads the machine learning operations to a Python training and evaluation service.

Getting Started

Follow these steps to run the implementation on your local machine.

Requirements

  • Python 3.7
  • Rust (minimum version: 1.52.0-nightly)

Installation

Both the secure FL component and the training service are installed separately.

Secure FL with constraints

  1. Clone this repository:
git clone git@github.com:pps-lab/rofl-project-code.git
  2. Install Cargo/Rust:
curl https://sh.rustup.rs -sSf | sh -s -- -y
  3. Switch to nightly. Note: as of now, only a specific nightly version is supported because a dependency uses a deprecated feature.
rustup override set nightly-2021-05-11
  4. Pin specific versions of some dependencies (later versions break some nightly features):
cargo update -p proc-macro2 --precise 1.0.28
cargo update -p packed_simd_2 --precise 0.3.4
  5. Build the project:
cargo build

Python training service

  1. Install the requirements for the train service wrapper (in rofl-project-code):
cd rofl_train_client
pip install -r requirements.txt
  2. Download the analysis framework:
cd ../../ # go up to the workspace directory
git clone git@github.com:pps-lab/fl-analysis.git
  3. Install the requirements for the analysis framework:
cd fl-analysis
pipenv install

Usage

The framework can be used in two ways.

Using Ansible

We provide a setup in Ansible to easily deploy and evaluate the framework on multiple servers on AWS. See ansible/README.md for instructions on how to use this Ansible setup.

Manually

To run the setup manually, several components must be started separately, in the order given below. The examples shown are for a basic local configuration with four clients and L∞-norm (infinity norm) range proof verification. The implementation of RoFL uses the analysis framework for model training and evaluation. In the following, we assume this directory structure:

Top-level directory (e.g., workspace):

  • rofl-project-code (this repository)
  • fl-analysis (the analysis framework)

Each component must be run in a separate terminal window.
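Alternatively, the mandatory components below can be launched from a single Python script. The following is a sketch only and not part of the repository: it assumes the workspace layout above, that `cargo build` and `pipenv install` (in fl-analysis) have been run, and it uses pipenv's `PIPENV_PIPFILE` environment variable to reuse the fl-analysis environment for the training service.

```python
import os
import subprocess

# Assumption: the top-level workspace directory from the layout above.
WORKSPACE = os.path.expanduser("~/workspace")

# (working directory relative to WORKSPACE, command), in launch order.
# Commands and flags are taken from the manual steps in this README.
COMMANDS = [
    # 1. FL server
    ("rofl-project-code", ["./target/debug/flserver"]),
    # 2. Python training service, run in the fl-analysis pipenv
    ("rofl-project-code/rofl_train_client",
     ["pipenv", "run", "python", "trainservice/service.py"]),
    # 3. Four clients with range proof verification
    ("rofl-project-code", ["./target/debug/flclients", "-n", "4", "-r", "50016"]),
]

def launch_all(workspace=WORKSPACE):
    """Start each component as a subprocess and return the Popen handles."""
    procs = []
    for subdir, cmd in COMMANDS:
        cwd = os.path.join(workspace, subdir)
        env = dict(
            os.environ,
            PYTHONPATH=cwd,  # matches PYTHONPATH=$(pwd) in the manual steps
            # Point pipenv at the fl-analysis environment instead of cwd.
            PIPENV_PIPFILE=os.path.join(workspace, "fl-analysis", "Pipfile"),
        )
        procs.append(subprocess.Popen(cmd, cwd=cwd, env=env))
    return procs
```

Running the components in separate terminals, as described below, remains the reference workflow; this sketch is merely a convenience for local experiments.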

Server

In rofl-project-code, run the server:

./target/debug/flserver

Client Trainer

First, navigate to the analysis framework directory and enter the pipenv:

cd ../fl-analysis
pipenv shell

Then, navigate back to the Python directory in the implementation repository:

cd ../rofl-project-code/rofl_train_client

From the rofl_train_client directory, run the Python service:

PYTHONPATH=$(pwd) python trainservice/service.py

Client

In the rofl-project-code directory, run the client executable.

cd ../
./target/debug/flclients -n 4 -r 50016

Observer (optional)

Once the clients are running, training starts. In addition, the observer component can be used to evaluate the model accuracy on the server side. To do so, first navigate to the analysis framework directory and enter the pipenv:

cd ../fl-analysis
pipenv shell

Then, navigate back to the Python directory in the implementation repository:

cd ../rofl-project-code/rofl_train_client

Set the PYTHONPATH to include the current directory and run

PYTHONPATH=$(pwd) python trainservice/observer.py

The observer will connect to the FL server and receive the global model for each round.

Logging

The implementation outputs time and bandwidth measurements in several files.

Benchmark Log Format

The benchmark files for both the server and the clients can be found in the benchlog folder.

Format of the server log

t1--t2--t3--t4
t1: round starts
t2: round aggregation done
t3: round param extraction done
t4: verification completes

Format of a benchmark log line:
<Round ID>, <t2 - t1>, <t3 - t2>, <t4 - t3>, <total duration>

Format of the client log

t1--t2--t3--t4--t5
t1: model meta received
t2: model completely received
t3: local model training done
t4: model update encryption + proofs completed
t5: model sent to server

Format of a benchmark log line:
<Round ID>, <t2 - t1>, <t3 - t2>, <t4 - t3>, <t5 - t4>, <total duration>, <bytes received>,  <bytes sent>
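The comma-separated log lines above are easy to post-process. The following helpers are a sketch only (not part of the repository); the field names are taken from the timestamp descriptions above.

```python
def parse_server_line(line):
    """Parse one server benchmark line: round, t2-t1, t3-t2, t4-t3, total."""
    round_id, agg, extract, verify, total = (float(v) for v in line.split(","))
    return {"round": int(round_id), "aggregation": agg,
            "param_extraction": extract, "verification": verify,
            "total": total}

def parse_client_line(line):
    """Parse one client benchmark line, including byte counters."""
    vals = [float(v) for v in line.split(",")]
    round_id, recv, train, prove, send, total, rx, tx = vals
    return {"round": int(round_id), "model_receive": recv,
            "training": train, "encrypt_and_prove": prove,
            "send": send, "total": total,
            "bytes_received": int(rx), "bytes_sent": int(tx)}

# Hypothetical example line in the server format:
print(parse_server_line("3, 0.12, 0.05, 1.80, 1.97")["round"])  # prints 3
```

Such a parser can feed the durations directly into a plotting script, e.g. for the figures in the plots directory.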

License

This project's code is distributed under the MIT License. See LICENSE for more information.

Contact

Project Links: https://github.com/pps-lab/rofl-project-code and https://pps-lab.com/research/ml-sec/

Contributors

  • anwarhit
  • hiddely
  • nicolas-kuechler
