
SecSMT: Securing SMT Processors against Contention-Based Covert Channels

Authors: Mohammadkazem Taram, Xida Ren, Ashish Venkat, Dean Tullsen.

Appears in USENIX Security 2022.

Introduction

This artifact comprises two main, largely separable components: the framework for covert-channel measurements and the simulation infrastructure for our mitigations.

Covert Channels

Overview

  • System Requirements
  • Getting Started (5 minutes)
  • Install dependencies (10 minutes)
  • Switch to Performance Mode (5 minutes)
  • Run the Covert-Channel Measurements (2 hours)
  • Validate Results (10 minutes)

0. Requirements

  • Intel(R) Core(TM) i7-6770HQ Processor
  • AMD Ryzen Threadripper 3960X Processor

1. Getting Started

  • Install git if it is not already installed: sudo apt-get install git
  • Use an appropriate package manager if not running Ubuntu/Debian.
  • Clone our git repository and enter it: git clone https://github.com/mktrm/secSMT_Artifact && cd secSMT_Artifact
  • Checkout the right commit hash: git checkout <commit-hash>
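  • Putting this step together, a minimal sketch (the commit hash is a placeholder; use the one provided with the artifact):
  sudo apt-get install git
  git clone https://github.com/mktrm/secSMT_Artifact
  cd secSMT_Artifact
  git checkout <commit-hash>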

2. Install dependencies

  • Install all the dependencies required for the covert-channel framework: cd channels/DriverSrcLinux && sudo ./init.sh
  • Install the performance-counter module: sudo ./install.sh

3. Switch to Performance Mode

  • Force the first physical core into performance mode: sudo cpupower -c 1 frequency-set -g performance
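  • To confirm the governor change, and to restore the default once the measurements are done, something like the following should work (the default governor is typically powersave or ondemand depending on the cpufreq driver; adjust to your system):
  sudo cpupower -c 1 frequency-info                # check the current governor on CPU 1
  sudo cpupower -c 1 frequency-set -g powersave    # restore the default after the measurements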

4. Evaluate the Covert Channels

  • Go back to the top-level channels folder: cd ..
  • Run all the measurements (should take around 1 hour): make run
  • This should print the average bandwidth achieved for each covert channel; a consolidated sketch of steps 2-4 follows below.
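  • A minimal end-to-end sketch of steps 2-4, starting from the repository root (covert_results.txt is just an illustrative file name for keeping a copy of the output):
  cd channels/DriverSrcLinux
  sudo ./init.sh                                   # covert-channel framework dependencies
  sudo ./install.sh                                # performance-counter module
  sudo cpupower -c 1 frequency-set -g performance  # performance mode
  cd ..                                            # back to the top-level channels folder
  make run | tee covert_results.txt                # run all measurements, keep a copy of the bandwidths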

5. Validate the Results

If the measurements are taken on one of the processors listed above, the bandwidth numbers should be close to, though not exactly the same as, the results reported in Table 1 of the paper. Fluctuations in the bandwidth and error-rate numbers can be caused by various sources of noise such as voltage/frequency scaling and OS scheduling.

Mitigations

Overview

  • System Requirements
  • Getting Started (5 minutes)
  • Compile gem5 (15 minutes)
  • Run Spec Experiments (min of ~800 core hours)
  • Validate Results (5 minutes)
  • Run JavaScript Benchmarks (~40 core hours)
  • Validate Results (5 minutes)
  • Run Security Experiments (~10 minutes)
  • Validate Results (5 minutes)

0. Requirements

  • Access to SPEC 2017 CPU benchmarks and simpoints
  • Around 800 core hours; some SPEC benchmarks require 64 GB of memory
  • The run script for the SPEC 2017 experiments requires the Slurm workload manager

1. Getting Started

  • Install git if it is not already installed: sudo apt-get install git
  • Use an appropriate package manager if not running Ubuntu/Debian.
  • Clone our git repository: git clone https://github.com/mktrm/secSMT_Artifact && cd secSMT_Artifact/mitigations
  • Checkout the right commit hash: git checkout <commit-hash>

2. Install gem5 dependency

  • Install all the dependencies required for gem5 compilation by running sudo ./scripts/install_requirements.sh
    • [ArtifactEvaluators] You can run ./scripts/load_requirements.sh on our servers to load the required Environment Modules

3. Build the simulator

  • Run scripts/build_gem5.sh to build the simulator.
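  • A minimal sketch of steps 2-3, run from secSMT_Artifact/mitigations; the final check assumes the build produces build/X86/gem5.fast, the binary used in step 4:
  sudo ./scripts/install_requirements.sh   # gem5 build dependencies
  ./scripts/build_gem5.sh                  # compile the simulator (~15 minutes)
  ls -lh build/X86/gem5.fast               # sanity check for the binary expected by later steps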

4. Generate SPEC Simpoints

  • Download the PinPlay kit (pinplay-drdebug-3.7, based on Pin 3.7) and extract it:
  gunzip pinplay-drdebug-3.7-pin-3.7-97619-g0d0c92f4f-gcc-linux.tar.gz
  tar -xvf pinplay-drdebug-3.7-pin-3.7-97619-g0d0c92f4f-gcc-linux.tar
  mv pinplay-drdebug-3.7-pin-3.7-97619-g0d0c92f4f-gcc-linux pin
  • Set up the environment for the Pin tool and build the PinPlay examples:
  export PIN_KIT=$(pwd)/pin
  cd $PIN_KIT/extras/pinplay/examples
  make
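  • As a quick sanity check that the kit works (this assumes the standard Pin kit layout, which ships a pin launcher at the kit root):
  $PIN_KIT/pin -- /bin/ls                  # run a trivial program under Pin with no tool attached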
  • Create a Pin configuration file for each benchmark. For example, here is the content of mcf.cfg, the config file we used for the mcf benchmark (a sketch for generating these per-benchmark files follows the example):
[Parameters]
program_name: mcf
input_name: ref
command: SPEC2017/benchspec/CPU/605.mcf_s/run/run_base_refspeed_mytest-m64.0007/mcf_s_base.mytest-m64 inp.in
maxk: 32
mode: st
warmup_length: 100000
slice_size: 100000000
pinplayhome: [replace_with_PIN_path]   
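  • Writing one .cfg per benchmark by hand is tedious; a hypothetical helper like the one below can stamp them out from the same template (the benchmark list and the run-directory/binary placeholders are illustrative and must be replaced with values from your SPEC installation):
# gen_cfgs.sh -- illustrative sketch, not part of the artifact
SPEC_RUN=SPEC2017/benchspec/CPU                  # assumed SPEC 2017 layout, as in the example above
for bench in mcf lbm; do                         # hypothetical benchmark list
cat > ${bench}.cfg <<EOF
[Parameters]
program_name: ${bench}
input_name: ref
command: ${SPEC_RUN}/<replace_with_run_dir_for_${bench}>/<replace_with_${bench}_binary> <replace_with_inputs>
maxk: 32
mode: st
warmup_length: 100000
slice_size: 100000000
pinplayhome: ${PIN_KIT}
EOF
done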
  • Run the PinPlay tool to create simpoints (this should take several hours):
$PIN_KIT/extras/pinplay/scripts/pinpoints.py --cfg mcf.cfg -l -b --simpoint --whole_pgm_dir run/mcf/whole
  • If the previous command is successful, you should have a folder named mcf.ref.[num].Data that contains simpoint files: t.simpoints and t.weights.

  • Now use gem5.fast with the atomic CPU model to create checkpoints of the representative regions (this can take up to several weeks):

build/X86/gem5.fast configs/example/se.py --take-simpoint-checkpoints=t.simpoints,t.weights,100000000,30000000 -c ./mcf_s_base.mytest-m64 -o 'inp.in' --cpu-type=AtomicSimpleCPU
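  • If checkpointing succeeds, gem5 writes one checkpoint directory per selected simpoint under its output directory (assumed here to be the default m5out), which can be verified with:
  ls -d m5out/cpt.*                        # one cpt.* directory per simpoint checkpoint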

5. Run Spec Experiments

  • Create the SPEC folder workloads/spec2017 with all the available simpoints; alternatively, make sure the script scripts/spec17/config.py points to the correct spec17 root folder.

    • [ArtifactEvaluators] You can run ln -s /p/csd/mtaram/spec2017-2/ workloads/spec2017 on our servers to link the folder containing spec2017 simpoints into the correct path.
  • If you already have access to joint SPEC checkpoints, skip this step. Otherwise, use python scripts/spec17/merge_ckp.py to create joint multithreaded SMT checkpoints from the single-threaded SPEC simpoints.

    • [ArtifactEvaluators] You should skip this step.
    • Users should manually edit scripts/spec17/config.py and set heaviest_ckp to the simpoint number with the maximum weight in t.weights from the previous step.
    • The script then goes through all pairs of the benchmarks specified in scripts/spec17/config.py and generates the joint checkpoints.
  • Use python3 scripts/spec17/run_gem5.py to submit all the simulation jobs to the Slurm scheduler.

    • It should take around 30 minutes to submit all the jobs, and they should take around 2 to 3 hours to complete if enough compute nodes are available. You can check the status of the jobs with squeue, as sketched below.
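  • Standard Slurm commands can be used to monitor progress, for example:
  squeue -u $USER                                       # jobs still queued or running
  sacct -u $USER --format=JobID,JobName,State,Elapsed   # state of jobs submitted today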

6. Validate Results

  • Run python3 scripts/spec17/draw_figs.py to parse the results and draw the performance figure.
  • Open the PDF file stored in the current directory (fig-mitigation-all.pdf) and compare the results with Figure 7 of the paper.
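  • For example, on a desktop system the figure can be regenerated and opened in one go (xdg-open is just one way to view the PDF; the same pattern applies to the validation steps 8 and 10 below):
  python3 scripts/spec17/draw_figs.py      # parse results and produce fig-mitigation-all.pdf
  xdg-open fig-mitigation-all.pdf          # compare against Figure 7 in the paper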

7. Run JavaScript Experiments

  • Use ./scripts/browserlike/runall.sh slurm to submit the JavaScript experiments to the Slurm scheduler.
    • Alternatively, you can run this locally using ./scripts/browserlike/runall.sh
    • This should take about an hour if enough parallelism is available.

8. Validate Results

  • Run python3 scripts/browserlike/graph.py to parse the results and draw the performance figure.
  • Open the PDF file stored in the current directory (browserlike.pdf) and compare the results with Figure 8 of the paper.

9. Run Security Experiments

  • Use python3 scripts/security/run_security_eval.py slurm to submit security experiments to the Slurm scheduler.
    • Alternatively, you can run this locally using python3 scripts/security/run_security_eval.py
    • This should take about ten minutes if enough parallelism is available.

10. Validate Results

  • Run python3 scripts/security/draw_sec_figs.py to parse the results and draw the security figure.
  • Open the PDF file stored in the current directory (fig-sec-1.pdf) and compare the results with Figure 6 of the paper.

Citation

Please cite the following paper if you use this artifact:

@inproceedings{secsmt,
 author = {Mohammadkazem Taram and Xida Ren and Ashish Venkat and Dean Tullsen},
 title = {{SecSMT: Securing SMT Processors against Contention-Based Covert Channels}},
 booktitle = {{USENIX} Security Symposium ({USENIX} Security)},
 year = {2022}
}
