

Home Page: https://arlineq.wixsite.com/arline

License: GNU Affero General Public License v3.0


arline_benchmarks's Introduction

Arline Benchmarks

The Arline Benchmarks platform allows you to benchmark various algorithms for quantum circuit mapping/compression against each other on a set of predefined hardware types and target circuit classes.

Arline Benchmarks has recently been used by the Oxford Quantum Circuits team for compiler performance testing (see blog post).

Demo (report generation preview)

Benchmarking run

LaTeX report

List of supported compilation frameworks

Installation

$ pip3 install arline-benchmarks

Alternatively, Arline Benchmarks can be installed locally from source. Clone the repository and cd into the source directory:

$ git clone https://github.com/ArlineQ/arline_benchmarks.git
$ cd arline_benchmarks

We recommend installing Arline Benchmarks in a virtual environment.

$ virtualenv venv
$ source venv/bin/activate

If virtualenv is not installed on your machine, run

$ pip3 install virtualenv

Next, in order to install the Arline Benchmarks platform, execute:

$ pip3 install .

Alternatively, Arline Benchmarks can be installed in editable mode:

$ pip3 install -e .

VOQC installation

To install the Python wrapper for the VOQC package, follow these instructions.

Setting Environment Variables

Add to your ~/.profile:

export ARLINE_BENCHMARKS=<full path to arline_benchmarks repository>

TeXLive installation

Automated generation of the LaTeX report is an essential part of Arline Benchmarks. To enable the full functionality of Arline Benchmarks, you will need to install the TeXLive distribution.

Ubuntu or Debian Linux:

To install TeXLive, simply run in a terminal:

$ sudo apt install texlive-latex-extra

Windows:

On Windows, TeXLive can be installed by downloading the installer from the official website and following the installation instructions.

MacOS:

On macOS, simply install the MacTeX distribution from the official website.

Alternative solution for Linux/Windows/MacOS:

TeXLive can also be installed as part of the MiKTeX distribution by downloading and installing it from https://miktex.org. The TeXworks frontend is not required and can be skipped.

Getting started

Benchmark example run

To run your first benchmarking experiment, execute the following commands:

$ cd arline_benchmarks/configs/compression/
$ bash run_and_plot.sh

The bash script run_and_plot.sh executes:

  1. scripts/arline-benchmarks-runner - runs the benchmarking experiment and saves the result to results/output/gate_chain_report.csv
  2. arline_benchmarks/reports/plot_benchmarks.py - generates plots with metrics based on results/output/gate_chain_report.csv and saves them to results/output/figure
  3. scripts/arline-latex-report-generator - generates the results/latex/benchmark_report.tex and results/latex/benchmark_report.pdf report files with the benchmarking results.

The configuration file configs/compression/config.jsonnet contains a full description of the benchmarking experiments.

Generate plots with benchmark metrics

To re-draw the plots, execute (from arline_benchmarks/configs/compression/):

$ bash plot.sh

Generate LaTeX report

To re-generate the LaTeX report based on the last benchmarking run, execute (from arline_benchmarks/configs/compression/):

$ arline-latex-report-generator -i results -o results

How to create a custom compilation pipeline?

The key element of Arline Benchmarks is the concept of a compilation pipeline. A pipeline is a sequence of compilation stages: [stage1, stage2, stage3, ...].

A typical pipeline consists of the following stages:

  • Generation of a target circuit
  • Mapping of logical qubits to physical qubits
  • Qubit routing for a particular hardware coupling topology
  • Circuit compression by applying circuit identities
  • Rebase to the final hardware gate set

You can easily create a custom compilation pipeline by stacking individual stages (which may correspond to different compiler providers). A pipeline can consist of an unlimited number of compilation stages combined in an arbitrary order. The only exceptions are the first stage, target_analysis, and the optional final gateset-rebase stage.
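Conceptually, a pipeline is just a list of stages applied in order, each consuming the previous stage's output circuit. The sketch below illustrates that idea only; the stage functions and the run_pipeline helper are hypothetical placeholders, not the actual Arline Benchmarks API:

```python
# Illustrative sketch of the pipeline concept: a list of stages applied in order.
# All names below (target_analysis, compress, run_pipeline) are hypothetical
# placeholders, not the real Arline Benchmarks API.

def target_analysis(circuit):
    # First stage: analyze the target circuit (placeholder: pass-through).
    return circuit

def compress(circuit):
    # Toy compression stage: cancel adjacent pairs of identical
    # self-inverse gates (a simple circuit identity).
    result = []
    for gate in circuit:
        if result and result[-1] == gate and gate in ("h", "x", "cx"):
            result.pop()  # two identical self-inverse gates cancel
        else:
            result.append(gate)
    return result

def run_pipeline(stages, circuit):
    # Apply each compilation stage to the output of the previous one.
    for stage in stages:
        circuit = stage(circuit)
    return circuit

pipeline = [target_analysis, compress]  # [stage1, stage2, ...]
print(run_pipeline(pipeline, ["h", "h", "cx", "x"]))  # -> ['cx', 'x']
```

In the real platform each stage is a configured strategy (mapping, routing, compression, rebase) rather than a bare function, but the sequential composition is the same.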

Configuration file .jsonnet

Pipelines are specified in the main .jsonnet configuration file. An example configuration file is located in configs/compression/config.jsonnet.

  • The function local pipelines_set(target, hardware, plot_group) defines the list of compilation pipelines to be benchmarked, [pipeline1, pipeline2, ...].

Each pipeline_i = {...} is a dictionary containing a description of the pipeline and a list of compilation stages.

  • Target circuit generation is defined in the .jsonnet functions local random_chain_cliford_t_target(...) and local random_chain_cx_u3_target(...).

  • The benchmarking experiment specification is defined at the end of the config file in the dictionary with keys {pipelines: ..., plotter: ...}.
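Putting the pieces above together, a config might be shaped roughly as follows. This is a hypothetical sketch of the structure only; the field names are illustrative and not the exact schema, so refer to configs/compression/config.jsonnet for the authoritative example:

```jsonnet
// Hypothetical sketch of the config structure; field names are illustrative.
local pipelines_set(target, hardware, plot_group) = [
  {
    id: "example_pipeline",
    plot_group: plot_group,
    stages: [
      target,                 // target circuit generation (first stage)
      { id: "mapping" },      // logical-to-physical qubit mapping
      { id: "routing" },      // routing for the hardware coupling topology
      { id: "compression" },  // circuit compression via circuit identities
      { id: "rebase" },       // optional final gateset rebase
    ],
  },
];

// Experiment specification at the end of the config file.
{
  pipelines: pipelines_set("random_chain", "example_hw", "group1"),
  plotter: {},
}
```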

API documentation

API documentation is available here. To generate the HTML API documentation, run:

$ cd docs/
$ make html

Running tests

To run the unit tests and check installed dependencies:

$ tox

Folder structure

arline_benchmarks
│
├── arline_benchmarks            # platform classes
│   ├── config_parser            # parser of pipeline configuration
│   ├── engine                   # pipeline engine
│   ├── metrics                  # metrics for pipeline comparison
│   ├── pipeline                 # pipeline
│   ├── reports                  # LaTeX report generator
│   ├── strategies               # list of strategies for mapping/compression/rebase
│   └── targets                  # target generator
│
├── circuits                     # qasm circuits dataset
│
├── configs                      # configuration files
│   └── compression              # config .jsonnet file and .sh scripts
│
├── docs                         # documentation
│
├── scripts                      # run files
│
└── test                         # tests
    ├── qasm_files               # .qasm files for test
    └── targets                  # test for targets module

Contributors

Arline team: Yaroslav Kharkov, Eugeny Mikhantyev, Alina Ivanova, Alex Kotelnikov

arline_benchmarks's People

Contributors

surrealmind, yourball


arline_benchmarks's Issues

Pytket version update.

Hi,

I've noticed that the pytket version used in this repository is version 0.6.0. This version was released almost two years ago.

It would be great to see benchmarking results for the latest pytket version 1.3 if possible. Rerunning the benchmarks with the latest version would require a couple of changes to the compilation pipeline, as pytket's passes have changed since 0.6.0. There may also be some dependency issues to resolve. @ferbetanzo and I would be willing to help with this if needed.

Having an up-to-date comparison would certainly be very informative for us.

Thanks!

Installation fail

Hi,
I am a PhD student currently working on a transpiler. I would like to use your code for benchmarking, but I have problems installing the package. I have tried both pip install and cloning the repository...
Can you fix the code or help me with the installation?

Arline Benchmarks install

Good afternoon,

My colleague @LauraRgz (GitHub user) and I, Andrés Bravo, are PhD students from Spain. Our research is focused on Quantum Computing.

We want to use the Arline Benchmarks tool to try it with new circuits, applications, quantum gates, etc. We are writing to you because we could not install it. We managed to solve some errors related to incompatible releases, but after trying for several days we still could not install it correctly.

I got in touch with @alinaivanovaoff because we wrote to [email protected] and we got a reply that the inbox was full.

For this reason, we want to know if you can help us solve the errors shown in the attached image, which appear when we execute ./run_and_plot.sh

[attached image]

Thank you for your support and your time

Laura and Andrés

run_and_plot.sh failing with package dependency error

# python --version
Python 3.6.9
# sh run_and_plot.sh 
Traceback (most recent call last):
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 583, in _build_master
    ws.require(__requires__)
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 900, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 791, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (protobuf 3.8.0 (/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages), Requirement.parse('protobuf>=3.12.0'), {'google-api-core'})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mgsk/hackery/arline_benchmarks/venv/bin/arline-benchmarks-runner", line 4, in <module>
    __import__('pkg_resources').require('arline-benchmarks==0.1.8')
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3260, in <module>
    @_call_aside
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3244, in _call_aside
    f(*args, **kwargs)
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3273, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 585, in _build_master
    return cls._build_from_requirements(__requires__)
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 598, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 791, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (protobuf 3.8.0 (/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages), Requirement.parse('protobuf>=3.12.0'), {'google-api-core'})
Traceback (most recent call last):
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 583, in _build_master
    ws.require(__requires__)
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 900, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 791, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (protobuf 3.8.0 (/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages), Requirement.parse('protobuf>=3.12.0'), {'google-api-core'})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mgsk/hackery/arline_benchmarks/venv/bin/arline-latex-report-generator", line 4, in <module>
    __import__('pkg_resources').require('arline-benchmarks==0.1.8')
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3260, in <module>
    @_call_aside
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3244, in _call_aside
    f(*args, **kwargs)
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3273, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 585, in _build_master
    return cls._build_from_requirements(__requires__)
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 598, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 791, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (protobuf 3.8.0 (/home/mgsk/hackery/arline_benchmarks/venv/lib/python3.6/site-packages), Requirement.parse('protobuf>=3.12.0'), {'google-api-core'})
