The Arline Benchmarks platform allows you to benchmark various algorithms for quantum circuit mapping/compression against each other on a set of predefined hardware types and target circuit classes.
Arline Benchmarks has recently been used by the Oxford Quantum Circuits team for compiler performance testing (see blog post).
$ pip3 install arline-benchmarks
Alternatively, Arline Benchmarks can be installed locally in editable mode.
Clone the Arline Benchmarks repository and cd to the source directory:
$ git clone https://github.com/ArlineQ/arline_benchmarks.git
$ cd arline_benchmarks
We recommend installing Arline Benchmarks in a virtual environment.
$ virtualenv venv
$ source venv/bin/activate
If virtualenv is not installed on your machine, run:
$ pip3 install virtualenv
Next, to install the Arline Benchmarks platform, execute:
$ pip3 install .
Alternatively, Arline Benchmarks can be installed in editable mode:
$ pip3 install -e .
To install the Python wrapper for the VOQC package, follow these instructions.
Add to your ~/.profile:
export ARLINE_BENCHMARKS=<full path to arline_benchmarks repository>
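For example, assuming the repository was cloned into your home directory (the path below is illustrative; adjust it to your actual clone location):
$ echo 'export ARLINE_BENCHMARKS=$HOME/arline_benchmarks' >> ~/.profile
$ source ~/.profile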
Automated generation of a LaTeX report is an essential part of Arline Benchmarks. To enable the full functionality of Arline Benchmarks, you will need to install the TeXLive distribution.
To install TeXLive on Ubuntu, simply run in a terminal:
$ sudo apt install texlive-latex-extra
On Windows, TeXLive can be installed by downloading the installer from the official website and following the installation instructions.
On macOS, simply install the MacTeX distribution from the official website.
TeXLive can also be installed as part of the MiKTeX distribution, available from https://miktex.org. The TeXworks frontend is not required and can be ignored.
To run your first benchmarking experiment, execute the following commands:
$ cd arline_benchmarks/configs/compression/
$ bash run_and_plot.sh
The bash script run_and_plot.sh executes:
- scripts/arline-benchmarks-runner - runs the benchmarking experiment and saves the results to results/output/gate_chain_report.csv
- arline_benchmarks/reports/plot_benchmarks.py - generates plots with metrics based on results/output/gate_chain_report.csv and saves them to results/output/figure
- scripts/arline-latex-report-generator - generates the results/latex/benchmark_report.tex and results/latex/benchmark_report.pdf report files with the benchmarking results.
The configuration file configs/compression/config.jsonnet contains a full description of the benchmarking experiments.
To re-draw the plots, execute (from arline_benchmarks/configs/compression/):
$ bash plot.sh
To re-generate the LaTeX report based on the last benchmarking run, execute (from arline_benchmarks/configs/compression/):
$ arline-latex-report-generator -i results -o results
The key element of Arline Benchmarks is the concept of a compilation pipeline.
A pipeline is a sequence of compilation stages: [stage1, stage2, stage3, ...].
A typical pipeline consists of the following stages:
- Generation of a target circuit
- Mapping of logical qubits to physical qubits
- Qubit routing for a particular hardware coupling topology
- Circuit compression by applying circuit identities
- Rebase to the final hardware gate set
You can easily create a custom compilation pipeline by stacking individual stages (that might correspond to different compiler providers). A pipeline can consist of an unlimited number of compilation stages combined in an arbitrary order. The only exceptions are the first stage, target_analysis, and the optional last gateset rebase stage.
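As an illustration, a single pipeline in the config can be sketched as a stack of stages. The field names and stage identifiers below are hypothetical, chosen only to convey the stage-stacking idea; see configs/compression/config.jsonnet for the actual schema.

// Hypothetical sketch of one pipeline: an ordered stack of compilation stages.
// All field names and stage ids are illustrative, not the real config schema.
local example_pipeline = {
  pipeline_id: 'map_route_compress_rebase',
  stages: [
    { id: 'target_analysis' },  // mandatory first stage
    { id: 'mapping' },          // map logical qubits to physical qubits
    { id: 'routing' },          // route for the hardware coupling topology
    { id: 'compression' },      // compress the circuit via circuit identities
    { id: 'rebase' },           // optional final rebase to the hardware gate set
  ],
};

example_pipeline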
Pipelines should be specified in the main .jsonnet configuration file. An example of a configuration file is located in configs/compression/config.jsonnet.
- The function local pipelines_set(target, hardware, plot_group) defines a list of compilation pipelines to be benchmarked, [pipeline1, pipeline2, ...]. Each pipeline_i = {...} is represented as a dictionary that contains a description of the pipeline and a list of compilation stages.
- Target circuit generation is defined in the .jsonnet functions local random_chain_cliford_t_target(...) and local random_chain_cx_u3_target(...).
- Benchmarking experiment specifications are defined at the end of the config file in a dictionary with keys {pipelines: ..., plotter: ...}. A skeleton of this structure is sketched below.
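Putting these pieces together, the overall shape of the config file can be sketched as follows. All bodies below are placeholders (only the pipelines_set signature, the target function name, and the top-level keys come from the description above); consult configs/compression/config.jsonnet for the real definitions.

// Hypothetical skeleton of a config .jsonnet file; all bodies are placeholders.
local random_chain_cx_u3_target(num_qubits, num_gates) = {};  // hypothetical arguments

local pipelines_set(target, hardware, plot_group) = [
  // one dictionary per pipeline to be benchmarked
  { pipeline_id: plot_group + '_pipeline_1', target: target, hardware: hardware, stages: [] },
];

{
  pipelines: pipelines_set(random_chain_cx_u3_target(5, 100), 'example_hardware', 'demo'),
  plotter: {},  // settings for the generated figures (placeholder)
}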
API documentation is available here. To generate the HTML API documentation, run the following commands:
$ cd docs/
$ make html
To run the unit tests and check installed dependencies:
$ tox
arline_benchmarks
│
├── arline_benchmarks # platform classes
│ ├── config_parser # parser of pipeline configuration
│ ├── engine # pipeline engine
│ ├── metrics # metrics for pipeline comparison
│ ├── pipeline # pipeline
│ ├── reports # LaTeX report generator
│ ├── strategies # list of strategies for mapping/compression/rebase
│ └── targets # target generator
│
├── circuits # qasm circuits dataset
│
├── configs # configuration files
│ └── compression # config .jsonnet file and .sh scripts
│
├── docs # documentation
│
├── scripts # run files
│
└── test # tests
├── qasm_files # .qasm files for test
└── targets # test for targets module
Arline team: Yaroslav Kharkov, Eugeny Mikhantyev, Alina Ivanova, Alex Kotelnikov