
Create benchmark framework #325

Closed
ewinston opened this issue Mar 8, 2018 · 8 comments
Labels: priority: low · type: discussion · type: qa (Issues and PRs that relate to testing and code quality)

Comments

ewinston (Contributor) commented Mar 8, 2018

The project needs a way to profile the performance impact of changes to the SDK (compilation, simulation, etc.).
A possible framework: airspeed velocity (asv).
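For concreteness, here is a minimal sketch of what an asv benchmark for the compiler could look like. asv discovers files in a benchmarks/ directory and times any method whose name starts with `time_`; the file name, the circuit, and the `basis_gates` choice below are illustrative assumptions, not part of the original proposal.

```python
# benchmarks/transpile_bench.py -- hypothetical module under asv's
# benchmark directory. asv runs setup() before timing each time_* method.
from qiskit import QuantumCircuit, transpile


class TranspileBenchmarks:
    def setup(self):
        # A small GHZ-style circuit as a stand-in compilation workload.
        self.circuit = QuantumCircuit(5, 5)
        self.circuit.h(0)
        for i in range(4):
            self.circuit.cx(i, i + 1)
        self.circuit.measure(range(5), range(5))

    def time_transpile(self):
        # Times rewriting the circuit into a simple two-gate basis.
        transpile(self.circuit, basis_gates=['u3', 'cx'])
```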

jaygambetta (Member) commented

Can you make a proposal for this now that we have the three simulator backends in place: qasm, statevector, and unitary?
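For illustration, a hedged sketch of what such a proposal might measure across the three backends, assuming the pre-1.0 Qiskit API in which `Aer` and `execute` are importable from `qiskit` and the simulators carry the names used below:

```python
import time

from qiskit import Aer, QuantumCircuit, execute


def bench(backend_name, circuit, **run_args):
    """Time one end-to-end run on the named local simulator."""
    backend = Aer.get_backend(backend_name)
    start = time.perf_counter()
    execute(circuit, backend, **run_args).result()
    return time.perf_counter() - start


qc = QuantumCircuit(4)
qc.h(0)
for i in range(3):
    qc.cx(i, i + 1)

# Statevector and unitary simulation take the circuit without measurements.
for name in ('statevector_simulator', 'unitary_simulator'):
    print(name, bench(name, qc))

# The qasm simulator samples shots, so measurements are appended first.
qc.measure_all()
print('qasm_simulator', bench('qasm_simulator', qc, shots=1024))
```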

ewinston self-assigned this May 14, 2018
psyhtest commented May 15, 2018

Dear QISKit developers,

We have recently started a project called Quantum Collective Knowledge to benchmark existing quantum computing systems to pinpoint the state-of-the-art and forecast future developments. Quantum Collective Knowledge (QCK) builds upon Collective Knowledge (CK), a universal open framework for reproducible and collaborative R&D. CK is being contributed to by a growing community across industry (including semiconductor companies) and academia (including the Artifact Evaluation process at leading systems conferences and pilot integration with the ACM Digital Library). Please feel free to browse through public results from various workflows at http://cknowledge.org/repo.

As part of QCK, we are providing CK workflows for several popular quantum packages, including CK-QISKit, CK-Rigetti, CK-ProjectQ, etc.

We started to provide support for multiple QISKit backends a couple of months ago, but then decided to wait until the dust from the major rework on simulators settled. It sounds like now is a good time to resume this work!

Supporting multiple backends is only one part of the story. What we need are realistic, useful, and diverse workloads! We have spotted a comprehensive list of standard algorithms. Would benchmarking these be of interest? Or would benchmarking and optimising variational algorithms perhaps be more interesting? Please share your thoughts!

/cc @fvella @ens-lg4 @gfursin @stevebrierley @oscarhiggott @daochenw
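As one concrete example of the kind of standard-algorithm workload mentioned above, a hand-rolled quantum Fourier transform scales naturally for benchmarking. This is only an illustrative sketch (`cp` is Qiskit's controlled-phase gate), not a circuit taken from the referenced list:

```python
from math import pi

from qiskit import QuantumCircuit


def qft(n):
    """Textbook n-qubit QFT; the final qubit-reversal swaps are omitted."""
    qc = QuantumCircuit(n)
    for j in range(n):
        qc.h(j)
        for k in range(j + 1, n):
            # Controlled phase rotation R_{k-j} from qubit k onto qubit j.
            qc.cp(pi / 2 ** (k - j), k, j)
    return qc


# Sweeping the width gives a simple compilation/simulation scaling study.
circuits = [qft(n) for n in (4, 8, 12)]
```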

delapuente (Contributor) commented Jul 31, 2018

Just saw this issue. @psyhtest, the Quantum Collective Knowledge project sounds super interesting. I don't know if we published a post about it, but we should.

Anyway, I think @ewinston was suggesting a benchmarking infrastructure for Qiskit itself that would help us detect regressions during development.

psyhtest commented

Many thanks @delapuente! This would be awesome.

Speaking of the project's internal benchmarking needs, we do develop various Collective Knowledge plugins for micro- and macro-level benchmarking and optimization (although it would probably be atom-level for quantum benchmarking :)).

For example, in collaboration with Arm we have developed and released CK-NNTest, a test suite for collaboratively validating, benchmarking, and optimizing neural-net operators across platforms, frameworks, and datasets. We have successfully used it to optimize the Arm Compute Library for their latest GPU architecture (Bifrost), achieving up to 10x kernel-level speedups, up to 5x operator-level speedups, and up to 3x network-level speedups (e.g. see our ReQuEST@ASPLOS'18 paper). Moreover, we keep using it to detect performance anomalies and regressions.

psyhtest commented Oct 4, 2018

FYI: We are using QCK for our second Quantum Computing Hackathon in London this weekend, with participants having priority access to IBM QX devices. (@quantumjim is going to be around.)

1ucian0 (Member) commented Oct 23, 2018

Some notes on continuous benchmarking can be found here: #930

ajavadia assigned mtreinish and unassigned ewinston Oct 30, 2018
mtreinish (Member) commented

Just to point out that this is in progress here: #30. There is one issue to debug related to installing qiskit; once that's resolved and the PR merges, we can work on running the benchmarks on a continuous basis.

ajavadia (Member) commented

OK, let's close this, as it will be tracked in the meta-package; that is where benchmarks will live for the whole Qiskit project.

We will continue adding to these benchmarks over time, and contributions are welcome.
