Create benchmark framework #325
The project needs a way to profile changes to the SDK (compilation, simulation, etc.). Possible framework: airspeed velocity (asv).
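As a rough illustration of what an asv-based setup could look like, here is a minimal sketch of a benchmark module: asv discovers classes in a `benchmarks/` directory (configured through `asv.conf.json`) and times any method whose name starts with `time_`. The module path, the `TranspileBench` class, and the GHZ-style circuit are illustrative assumptions, and the exact Qiskit import paths depend on the installed version.

```python
# benchmarks/bench_transpile.py -- hypothetical asv benchmark module (sketch).
# asv times every method whose name starts with ``time_``; ``setup`` runs
# before each measurement and is excluded from the timing.
from qiskit import QuantumCircuit, transpile


class TranspileBench:
    """Time how long the SDK takes to compile a simple GHZ-style circuit."""

    params = [5, 10, 20]          # circuit widths to sweep over
    param_names = ["n_qubits"]

    def setup(self, n_qubits):
        # Build the circuit once per parameter value, outside the timed region.
        circuit = QuantumCircuit(n_qubits, n_qubits)
        circuit.h(0)
        for target in range(1, n_qubits):
            circuit.cx(0, target)
        circuit.measure(range(n_qubits), range(n_qubits))
        self.circuit = circuit

    def time_transpile(self, n_qubits):
        # The quantity asv tracks: wall-clock time of one transpile call.
        transpile(self.circuit, basis_gates=["u3", "cx"], optimization_level=1)
```

Running `asv run` over a range of commits would then expose compilation-time regressions per circuit width.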
Can you make a proposal for this now that we have the three simulator backends in place: qasm, statevector, and unitary?
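For context, a proposal could start from a naive timing sweep over those three backends, along the lines of the sketch below. This assumes a pre-1.0 Qiskit where `BasicAer` and `execute` are available; the backend names and the placeholder circuit are assumptions, not part of any agreed benchmark suite.

```python
# Rough timing sweep over the three simulator backends (sketch only).
import time

from qiskit import QuantumCircuit, BasicAer, execute

BACKENDS = ["qasm_simulator", "statevector_simulator", "unitary_simulator"]


def make_circuit(n_qubits, with_measure):
    """A small GHZ-style placeholder workload."""
    if with_measure:
        circuit = QuantumCircuit(n_qubits, n_qubits)
    else:
        circuit = QuantumCircuit(n_qubits)
    circuit.h(0)
    for target in range(1, n_qubits):
        circuit.cx(0, target)
    if with_measure:
        circuit.measure(range(n_qubits), range(n_qubits))
    return circuit


for name in BACKENDS:
    backend = BasicAer.get_backend(name)
    # The unitary simulator rejects measurements, so only add them (and
    # multiple shots) for the qasm backend.
    is_qasm = name == "qasm_simulator"
    circuit = make_circuit(5, with_measure=is_qasm)
    start = time.perf_counter()
    execute(circuit, backend, shots=1024 if is_qasm else 1).result()
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.3f} s")
```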
Dear QISKit developers,

We have recently started a project called Quantum Collective Knowledge to benchmark existing quantum computing systems, to pinpoint the state of the art and forecast future developments. Quantum Collective Knowledge (QCK) builds upon Collective Knowledge (CK), a universal open framework for reproducible and collaborative R&D. CK is being contributed to by a growing community across industry (including semiconductor companies) and academia (including the Artifact Evaluation process at leading systems conferences and pilot integration with the ACM Digital Library). Please feel free to browse through public results from various workflows at http://cknowledge.org/repo.

As part of QCK, we are providing CK workflows for several popular quantum packages, including CK-QISKit, CK-Rigetti, CK-ProjectQ, etc. We started to provide support for multiple QISKit backends a couple of months ago, but then decided to wait until the dust from the major rework on simulators settled. It sounds like now is a good time to resume this work!

Supporting multiple backends is only one part of the story. What we need are realistic, useful and diverse workloads! We have spotted a comprehensive list of standard algorithms. Would benchmarking these be of interest? Or would perhaps benchmarking and optimising variational algorithms be more interesting? Please share your thoughts!

/cc @fvella @ens-lg4 @gfursin @stevebrierley @oscarhiggott @daochenw
Many thanks @delapuente! This would be awesome. Speaking of the project's internal benchmarking needs, we do develop various Collective Knowledge plugins for micro- and macro-level benchmarking and optimization (although it would probably be atom-level for quantum benchmarking :)). For example, in collaboration with Arm we have developed and released CK-NNTest, a test suite for collaboratively validating, benchmarking and optimizing neural net operators across platforms, frameworks and datasets. We have successfully used it to optimize the Arm Compute Library for their latest GPU architecture (Bifrost), achieving up to 10x kernel-level speedups, up to 5x operator-level speedups and up to 3x network-level speedups (e.g. see our ReQuEST@ASPLOS'18 paper). Moreover, we keep using it to detect performance anomalies and regressions.
FYI: We are using QCK for our second Quantum Computing Hackathon in London this weekend, with participants having priority access to IBM QX devices. (@quantumjim is going to be around.)
Some notes on continuous benchmarking can be found here: #930
Just to point out that this is in progress here: #30. There's one issue to debug related to installing qiskit. Once that's resolved and it merges, we can work on running that on a continuous basis.
OK, let's close this, as it will be tracked in the meta-package, and that's where benchmarks will live for the whole Qiskit project. We will continue adding to these benchmarks over time, and contributions are welcome.