Expand Quilt Calibrations on Client #1264

Open
kalzoo opened this issue Oct 26, 2020 · 1 comment · Fixed by #1271
kalzoo commented Oct 26, 2020

Issue Description

Currently, pyQuil can request calibrations from the Rigetti Translation Service so that a user can view the current calibrations that may be applied to their program at translation time. However, those calibrations are not actually applied on the client: the user's program is shipped off unchanged to the service, which then applies its own versions of the calibrations to fully expand instructions into what we call "simple Quilt." This pattern has a few advantages and disadvantages:

Advantages:

  1. The most-up-to-date version of the calibrations is always applied
  2. Because of this pattern, the user doesn't actually have to be aware of Quilt or calibrations at all. This mechanism allows transparent support of native, uncalibrated Quil.

Disadvantages:

  1. "Magic" / "action at a distance" in that calibrations are applied to a program which is then returned in encrypted format to the user. The user must assume that the calibrations were applied in the expected way. That's currently a reliable assumption given that the service uses Pyquil to do the expansion, but that's not guaranteed, and there could still be a version or environment mismatch. We've seen this with certain users - calibrations are not being expanded how they might expect.
  2. An identical base program might be translated differently over time as the system-provided calibrations change. Granted, this is already the expected behavior with native Quil, because the program is translated with the latest settings available. But given the lower-level control Quilt offers, it makes sense to put that control in the hands of the user, allowing them to keep calibrating their program with identical calibrations over the course of a session.

One use case to consider is users who perform translation not directly through pyQuil but through consuming software such as Forest Benchmarking.

Proposed Solution

  1. QPUCompiler#_calibrations is a cache of calibrations fetched from the service
  2. QPUCompiler#get_calibrations fetches calibrations from the service and returns them
  3. QPUCompiler#refresh_calibrations fetches calibrations from the service (get_calibrations) and caches them (on _calibrations)
  4. QPUCompiler#native_quil_to_executable will call program.calibrate to expand calibrations prior to submission.
  5. Bodies of requests made by QPUCompiler are logged at debug level to allow troubleshooting of Pyquil when used as a dependency.
  6. The translation service will still apply system calibrations and expand any calibrated instructions not already expanded on the client. This allows native, uncalibrated Quil to be translated in the same way, naive to the existence of Quilt.
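The proposed caching flow can be sketched roughly as follows. The `QPUCompiler` method names (`_calibrations`, `get_calibrations`, `refresh_calibrations`, `native_quil_to_executable`) come from the list above; the `_fetch_from_service` stub and the string-based expansion are hypothetical placeholders for the real translation-service client and `program.calibrate`, not the actual pyQuil implementation:

```python
class QPUCompiler:
    """Sketch of the proposed client-side calibration caching."""

    def __init__(self):
        # Cache of calibrations fetched from the service (step 1).
        self._calibrations = None

    def _fetch_from_service(self):
        # Placeholder: in practice this would call the Rigetti
        # Translation Service over the network.
        return "DEFCAL RX(pi/2) 0:\n    # pulse-level definition here"

    def get_calibrations(self):
        # Fetches calibrations from the service and returns them (step 2).
        return self._fetch_from_service()

    def refresh_calibrations(self):
        # Fetches and caches calibrations (step 3), so subsequent
        # translations within a session use identical calibrations.
        self._calibrations = self.get_calibrations()
        return self._calibrations

    def native_quil_to_executable(self, program):
        # Expands calibrations on the client prior to submission
        # (step 4). Here a stand-in for program.calibrate: prepend the
        # cached calibrations so the service sees expanded Quilt.
        if self._calibrations is None:
            self.refresh_calibrations()
        return self._calibrations + "\n" + program
```

Under this sketch, a user who wants stable calibrations across a session calls `refresh_calibrations()` once and then submits programs as usual; calling it again picks up the latest system calibrations.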
@kalzoo kalzoo added the enhancement ✨ A request for a new feature. label Oct 26, 2020
@notmgsk notmgsk self-assigned this Oct 28, 2020
@AlexanderGroeger

I was reviewing Rigetti's QCS API for getting calibration data. I want to access earlier versions of calibration to model how it changes over time, but I don't see an option for that in their base API. I assume PyQuil has no way to do this either? Is there someone I can contact to get access to this data?
