Migrate to pyQuil v3 #107
Conversation
```diff
@@ -190,7 +163,7 @@ def __init__(
         self.calibrate_readout = calibrate_readout
         self.wiring = {i: q for i, q in enumerate(self.qc.qubits())}

-    def expval(self, observable):
+    def expval(self, observable, shot_range=None, bin_size=None):
```
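For context, this matches the signature PennyLane's base `QubitDevice` expects. A minimal sketch of what the two new keyword arguments mean, assuming `samples` is a 1-D NumPy array of eigenvalue samples for the observable (function name is illustrative, not the plugin's code):

```python
import numpy as np

def expval_from_samples(samples, shot_range=None, bin_size=None):
    # shot_range selects a sub-range of the raw shots, e.g. (0, 500)
    if shot_range is not None:
        samples = samples[slice(*shot_range)]
    # bin_size averages the selected shots in groups, one value per bin
    if bin_size is not None:
        return samples.reshape(-1, bin_size).mean(axis=1)
    return samples.mean()

# e.g. expval_from_samples(np.array([1, -1, 1, 1]), bin_size=2) -> array([0., 1.])
```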
When parametric compilation is enabled, the expval results from all devices (QVM, QPU, WFS) are calculated in the same way. However, when parametric compilation is disabled on the QPU device, we take a different approach to calculating the expval. Why does this differentiation exist, and can we consolidate the implementation across all devices?
Honestly, I don't know why we are doing this. It seems like we are creating and executing a new program from scratch, which is not very efficient... @josh146 do you know why we don't use the generated `self._samples` to compute the expectation value?
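For reference, reusing the samples could look something like this sketch, assuming `samples` holds the 0/1 readout outcomes with shape `(shots, num_wires)` and the observable is a single Pauli acting on `wire_index` (both names are illustrative):

```python
import numpy as np

def expval_from_bits(samples, wire_index):
    # samples: 0/1 readout outcomes, shape (shots, num_wires)
    bits = samples[:, wire_index]
    # map computational-basis bits to Pauli eigenvalues: 0 -> +1, 1 -> -1
    return np.mean(1 - 2 * bits)
```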
Looking at the logic inside the `measure_observables` function:
https://github.com/PennyLaneAI/pennylane-forest/blob/14709333ed6bb4fa134340f524d89aea44f3cb56/pennylane_forest/qpu.py#L255
it seems to use different logic to run the quantum program than the one defined in the `generate_samples` method of the `QVMDevice` class:
https://github.com/PennyLaneAI/pennylane-forest/blob/14709333ed6bb4fa134340f524d89aea44f3cb56/pennylane_forest/qvm.py#L230
Maybe we need to override this method in the `QPUDevice` class?
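Something like the following hypothetical override, which would keep a single execution path (class and method names taken from the links above; the body is an assumption, not the plugin's actual code):

```python
class QPUDevice(QVMDevice):
    def generate_samples(self):
        # Hypothetical: defer to the QVMDevice execution path so both devices
        # run the compiled program the same way, and keep any QPU-specific
        # post-processing (e.g. readout calibration) in this method.
        return super().generate_samples()
```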
I took a brief look at doing this while refactoring the devices, but quickly ran into a wall when I realized that the logic in `expval` depends on the `observable` parameter to generate the right `PauliTerm` for the `Experiment` passed to `measure_observables`.
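For illustration, the dependency looks roughly like this: the observable's name and wires are what determine the `PauliTerm`. The helper below is hypothetical (not the plugin's actual code), using the `wiring` dict built in `__init__`:

```python
from pyquil.paulis import PauliTerm

def to_pauli_term(observable, wiring):
    # Hypothetical helper: build a pyQuil PauliTerm from a PennyLane
    # observable. Tensor observables carry a list of names, e.g.
    # ["PauliZ", "PauliX"]; single observables carry a plain string.
    names = observable.name if isinstance(observable.name, list) else [observable.name]
    term = PauliTerm("I", 0)
    for name, wire in zip(names, observable.wires):
        # "PauliZ" -> "Z", mapped onto the physical qubit for this wire
        term *= PauliTerm(name.replace("Pauli", ""), wiring[wire])
    return term
```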
I would leave this as it is for now. We can create a follow-up issue saying that this should be optimised.
The QPU device allows operator estimation and uses `measure_observables` from pyQuil under the hood. This feature of the plugin is, however, not compatible with parametric compilation. Therefore, once parametric compilation is turned on, we fall back to PennyLane's implementation using `generate_samples`.
A major shortcoming, also hinted at above, is that when doing operator estimation we effectively generate samples twice: once through PennyLane's pipeline and once through `measure_observables`; the first of these should not happen.
See #45, which has just been re-opened.
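Condensed, the control flow described above looks roughly like this (attribute names are assumptions):

```python
def expval(self, observable, shot_range=None, bin_size=None):
    # Sketch of the behaviour described above, not the plugin's exact code.
    if self.parametric_compilation:
        # measure_observables is incompatible with parametric compilation,
        # so fall back to PennyLane's sample-based estimate
        return super().expval(observable, shot_range, bin_size)
    # otherwise: build a pyQuil Experiment for the observable and estimate
    # it with measure_observables (the operator-estimation path)
    ...
```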
Left some comments on the `README.rst` file.
Added more comments on the doc files.
Left some comments in the code. Still need to look at the tests.
Please run …
Co-authored-by: Albert Mitjans <a.mitjanscoma@gmail.com>
I just realised that the GitHub workflows run the tests using the …, which is probably why some of the tests pass locally but fail here. I think we should change the following line in … to …. This line is duplicated under …; we should probably change the line in … as well.
Regarding the docs, could you try updating the …?
Update the minimum PennyLane version to 0.18 (the `ISWAP` gate was added in the 0.18 release).
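Presumably a one-line change in the requirements, along these lines (exact file and pins are assumptions):

```python
# setup.py (hypothetical excerpt)
requirements = ["pennylane>=0.18", "pyquil>=3.0.0"]
```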
Amazing job with the refactor!
Added some comments.
There was a timeout error in the ….
Approved!
Here are the changes required to get this plugin onto pyQuil v3! I'm including some of my own targeted questions as comments below.