Dta2-255 consistent naming #139
Conversation
🚀 Code Coverage
Hey @AthenaCaesura, I just spotted that you have a typo in
@mstechly 😆 Not the first time that's happened.
I think we might want to move this to problem_embeddings instead of problem_ingestion because it is generating a circuit that encodes the problem. I imagine that problem ingestion in this context would be logic for choosing values of n, a, b, and c for some use case.
I agree! We haven't discussed problem_ingestion vs problem_embedding in detail yet. But the difference I see is:
ingestion - code that converts a classical problem instance into problem instance data for embedding (e.g. pyscf converting the mol geometries into an active space truncated second quantized Hamiltonian or predicting the condition number or norms of the various matrices and vectors in a diff eq problem instance)
embedding - code that converts classical problem instance data into quantum operations (e.g. block encoding the Hamiltonian, oracles for the diff eq A matrix or the b state, trotter steps)
What I'm not sure of though is, based on this terminology, what is the best way to organize the directory structure.
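The ingestion/embedding split described above could be sketched roughly as follows. This is an illustrative toy, not benchq's actual API: the class and function names (`HamiltonianData`, `ingest_heisenberg_instance`, `embed_as_trotter_step`) are hypothetical, and a toy gate-string list stands in for a real circuit object.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class HamiltonianData:
    """Classical problem-instance data produced by ingestion (hypothetical)."""
    n_qubits: int
    terms: List[Tuple[str, float]]  # e.g. [("ZZ", 1.0), ("ZZ", 1.0)]


def ingest_heisenberg_instance(n_sites: int, coupling: float) -> HamiltonianData:
    """Ingestion: classical problem instance -> problem-instance data."""
    terms = [("ZZ", coupling) for _ in range(n_sites - 1)]
    return HamiltonianData(n_qubits=n_sites, terms=terms)


def embed_as_trotter_step(data: HamiltonianData, dt: float) -> List[str]:
    """Embedding: problem-instance data -> quantum operations
    (here a toy gate-string list stands in for a circuit object)."""
    return [f"exp(-i*{coeff * dt}*{pauli})" for pauli, coeff in data.terms]
```

Under this reading, ingestion never touches quantum operations and embedding never touches the original classical problem description, which would make the directory boundary easy to enforce.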
 self.surface_code_cycle_time_in_seconds = surface_code_cycle_time_in_seconds

-def get_hardware_resource_estimates(self, resource_info: ResourceInfo):
+def get_hardware_resource_estimates(self, resource_info):
Can we keep the type hint for resource_info?
Yes! Not sure why that got deleted.
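A minimal sketch of the restored annotation. `ResourceInfo` and the surrounding class are defined here only so the snippet is self-contained; the field names and the returned dict are illustrative, not benchq's actual model.

```python
from dataclasses import dataclass


@dataclass
class ResourceInfo:
    """Stand-in for benchq's resource-info container (fields are illustrative)."""
    n_logical_qubits: int
    n_t_gates: int


class HardwareModel:
    def __init__(self, surface_code_cycle_time_in_seconds: float) -> None:
        self.surface_code_cycle_time_in_seconds = surface_code_cycle_time_in_seconds

    # keeping the type hint makes the expected input explicit to callers
    def get_hardware_resource_estimates(self, resource_info: ResourceInfo) -> dict:
        return {"logical_qubits": resource_info.n_logical_qubits}
```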
Instead of using kW/kJ everywhere, would it be better to use W/J? My guess is that using W/J will be less confusing, especially since the decoder energy and power are usually much less than 1 kJ or 1 kW.
I decided to keep the power and energy units as kW/kJ in the case of the IonTrap hardware model as that range is more readable for the typical output. However I did change the decoder to W/J.
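One way to get the best of both choices is to keep all quantities in base SI units internally and scale only at display time, so decoder-scale (sub-watt) and trap-scale (kilowatt) values both read naturally. This helper is a sketch; the thresholds and format are illustrative, not what benchq does.

```python
def format_power(watts: float) -> str:
    """Scale a power value (stored in watts) to a readable unit for display.

    Keeping the stored value in base SI units avoids unit confusion in
    the models; only the printed string changes scale.
    """
    if watts >= 1e3:
        return f"{watts / 1e3:.2f} kW"
    if watts >= 1:
        return f"{watts:.2f} W"
    return f"{watts * 1e3:.2f} mW"
```

The same pattern applies to energy (J/kJ/mJ), so the IonTrap model can print kW while the decoder prints W without either one storing scaled numbers.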
I'm wondering whether it really makes sense to put the Azure integration under footprint_estimators. In the context of DARPA QB, I've been interpreting footprint analysis to mean estimating physical costs based only on the gate count and number of qubits, without any information about the order of gates or which qubits they act on. Is this what you think the scope of the footprint_estimators module should be?
My impression is that Azure does actually look at the circuit itself, not just gate/qubit counts. (At least from an API point of view it does; it's less clear to me what information about the circuit gets used under the hood.)
According to the last presentation I saw from them, it seems that their tool is "compiling down to an instruction set architecture (ISA)" and using this in their resource estimates. I assume this ends up taking Clifford costs into account.
I think I would rather work on the level of resource estimation methods that require ordering vs. don't require ordering. But this is something we should chat about later on.
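The "counts only, no ordering" notion of footprint analysis discussed above could be sketched like this. The formulas (rotated surface-code patch size, one logical time step per T gate) are illustrative placeholders under common textbook assumptions, not benchq's actual estimator.

```python
def footprint_estimate(
    n_logical_qubits: int,
    n_t_gates: int,
    code_distance: int,
    cycle_time_s: float = 1e-6,
) -> dict:
    """Toy footprint-style estimate.

    Depends only on aggregate counts, never on gate ordering or qubit
    connectivity -- which is exactly what distinguishes it from
    estimators (like Azure's appears to be) that compile the circuit.
    """
    # a distance-d rotated surface-code patch uses ~2*d**2 physical qubits
    physical_per_logical = 2 * code_distance**2
    n_physical = n_logical_qubits * physical_per_logical
    # assume one logical time step (~d surface-code cycles) per T gate
    runtime_s = n_t_gates * code_distance * cycle_time_s
    return {"physical_qubits": n_physical, "runtime_s": runtime_s}
```

An ordering-aware estimator could not have this signature: it would need the circuit itself, which is a concrete way to draw the module boundary being debated here.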
One question on my mind is whether we can make the organization of the problem ingestion submodule a bit more logical. I'm not sure what the best solution is, but a few thoughts:
Overall I think these changes you made will help people more easily navigate benchq! Made a few suggestions. Happy to chat about them :-)
examples/ex_2_time_evolution.py
Outdated
 from benchq.problem_ingestion import get_vlasov_hamiltonian
-from benchq.problem_ingestion.hamiltonian_generation import (
+from benchq.problem_ingestion.hamiltonians.hiesenburg import (
Typo: "hiesenburg" to "heisenberg"
Done!
I would suggest that block encoding related material go into problem_embedding rather than problem_ingestion (see related comment above).
I would think that this fits better in problem_ingestion since it is not computing a circuit, but rather is computing the data required to make the circuit.
(Same comment as above) I would suggest that block encoding related material go into problem_embedding rather than problem_ingestion.
Done!
If possible, it would be great to add links to arxiv papers as references for the numbers used in these models. We should ask Simon if there is a specific reference he can give us for this.
I'm just going to link to the overleaf for now. 😆
Ok, Simon hasn't granted access for sharing the overleaf file. We should bug him about it this week.
It might be more appropriate to call this fowler_surface_code_model.py since I believe the numbers come from earlier work by Fowler. Also, we should add a reference for the numbers used.
Done! I just cited the openfermion package where all these equations can be found. I'm not sure there is published literature about their exact assumptions.
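For context on the kind of numbers being discussed, the Fowler-style surface-code scaling commonly quoted in the literature (and used in openfermion-style estimators) takes roughly this form. The constants A = 0.1 and p_th = 1e-2 are the widely cited round values, shown here as an assumption; the exact constants benchq/openfermion use may differ.

```python
def logical_error_rate(physical_error_rate: float, code_distance: int) -> float:
    """Fowler-style logical error rate per round for the surface code:

        p_L ~= A * (p / p_th) ** ((d + 1) / 2)

    with A = 0.1 and p_th = 1e-2 as commonly quoted constants
    (assumed here, not taken from benchq's source).
    """
    return 0.1 * (physical_error_rate / 1e-2) ** ((code_distance + 1) / 2)
```

This is the relation a reference would pin down: increasing the code distance by 2 suppresses the logical error rate by another factor of p/p_th.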
Maybe this should live in problem_embeddings (see similar suggestions above)?
Done!
@@ -11,7 +11,7 @@
 from orquestra.quantum.circuits import ZZ, Circuit
 from qiskit import Aer, execute

-from benchq.algorithms.lde_solver import (
+from benchq.algorithms.lde_solver.lde_solver import (
Should the folder be "lde_solvers" if we're intending to put more than one?
I'm just going to keep it as the singular lde solver for now. In that case, I'll probably change the name to time-marching.
Force-pushed from acabaaf to c9d8a71 (Compare)
Just one minor comment: hiesenberg.py should be heisenberg.py.