
[Feature] Additional features for error mitigation protocols #56

Open · wants to merge 3 commits into base: main

Conversation

@sungwoo-pasqal (Collaborator) commented Feb 24, 2025

Closes #30

  • Adds exponential decay extrapolation to the analog ZNE method
  • Adds a Hamming-distance cutoff that increases sparsity in the MTHREE method
  • Adds finite random sampling to the Twirl method

@sungwoo-pasqal sungwoo-pasqal self-assigned this Feb 24, 2025
@sungwoo-pasqal sungwoo-pasqal changed the title [Feature] Additional features for readout mitigation protocols [Feature] Additional features for error mitigation protocols Feb 25, 2025
@sungwoo-pasqal sungwoo-pasqal marked this pull request as ready for review February 27, 2025 15:36
@RolandMacDoland (Collaborator) left a comment

Thanks a lot @sungwoo-pasqal, it feels much cleaner :) A few comments from me, but generally I think it would be good to integrate the subspace redistribution logic into the function that creates the confusion matrix. Also, it is probably best to use hamming wherever possible: in other parts of Qadence, ham is a shorthand for Hamiltonian, which might cause confusion.

@@ -29,13 +31,13 @@ model = QuantumModel(
circuit=circuit, observable=observable, backend=BackendName.PULSER, diff_mode=DiffMode.GPSR
)

for data_points in [2,5]:
options = {"stretches": torch.linspace(1, 3, data_points)}
for data_points, zne_type in zip([2,5], ["poly", "exp"]):
Collaborator:

Does this presuppose the number of data points for each ZNE method?

Collaborator (author):

Yes. The exponential method needs at least three data points, whereas poly needs two. I'll state that in the docs.
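To illustrate the data-point requirement being discussed, here is a minimal sketch of exponential-decay extrapolation assuming a scipy-style curve fit; `exp_decay` and `extrapolate_exp` are illustrative names, not the Qadence API.

```python
# Minimal sketch of exponential-decay ZNE extrapolation.
# exp_decay / extrapolate_exp are illustrative names, not Qadence API.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    # Three free parameters, so the fit needs at least three data points;
    # a degree-1 polynomial fit only needs two.
    return a * np.exp(-b * x) + c

def extrapolate_exp(noise_levels: np.ndarray, values: np.ndarray) -> float:
    popt, _ = curve_fit(exp_decay, noise_levels, values, maxfev=5000)
    a, b, c = popt
    return float(a + c)  # extrapolated value at zero noise (x = 0)

levels = np.array([1.0, 2.0, 3.0, 4.0])
vals = 0.6 * np.exp(-0.5 * levels) + 0.2  # synthetic noiseless decay
zero_noise = extrapolate_exp(levels, vals)
```

On noiseless synthetic data the fit recovers the zero-noise limit a + c; with real shot noise, more than the minimum number of points is advisable.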

confusion_matrix_subspace = normalized_subspace_kron(noise_matrices, observed_prob.nonzero()[0])

# we consider a small hamming distance for this method and set it to 3
confusion_matrix_subspace_ham = ham_dist_redistribution(
Collaborator:

Instead of another function, would it make sense to integrate it into normalized_subspace_kron? If I understand correctly, you use this latter function to generate a confusion matrix. Maybe an extra ham_dist parameter could trigger the subspace redistribution without having to rely on logic defined elsewhere?
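A rough sketch of the merged interface the comment proposes, assuming numpy arrays; `subspace_confusion` and `hamming` are hypothetical names standing in for `normalized_subspace_kron` and the redistribution logic.

```python
import numpy as np
from typing import Optional

def hamming(i: int, j: int) -> int:
    # Number of differing bits between two basis-state indices.
    return bin(i ^ j).count("1")

def subspace_confusion(conf_matrix: np.ndarray,
                       ham_dist: Optional[int] = None) -> np.ndarray:
    # Hypothetical merged interface: when ham_dist is given, keep only
    # entries within ham_dist bit flips of the column index and
    # renormalize each column; otherwise return the matrix unchanged.
    if ham_dist is None:
        return conf_matrix
    n = conf_matrix.shape[0]
    out = np.zeros_like(conf_matrix, dtype=float)
    for col in range(n):
        rows = [r for r in range(n) if hamming(r, col) <= ham_dist]
        col_sum = conf_matrix[rows, col].sum()
        if col_sum > 0:
            out[rows, col] = conf_matrix[rows, col] / col_sum
    return out

conf = np.full((4, 4), 0.25)  # toy 2-qubit confusion matrix
restricted = subspace_confusion(conf, ham_dist=1)
```

With a single optional parameter, callers that never need the cutoff see no API change.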

confusion_matrix_subspace_ham = ham_dist_redistribution(
confusion_matrix_subspace, ham_dist=3
)
p_corr_mthree_gmres_ham = gmres(confusion_matrix_subspace_ham, input_csr.toarray())[0]
Collaborator:

Hm, these variable names are a bit hard to understand. Could you please use more meaningful names or comment what you're doing here?

@@ -27,6 +30,42 @@ def zne(noise_levels: Tensor, zne_datasets: list[list]) -> Tensor:
return torch.tensor(poly_fits)


def exp_func(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
Collaborator:

Suggested change
def exp_func(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
def exp_fn(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:

Nothing major, but there are a few places in Qadence where we use fn for functions.

elif zne_type == "poly" or zne_type is None:
zne_func = zne
else:
raise ValueError("analog zne supports only polynomial or exponential extrapolation.")
Collaborator:

Suggested change
raise ValueError("analog zne supports only polynomial or exponential extrapolation.")
raise ValueError(f"Analog ZNE supports only polynomial or exponential extrapolation. Got {zne_type}.")

@@ -52,6 +52,34 @@ def normalized_subspace_kron(noise_matrices: npt.NDArray, subspace: npt.NDArray
return conf_matrix


def ham_dist_redistribution(conf_matrix: npt.NDArray, ham_dist: int) -> npt.NDArray:
Collaborator:

If you use npt.NDArray here, I'd suggest to keep it consistent elsewhere too.

@@ -237,6 +265,15 @@ def mitigation_minimization(

elif optimization_type == ReadOutOptimization.MTHREE:
confusion_matrix_subspace = normalized_subspace_kron(noise_matrices, p_raw.nonzero()[0])

ham_dist = options.get("ham_dist")
Collaborator:

Suggested change
ham_dist = options.get("ham_dist")
ham_dist = options.get("ham_dist", None)

@@ -237,6 +265,15 @@ def mitigation_minimization(

elif optimization_type == ReadOutOptimization.MTHREE:
confusion_matrix_subspace = normalized_subspace_kron(noise_matrices, p_raw.nonzero()[0])

ham_dist = options.get("ham_dist")
if ham_dist:
Collaborator:

Suggested change
if ham_dist:
if ham_dist is not None:


except RuntimeError:
print("Warning: Optimal parameters not found, using last known value.")
exp_fits.append(dataset_np[-1])
Collaborator:

Why do we choose to append the last value, and not something like NaN?
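The NaN alternative raised here could look like the following sketch; `fit_one` is a hypothetical stand-in for the curve-fit call, and NaN-aware reductions then skip failed fits instead of silently reusing stale data.

```python
import numpy as np

def fit_one(values: np.ndarray) -> float:
    # Hypothetical stand-in for the curve fit: raises like scipy's
    # curve_fit does when optimal parameters are not found.
    if len(values) < 3:
        raise RuntimeError("Optimal parameters not found")
    return float(values[-1])

exp_fits = []
for dataset in [np.array([0.1, 0.2, 0.3]), np.array([0.5])]:
    try:
        exp_fits.append(fit_one(dataset))
    except RuntimeError:
        # Record NaN instead of the last known value, so the failed
        # fit stays visible downstream.
        exp_fits.append(float("nan"))

mean_fit = np.nanmean(exp_fits)  # NaN-aware mean skips the failure
```

The trade-off: NaN propagates loudly through plain reductions, whereas the last-value fallback keeps the pipeline running but can mask a failed extrapolation.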

if zne_type == "exp":
zne_func = zne_exp
elif zne_type == "poly" or zne_type is None:
zne_func = zne
Collaborator:

If this is doing a poly fit, can we call this zne_poly rather than just zne?


if partial_sum > 0: # Avoid division by zero
for row in valid_rows:
redistributed_matrix[row, col] = conf_matrix[row, col] / partial_sum
Collaborator:

Isn't it possible to just get the submatrix, e.g. redistributed_matrix = conf_matrix[valid_rows, col] or something?

This one is the M3 subspace confusion matrix with increased Hamming distance, correct? I was under the impression it would be a square matrix with both rows and columns matching and subsampled.

I don't think the partial sum can add to 0: entries of the confusion matrix are positive.
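The submatrix idea can indeed be one fancy-indexed assignment; the variable names below mirror the PR's, but the matrix itself is a toy stand-in.

```python
import numpy as np

# Toy stand-ins for the PR's conf_matrix / valid_rows / col.
conf_matrix = np.array([[0.8, 0.1],
                        [0.2, 0.9]])
valid_rows = np.array([0, 1])
col = 0

# One vectorized slice replaces the explicit per-row loop.
partial_sum = conf_matrix[valid_rows, col].sum()
redistributed_matrix = np.zeros_like(conf_matrix)
redistributed_matrix[valid_rows, col] = conf_matrix[valid_rows, col] / partial_sum
```

NumPy integer-array indexing reads and writes the selected entries in one shot, which is both shorter and faster than iterating over rows in Python.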


confusion_matrix_subspace = normalized_subspace_kron(noise_matrices, observed_prob.nonzero()[0])

confusion_matrix_subspace_ham = ham_dist_redistribution(confusion_matrix_subspace, 3)
Collaborator:

I think you should first get the rows surrounding the non-zero locations, expand by Hamming distance, and then call normalized_subspace_kron over these locations.
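The expand-first ordering suggested here could be sketched as follows; `expand_by_hamming` is a hypothetical helper, and the expanded index set would then be handed to `normalized_subspace_kron`.

```python
from itertools import combinations

def expand_by_hamming(indices: set, n_bits: int, ham_dist: int) -> set:
    # Hypothetical helper: grow the observed basis-state indices by all
    # states within ham_dist bit flips, before building the subspace.
    expanded = set(indices)
    for idx in indices:
        for d in range(1, ham_dist + 1):
            for flips in combinations(range(n_bits), d):
                mask = 0
                for b in flips:
                    mask |= 1 << b          # set each flipped bit
                expanded.add(idx ^ mask)    # XOR applies the bit flips
    return expanded

observed = {0b00}  # non-zero locations of the observed distribution
subspace = expand_by_hamming(observed, n_bits=2, ham_dist=1)
```

Building the subspace from the expanded set yields the square, consistently indexed matrix the earlier comment expected.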

# Random sample gate strings for scalability
twirl_samples = options.get("twirl_samples")
if isinstance(twirl_samples, int) and twirl_samples > 0:
twirls = random.sample(twirls, twirl_samples)
Collaborator:

It would be clever to not sample from the whole twirl space, but rather generate only as many as needed, as that would be scalable. Not sure if this is straightforward here, feel free to ignore.
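Generating only as many twirl strings as needed could look like this sketch; `sample_twirls` is a hypothetical helper, assuming twirls can be represented as bitstrings and that twirl_samples does not exceed 2**n_qubits.

```python
import random

def sample_twirls(n_qubits: int, n_samples: int, seed=None):
    # Hypothetical on-demand generator: draw distinct random bitstrings
    # instead of enumerating all 2**n_qubits twirl strings up front.
    # Assumes n_samples <= 2**n_qubits, otherwise the loop cannot finish.
    rng = random.Random(seed)
    seen = set()
    while len(seen) < n_samples:
        seen.add(tuple(rng.randint(0, 1) for _ in range(n_qubits)))
    return list(seen)

twirls = sample_twirls(n_qubits=4, n_samples=5, seed=0)
```

Memory then scales with the number of samples rather than with the full twirl space, which is the scalability gain the comment is after.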

Successfully merging this pull request may close these issues.

[Feature] Additional features to readout mitigation protocols
3 participants