Fix release action #722
```diff
@@ -33,6 +33,10 @@
 from string import ascii_letters as indices_full
 
 import tensorflow as tf
+
+# only used for `conditional_state`; remove when working with `tf.einsum`
+from tensorflow.python.ops.special_math_ops import _einsum_v1
+
 import numpy as np
 from scipy.special import factorial
 from scipy.linalg import expm
@@ -53,19 +57,6 @@
 )
 from thewalrus.symplectic import is_symplectic, sympmat
 
-# With TF 2.1+, the legacy tf.einsum was renamed to _einsum_v1, while
-# the replacement tf.einsum introduced the bug. This try-except block
-# will dynamically patch TensorFlow versions where _einsum_v1 exists, to make it the
-# default einsum implementation.
-#
-# For more details, see https://github.com/tensorflow/tensorflow/issues/37307
-try:
-    from tensorflow.python.ops.special_math_ops import _einsum_v1
-
-    tf.einsum = _einsum_v1
-except ImportError:
-    pass
-
 max_num_indices = len(indices)
 
 ###################################################################
```
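The two hunks above swap the module-wide monkey-patch of `tf.einsum` for a plain import of the private `_einsum_v1` symbol, used only by `conditional_state`. Unlike the removed try/except block, the bare import will raise `ImportError` if a future TensorFlow release moves or drops `_einsum_v1`. As a hedged sketch (not part of this PR), a guarded variant could keep the module importable in that case:

```python
import tensorflow as tf

# Sketch only: prefer the legacy einsum implementation when the private
# symbol is available, and fall back to the public tf.einsum otherwise.
try:
    from tensorflow.python.ops.special_math_ops import _einsum_v1 as einsum
except ImportError:
    einsum = tf.einsum  # assumes tf.einsum can handle the shapes used elsewhere
```

Whether falling back to the public `tf.einsum` is acceptable depends on whether the shapes passed to it avoid the problem discussed in the review thread below.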
```diff
@@ -1464,7 +1455,9 @@ def conditional_state(system, projector, mode, state_is_pure, batched=False):
     einsum_args = [system, tf.math.conj(projector)]
    if not state_is_pure:
         einsum_args.append(projector)
-    cond_state = tf.einsum(eqn, *einsum_args)
+
+    # does not work with `tf.einsum`; are the `einsum_args` shapes wrong?
+    cond_state = _einsum_v1(eqn, *einsum_args)
```
Comment on lines +1459 to +1460:

This should work using

Reply:

Unless the shape of the input tensors is changed, I think there's no way around the issue. You're correct in the sense that the shape is the problem with the new `tf.einsum`: some of the inputs are, for example, of sizes [1, 6, 6, 6, 6, 6, 6] and [1, 6], and then won't broadcast and work with `tf.einsum`.
```diff
     if not batched:
         cond_state = tf.squeeze(cond_state, 0)  # drop fake batch dimension
     return cond_state
```
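To make the shape discussion in the thread above concrete, here is a small NumPy sketch of the kind of contraction `conditional_state` performs. The equation string and mode layout are assumptions made for illustration (the real `eqn` is built dynamically from the projected mode), but the tensor sizes match those quoted in the comment:

```python
import numpy as np

# Illustrative shapes from the review comment: a batched state with six
# mode indices plus a batch index, and a single-mode projector.
batch, cutoff = 1, 6
system = np.random.rand(batch, *([cutoff] * 6))   # shape [1, 6, 6, 6, 6, 6, 6]
projector = np.random.rand(batch, cutoff)          # shape [1, 6]

# Hypothetical equation: contract conj(projector) and projector against the
# two indices of the projected mode, keeping the batch and remaining modes.
eqn = "abcdefg,ab,ac->adefg"
cond_state = np.einsum(eqn, system, projector.conj(), projector)

print(cond_state.shape)  # (1, 6, 6, 6, 6) -- the projected mode is gone
```

NumPy has no trouble with these shapes; according to the comments above, `_einsum_v1` behaves the same way while the newer `tf.einsum` does not, which is why the diff calls `_einsum_v1` directly rather than reshaping the inputs.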
```diff
@@ -324,7 +324,7 @@ def test_gradient(self, dim, n_mean, simple_embedding):
         This can be differentiated to give the derivative:
         d/dx E((s - x) ** 2) = 6 * n_mean + 2 * (1 - x).
         """
-        n_samples = 10000  # We need a lot of shots due to the high variance in the distribution
+        n_samples = 20000  # We need a lot of shots due to the high variance in the distribution
```
Comment:

This takes ~3 seconds to run on my laptop (probably longer on CI). The test failed often with only 10000 samples; even more than 20000 would be better and would allow for a lower tolerance below, but would take much longer to run.
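The trade-off described in this comment is the usual Monte Carlo one: the spread of a sample-mean estimate shrinks only as 1/sqrt(n_samples), so doubling the shots from 10000 to 20000 buys roughly a factor of 1.4 in precision at twice the runtime. A stand-in sketch of that scaling (using a simple high-variance integer distribution, not the sampler actually used in the test):

```python
import numpy as np

rng = np.random.default_rng(0)
n_mean = 1.0

def estimator_spread(n_samples, n_trials=200):
    # Draw n_trials independent batches of shots from a stand-in distribution
    # with mean n_mean, and measure how much the per-batch sample mean varies.
    shots = rng.geometric(p=1.0 / (1.0 + n_mean), size=(n_trials, n_samples)) - 1
    return shots.mean(axis=1).std()

for n in (10000, 20000, 80000):
    print(n, estimator_spread(n))  # spread falls roughly as 1 / sqrt(n)
```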
```diff
         objectives = np.linspace(0.5, 1.5, dim)
         h = self.h_setup(objectives)
         A = np.eye(dim)
@@ -352,4 +352,4 @@ def test_gradient(self, dim, n_mean, simple_embedding):
 
         dcost_by_dn_expected = 6 * n_mean_by_mode + 2 * (1 - objectives)
 
-        assert np.allclose(dcost_by_dn, dcost_by_dn_expected, 0.1)
+        assert np.allclose(dcost_by_dn, dcost_by_dn_expected, 0.5)
```
Comment:

For future reference, I'm attaching this comment from PR #480: