Fix release action #722
Conversation
# does not work with `tf.einsum`; are the `einsum_args` shapes wrong?
cond_state = _einsum_v1(eqn, *einsum_args)
This should work using `tf.einsum`, but there were issues with the shape of the `einsum_args` arrays. It looked like they were of the wrong shape, but I couldn't figure it out. It works (for now) using `_einsum_v1` as before, but it'd be nicer to simply use `tf.einsum` instead.
Unless the shape of the input tensors is changed, I think there's no way around the issue. You're correct in the sense that the shape is the problem; from the new einsum (`_einsum_v2`) code comments:
# Validate and sanitize the equation and resolve static input shapes, as
# opt_einsum requires that all shapes be a tuple of positive integers.
# Also remove ellipsis from the equation as opt_einsum will replace them
# with named labels. Then broadcasting between different shapes or ranks
# wouldn't work. (E.g. [1, 1, 2] wouldn't broadcast with [3, 1]).
Some of the inputs are, for example, of sizes [1, 6, 6, 6, 6, 6, 6] and [1, 6], which then won't broadcast and work with `einsum`.
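Below is a minimal sketch of the broadcasting limitation described above. It uses the example shapes from the quoted TensorFlow comment ([1, 1, 2] vs [3, 1]) rather than the actual strawberryfields tensors, and whether the call actually raises depends on the TensorFlow version, so the snippet only probes the behaviour:

```python
import tensorflow as tf

# Probe of the ellipsis-broadcasting limitation discussed above. The shapes are
# the ones from the quoted TF comment, not the actual strawberryfields tensors.
# Depending on the TF version, tf.einsum (_einsum_v2) may refuse to broadcast
# batch dimensions of different rank, whereas the private _einsum_v1 kernel
# broadcast them like NumPy.
a = tf.ones([1, 1, 2])  # batch dims (1, 1), free index of size 2
b = tf.ones([3, 1])     # batch dim  (3,),   free index of size 1

try:
    out = tf.einsum("...i,...j->...ij", a, b)
    print("tf.einsum broadcast fine, result shape:", out.shape)
except (ValueError, tf.errors.InvalidArgumentError) as err:
    print("tf.einsum could not broadcast these shapes:", err)
```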
Codecov Report
@@            Coverage Diff             @@
##            master     #722      +/-  ##
==========================================
+ Coverage    98.30%   98.33%   +0.02%
==========================================
  Files           78       78
  Lines         9614     9603      -11
==========================================
- Hits          9451     9443       -8
+ Misses         163      160       -3
Continue to review full report at Codecov.
@@ -324,7 +324,7 @@ def test_gradient(self, dim, n_mean, simple_embedding):
        This can be differentiated to give the derivative:
        d/dx E((s - x) ** 2) = 6 * n_mean + 2 * (1 - x).
        """
-       n_samples = 10000  # We need a lot of shots due to the high variance in the distribution
+       n_samples = 20000  # We need a lot of shots due to the high variance in the distribution
This takes ~3 seconds to run on my laptop (probably longer on CI). The test failed often with only 10000 samples; even more than 20000 samples would be better and would allow for a lower tolerance below, but it would take much longer to run.
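As a rough illustration of the shots-versus-tolerance trade-off (assuming a unit-variance distribution, not the actual test distribution), the standard error of a sample mean only shrinks as 1/sqrt(n_samples):

```python
import numpy as np

# Standard error of a Monte Carlo mean for an assumed unit-variance
# distribution: doubling the shots from 10000 to 20000 only tightens the
# estimate by a factor of sqrt(2) ~ 1.41, while roughly doubling the runtime.
sigma = 1.0  # placeholder standard deviation, not the real test distribution
for n_samples in (10_000, 20_000, 100_000):
    print(f"{n_samples:>7} samples -> standard error ~ {sigma / np.sqrt(n_samples):.5f}")
```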
Thanks @thisac! Hopefully this will make the release pipeline more streamlined 🚀 I left some comments just as potential future references.
@@ -27,7 +31,7 @@ jobs:
    - name: Run tests
      run: |
-       python -m pytest tests --tb=native
+       python -m pytest tests -p no:warnings --randomly-seed=42 --tb=native
For future reference I'm attaching this comment from PR #480:

Optimally, we would remove the `randomly-seed=42` from the tests, causing the seed to be set randomly each time, helping us discover potential bugs due to the stochastic nature of some tests. By resetting the seed before each test, we may miss failures that might be present. This might cause some headaches, though.
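A hypothetical example (not from this repo) of the trade-off in that quote: with a pinned seed the same sample is drawn on every run, while an unpinned seed lets rare statistical outliers surface as failures.

```python
import numpy as np

# Hypothetical stochastic test, not taken from strawberryfields. pytest-randomly
# reseeds the global numpy RNG before each test, so with --randomly-seed=42 this
# draw is identical on every run and the assertion is only ever checked against
# one fixed sample; with a fresh seed per run, the ~0.3% of draws outside the
# 3-sigma tolerance would occasionally fail and reveal how tight it really is.
def test_sample_mean_is_close_to_zero():
    samples = np.random.normal(loc=0.0, scale=1.0, size=20_000)
    assert abs(samples.mean()) < 3.0 / np.sqrt(samples.size)  # 3-sigma tolerance
```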
Context:

The automatic PyPI release action is failing due to failing tests when simply running the tests with `pytest tests` (instead of `make tests`).

Description of the Change:

- Set `SYMPY_USE_CACHE` to `"no"` to avoid an issue with sympy caching (see Set `SYMPY_USE_CACHE` environment variable to "no" #684 for details); a short sketch of this environment variable follows below.
- Replaced `_einsum_v1` with `tf.einsum` in all places but one (there is still an issue with `sf.backends.tfbackend.ops.conditional_state()` which I wasn't able to solve) and also removed the einsum test marked XFAIL (it was XPASSing).

Benefits:

- Tests pass when simply running `pytest tests`.

Possible Drawbacks:

None

Related GitHub Issues:

None
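As a small illustration of the `SYMPY_USE_CACHE` change (the PR sets the variable in the GitHub Actions workflow, not in Python code): sympy reads the variable when it is first imported, so it has to be in the environment before that import happens.

```python
import os

# SYMPY_USE_CACHE is read by sympy at import time, so it must be set before the
# first `import sympy` in the process. In this PR it is set in the workflow
# environment; this standalone snippet just shows the same effect from Python.
os.environ["SYMPY_USE_CACHE"] = "no"

import sympy  # caching is now disabled for this process

print("sympy", sympy.__version__, "with SYMPY_USE_CACHE =", os.environ["SYMPY_USE_CACHE"])
```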