
Fix release action #722

Merged
merged 4 commits into master from fix-release-action on Jun 20, 2022

Conversation


@thisac thisac commented Jun 16, 2022

Context:
The automatic PyPI release action is failing because the tests fail when run directly with pytest tests (instead of make tests).

Description of the Change:

  • Adds a random seed to the tests run by the upload action.
  • Sets the env var SYMPY_USE_CACHE to "no" to avoid an issue with sympy caching (see issue #684, Set SYMPY_USE_CACHE environment variable to "no", for details; a minimal sketch follows this list).
  • Fixes tolerances in a few tests that were failing too often (specifically when running without a fixed seed).
  • Replaces _einsum_v1 with tf.einsum in all places but one (there is still an issue with sf.backends.tfbackend.ops.conditional_state() which I wasn't able to solve), and removes the einsum test marked XFAIL (it was XPASSing).
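
A minimal sketch of the caching workaround from the second bullet, assuming only that sympy reads SYMPY_USE_CACHE once at import time (which is why the workflow sets it as an environment variable rather than in test code; the snippet is illustrative, not code from this PR):

    import os

    # SYMPY_USE_CACHE is read when sympy is first imported, so it must be
    # set before that import; in CI it is exported in the workflow instead.
    os.environ["SYMPY_USE_CACHE"] = "no"

    import sympy  # deliberately imported after setting the variable

    # With caching off, repeated simplifications recompute from scratch,
    # avoiding the caching issue referenced in #684 at some cost in speed.
    x = sympy.Symbol("x")
    print(sympy.simplify((x**2 - 1) / (x - 1)))  # prints: x + 1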

Benefits:

  • It should now be possible to run all tests directly using pytest tests.
  • The automatic PyPI release, which runs when a GitHub release is published, should now work.

Possible Drawbacks:
None

Related GitHub Issues:
None

@thisac thisac requested review from josh146 and sduquemesa June 16, 2022 22:19
Comment on lines +1459 to +1460
# does not work with `tf.einsum`; are the `einsum_args` shapes wrong?
cond_state = _einsum_v1(eqn, *einsum_args)
thisac (Contributor, Author)

This should work using tf.einsum, but there were issues with the shape of the einsum_args arrays. It looked like they were of the wrong shape, but I couldn't figure it out. It works (for now) using _einsum_v1 as before, but it'd be nicer to simply use tf.einsum instead.

sduquemesa (Contributor)

Unless the shape of the input tensors is changed, I think there's no way around the issue. You're correct that the shape is the problem; from the new einsum (_einsum_v2) code comments:

    # Validate and sanitize the equation and resolve static input shapes, as
    # opt_einsum requires that all shapes be a tuple of positive integers.
    # Also remove ellipsis from the equation as opt_einsum will replace them
    # with named labels. Then broadcasting between different shapes or ranks
    # wouldn't work. (E.g. [1, 1, 2] wouldn't broadcast with [3, 1]).

Some of the inputs have shapes such as [1, 6, 6, 6, 6, 6, 6] and [1, 6], which won't broadcast against each other, so einsum fails.
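
A hypothetical sketch of the kind of fallback this call site effectively relies on (the PR itself simply keeps the _einsum_v1 call; the helper name and shapes here are illustrative). _einsum_v1 is TF's private legacy einsum implementation:

    import tensorflow as tf
    # Private TF API: the legacy einsum implementation kept for this call site.
    from tensorflow.python.ops.special_math_ops import _einsum_v1

    def einsum_with_fallback(eqn, *einsum_args):
        # Hypothetical helper, not the PR's code: try the public tf.einsum
        # first and fall back to the v1 implementation when the v2 shape
        # validation rejects operands that v1 still accepts.
        try:
            return tf.einsum(eqn, *einsum_args)
        except (ValueError, tf.errors.InvalidArgumentError):
            return _einsum_v1(eqn, *einsum_args)

    # Well-shaped operands take the ordinary tf.einsum path unchanged.
    a = tf.ones([2, 3])
    b = tf.ones([3, 4])
    print(einsum_with_fallback("ij,jk->ik", a, b).shape)  # (2, 4)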


codecov bot commented Jun 16, 2022

Codecov Report

Merging #722 (606a50b) into master (d2d5a3c) will increase coverage by 0.02%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #722      +/-   ##
==========================================
+ Coverage   98.30%   98.33%   +0.02%     
==========================================
  Files          78       78              
  Lines        9614     9603      -11     
==========================================
- Hits         9451     9443       -8     
+ Misses        163      160       -3     
Impacted Files                                     Coverage Δ
strawberryfields/backends/tfbackend/circuit.py     96.62% <ø> (+0.18%) ⬆️
strawberryfields/backends/tfbackend/states.py      98.97% <ø> (+0.48%) ⬆️
strawberryfields/backends/tfbackend/ops.py         98.27% <100.00%> (+0.12%) ⬆️
strawberryfields/decompositions.py                 99.44% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@@ -324,7 +324,7 @@ def test_gradient(self, dim, n_mean, simple_embedding):
         This can be differentiated to give the derivative:
         d/dx E((s - x) ** 2) = 6 * n_mean + 2 * (1 - x).
         """
-        n_samples = 10000  # We need a lot of shots due to the high variance in the distribution
+        n_samples = 20000  # We need a lot of shots due to the high variance in the distribution
thisac (Contributor, Author)

This takes ~3 seconds to run on my laptop (probably longer on CI). The test failed often with only 10000 samples; even more than 20000 would be better and would allow for a lower tolerance below, but it would take much longer to run. A rough sketch of the tradeoff follows.
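
A rough illustration of the sample-size tradeoff, using a stand-in normal distribution rather than the test's actual sampler: the standard error of a sample mean shrinks only as 1/sqrt(n), so doubling the shots tightens the estimate by about a factor of sqrt(2) while doubling the runtime.

    import numpy as np

    rng = np.random.default_rng(42)
    for n in (10_000, 20_000, 40_000):
        # Stand-in for the high-variance samples drawn in test_gradient.
        samples = rng.normal(loc=0.0, scale=3.0, size=n)
        stderr = samples.std(ddof=1) / np.sqrt(n)
        print(f"n={n:>6}  mean={samples.mean():+.4f}  stderr={stderr:.4f}")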

@sduquemesa sduquemesa left a comment (Contributor)

Thanks @thisac! Hopefully this will make the release pipeline more streamlined 🚀 I left some comments just as potential future references.


@@ -27,7 +31,7 @@ jobs:

       - name: Run tests
         run: |
-          python -m pytest tests --tb=native
+          python -m pytest tests -p no:warnings --randomly-seed=42 --tb=native
sduquemesa (Contributor)

For future reference I'm attaching this comment from PR #480:

Ideally, we would remove --randomly-seed=42 from the test runs so that the seed is set randomly each time, helping us discover potential bugs in the stochastic tests. By fixing the seed before each test, we may miss failures that would otherwise surface. This might cause some headaches, though.
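
A toy example of that point, with made-up numbers: under a fixed per-test seed this assertion is deterministic, so a marginal tolerance never shows up as flaky in CI, whereas a fresh seed each run would fail it on roughly 1% of runs.

    import numpy as np

    def test_mean_close_to_zero():
        # pytest-randomly reseeds the global RNGs before each test; with
        # --randomly-seed=42 the seed, and hence the outcome, is identical
        # on every run.
        samples = np.random.normal(size=1000)
        # 0.08 is about 2.5 standard errors (1 / sqrt(1000) ~= 0.0316), so
        # with a fresh seed this would fail roughly 1% of the time.
        assert abs(samples.mean()) < 0.08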

@sduquemesa sduquemesa merged commit 9a9a352 into master Jun 20, 2022
@sduquemesa sduquemesa deleted the fix-release-action branch June 20, 2022 20:32