
Convert reliability calibration tests to pytest #1

Merged · 4 commits · Feb 21, 2022

Conversation

@tjtg commented Feb 14, 2022

Conversion of the reliability calibration tests to use pytest. This is intended so that the tests can be extended to cover processing of site data as well as gridded data, for example via pytest parametrize.

The changes in this PR are just conversion from unittest classes to pytest fixtures and functions. Some further refactoring is recommended to take best advantage of pytest features and remove duplication.

Testing:

  • Ran tests and they passed OK
  • Added new tests for the new feature(s)

@codecov-commenter commented Feb 14, 2022

Codecov Report

❗ No coverage uploaded for pull request base (rel_table@4f9c0aa).
The diff coverage is n/a.


@@             Coverage Diff              @@
##             rel_table       #1   +/-   ##
============================================
  Coverage             ?   98.16%           
============================================
  Files                ?      110           
  Lines                ?    10181           
  Branches             ?        0           
============================================
  Hits                 ?     9994           
  Misses               ?      187           
  Partials             ?        0           


Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 4f9c0aa...d8f53ce.

@lucyleeow (Owner) left a comment:

LGTM!

# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""Fixtures for construct and aggregate reliability calibration tests."""
@lucyleeow (Owner):

For the record, do we intend this to contain fixtures for ManipulateReliabilityTable and ApplyReliabilityCalibration too?

@tjtg (Author):

I've removed the specific wording in fb4c1a3


Each forecast cube in conjunction with the contemporaneous truth cube
will be used to produce a reliability calibration table. When testing
the process method here we expect the final reliability calibration
@lucyleeow (Owner):

Maybe specify the process method of ConstructReliabilityCalibrationTables? I also wonder if this information may be better placed in test_ConstructReliabilityCalibrationTables?

@tjtg (Author):

Comment is moved to test_ConstructReliabilityCalibrationTables in fb4c1a3.

forecast_data = forecast_grid.data[0, ...]
spot_probabilities = forecast_data.reshape((2, 9))
forecasts_spot_list = CubeList()
for day in range(5, 7):
@lucyleeow (Owner):

Note to self - I think I should probably change this to be the same time as in the grids above
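
A minimal numpy sketch of the reshape step quoted in this thread, with assumed shapes (two thresholds on a 3x3 grid; the actual shapes are not stated in this excerpt): flattening the spatial dimensions turns each grid point into a "site", giving a (2, 9) spot-probability array.

```python
# Minimal sketch of the quoted reshape step, with assumed shapes: a gridded
# forecast slice holding two thresholds on a 3x3 grid is flattened so that
# each grid point becomes a "site".
import numpy as np

forecast_data = np.linspace(0.0, 1.0, 18).reshape((2, 3, 3))  # (threshold, y, x)
spot_probabilities = forecast_data.reshape((2, 9))  # (threshold, site)

# Each row of spot probabilities is the flattened grid for one threshold.
assert spot_probabilities.shape == (2, 9)
assert np.array_equal(spot_probabilities[0], forecast_data[0].ravel())
```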

Review thread on improver_tests/calibration/utilities/conftest.py (outdated, resolved).
@bayliffe left a comment:

Thanks @tjtg, this looks good, including the slightly expanded coverage for spot inputs. Happy for this to be the basis of Lucy's changes.

@lucyleeow (Owner):

The CI failure is unrelated; it is due to an incorrect intersphinx URL in the docs.

@benowen-bom left a comment:

LGTM. Only one minor comment.

@lucyleeow (Owner):

I have double-checked in the improver repo (metoppv#1675) and have no idea why the tests are failing here. I will try pushing an empty commit; otherwise, are you happy to merge, @tjtg?

lucyleeow pushed a commit that referenced this pull request on Feb 21, 2022
* Remove reference to LimitedAttributeDict

This was used for type annotations and relied on a "private" module.

* Add conda env for Iris 3.0

* Remove extract_strict

No longer available in Iris 3.0. Also removed "strict" keyword arg to
extract.

* Add Iris 3.0 conda env to testing

* Skip failing test for now

* Mark test as xfail rather than skip

* Drop Iris 2.* envs from the test matrix

* Update setup.cfg and docs to Iris 3.0

* Add Python 3.8 environment

* Use function instead of method in test

* Update scipy version and unpin cftime

* Remove nimrod patching for Iris

* Remove unused imports

* Update pysteps version.

Requires specification of the interpolation order, which was previously 
always 0 (nearest neighbour).

* Update checksums.

* Remove IMPROVER equalise attributes function.

Iris 3 includes functionality that equalises attributes in the way that 
we require, so the maintenance of our own method is no longer required.

* Remove Iris 2 environments with which IMPROVER is no longer backwards compatible.

* Add pytest-xdist to environments.

* Pin numpy version to 1.20.*

* Pin cf-units to 2.1.5

Version 3.0.0 was recently released and using it in IMPROVER results in
~40 unit test failures.

* Allow numpy versions < 1.21 in conda envs

Plus, replace "==" with "=" in pinning cf-units.

* Remove Iris 2 env

* Adjust cftime and cf-units versions in envs

* Make Python 3.8 env consistent with 3.7 env

* Minor edits to support use of cftime in datetime_to_iris_time function (#1)

* Minor edits to support use of cftime in datetime_to_iris_time function.

* Adjust type annotations

Co-authored-by: Daniel Mentiplay <daniel.mentiplay@bom.gov.au>

* Spot extract KGOs metoppv#1521 plus units attribute

Co-authored-by: benjamin.ayliffe <benjamin.ayliffe@metoffice.gov.uk>
Co-authored-by: gavinevans <gavin.evans@metoffice.gov.uk>
Co-authored-by: Tom Gale <tom.gale@bom.gov.au>
@tjtg (Author) commented Feb 21, 2022

I'm happy for this to get merged. The CI failures were previously due to the intersphinx links issue fixed in metoppv#1671.

As this is a PR-to-PR, the CI here will run using the proposed merge into lucyleeow/improver rel_table branch. It won't pick up changes from metoppv/improver master branch like a typical PR on the metoppv/improver repository would.

@tjtg (Author) commented Feb 21, 2022

The most recently added merge commits 06dee8d, 46bce91 and d8f53ce are making the diff of this PR significantly larger.

I'd recommend avoiding merges of metoppv master branch here to keep the history neat. Options are to merge metoppv master branch into rel_table or to rebase rel_table on latest master (and then I'll rebase this one on top).

@lucyleeow (Owner):

Yes, I wasn't thinking and I see that I have mucked up the history. I have merged metoppv master into rel_table, but it does not seem to be coming through.

@lucyleeow lucyleeow merged commit a25050e into lucyleeow:rel_table Feb 21, 2022
@lucyleeow (Owner):

Thanks @benowen-bom @tjtg !

5 participants