
Enable site cube input to ConstructReliabilityCalibrationTables #1667

Merged
15 commits merged into metoppv:master from the rel_table branch on Feb 23, 2022

Conversation

lucyleeow
Contributor

@lucyleeow lucyleeow commented Feb 8, 2022

Closes #1658

  • ConstructReliabilityCalibrationTables accepts site cubes as well as grid cubes
  • Current tests pass
  • Add tests for site functionality

Testing:

  • Ran tests and they passed OK
  • Added new tests for the new feature(s)

@lucyleeow
Contributor Author

Note: tests are currently failing due to a docs issue, fixed in #1668.

@tjtg
Contributor

tjtg commented Feb 9, 2022

#1668 is merged and I've re-run the CI tasks to pick up the fixes from that PR.

@codecov

codecov bot commented Feb 10, 2022

Codecov Report

Merging #1667 (c7ce213) into master (4ab4120) will decrease coverage by 0.04%.
The diff coverage is 100.00%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #1667      +/-   ##
==========================================
- Coverage   98.20%   98.16%   -0.04%     
==========================================
  Files         110      110              
  Lines       10173    10181       +8     
==========================================
+ Hits         9990     9994       +4     
- Misses        183      187       +4     
Impacted Files Coverage Δ
improver/calibration/reliability_calibration.py 98.79% <100.00%> (-1.21%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4ab4120...c7ce213. Read the comment docs.

@lucyleeow
Contributor Author

Note: there is a PR against this PR that converts the unit tests to pytest: lucyleeow#1

@lucyleeow lucyleeow changed the title WIP Enable site cube input to ConstructReliabilityCalibrationTables Enable site cube input to ConstructReliabilityCalibrationTables Feb 22, 2022
@lucyleeow lucyleeow added the MO review required PRs opened by non-Met Office developers that require a Met Office review label Feb 22, 2022
@lucyleeow
Contributor Author

lucyleeow commented Feb 22, 2022

This is ready for review now.

Note: the conversion of the unit tests to pytest was done in this PR and has already been reviewed by myself, @benowen-bom and @bayliffe.

Three tests have been parametrized in test_ConstructReliabilityCalibrationTables.py to test both grid and spot cubes. Because pytest.mark.parametrize does not accept fixtures (see the long-standing issue pytest-dev/pytest#349), I have used request.getfixturevalue. There is another option available, but it requires a third-party package and I did not wish to add another package to the environment. Happy to amend how this is handled.

cc @tjtg
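The request.getfixturevalue workaround mentioned above can be sketched as follows; the fixture names and returned values here are hypothetical stand-ins for the real grid/spot cube fixtures:

```python
import pytest


@pytest.fixture
def forecast_grid():
    # Hypothetical stand-in for the real grid-cube fixture.
    return ("grid", (3, 4))


@pytest.fixture
def forecast_spot():
    # Hypothetical stand-in for the real spot-cube fixture.
    return ("spot", (5,))


# Fixtures cannot be passed directly to parametrize, so their names are
# passed as plain strings and resolved by name at test time.
@pytest.mark.parametrize("input_cube", ["forecast_grid", "forecast_spot"])
def test_kind_matches_name(input_cube, request):
    cube = request.getfixturevalue(input_cube)
    assert cube[0] in input_cube
```

The cost of this pattern is that the link between the string parameter and the fixture it names is invisible to static tooling, which is the messiness discussed in the review comments below.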



@pytest.mark.parametrize(
"input_cube, expected_shape",
Contributor


This works OK, but the need to use string parameters and then getfixturevalue inside the test function is a little messy.
As mentioned in another comment, it'd be nice if fixtures could be used directly in a parametrize, but that doesn't exist.

The main alternative I can think of would be to create a parametrized fixture which gathers the relevant other fixtures together.
For example:

RelTableInputs = namedtuple("RelTableInputs", ["forecast", "truth", "expected_shape"])

@pytest.fixture(params=["grid", "spot"])
def create_rel_table_inputs(
    request, forecast_grid, forecast_spot, truth_grid, truth_spot,
    expected_table_shape_grid, expected_table_shape_spot,
):
    if request.param == "grid":
        return RelTableInputs(
            forecast=forecast_grid, truth=truth_grid, expected_shape=expected_table_shape_grid
        )
    return RelTableInputs(
        forecast=forecast_spot, truth=truth_spot, expected_shape=expected_table_shape_spot
    )

This is more code to set up, but it would avoid duplicated lines between this test function and others below that want to test with both grid and site.
I'm not sure which is better overall - interested in others' opinions about this.

Contributor


This has thrown me for six. Who knew I could parameterize the fixture rather than the test!!! Okay, so lots of people knew, but not me.

I implemented this to play with it and I think it is rather elegant, though it is a little opaque: when reviewing the tests there is no evidence that they are parameterized without sight of the fixture. That said, I would adopt Tom's suggestion, perhaps noting in the docstring that create_rel_table_inputs is a parameterized fixture.

For my own reference in future, in this case it looks like this:

def test_valid_inputs(create_rel_table_inputs, expected_attributes):
    """Tests correct reliability cube generated. Parameterized using create_rel_table_inputs fixture."""
    forecast_1 = create_rel_table_inputs.forecast[0]
    forecast_slice = next(forecast_1.slices_over("air_temperature"))
    result = Plugin()._create_reliability_table_cube(
        forecast_slice, forecast_slice.coord(var_name="threshold")
    )
    assert isinstance(result, Cube)
    assert result.shape == create_rel_table_inputs.expected_shape
    assert result.name() == "reliability_calibration_table"
    assert result.attributes == expected_attributes

Contributor Author

@lucyleeow lucyleeow Feb 22, 2022


Thanks, I think this may be cleaner and result in less code overall.

Contributor Author


I noticed that fixture parametrization made the tests slightly slower, so I timed them out of curiosity:

Parametrization:

0.04s setup    improver_tests/calibration/reliability_calibration/test_ConstructReliabilityCalibrationTables.py::test_prb_table_values[grid]
0.03s setup    improver_tests/calibration/reliability_calibration/test_ConstructReliabilityCalibrationTables.py::test_prb_table_values[spot]

Fixture parametrization:

0.04s setup    improver_tests/calibration/reliability_calibration/test_ConstructReliabilityCalibrationTables.py::test_prb_table_values[grid]
0.04s setup    improver_tests/calibration/reliability_calibration/test_ConstructReliabilityCalibrationTables.py::test_prb_table_values[spot]
0.02s call     improver_tests/calibration/reliability_calibration/test_ConstructReliabilityCalibrationTables.py::test_prb_table_values[spot]
0.02s call     improver_tests/calibration/reliability_calibration/test_ConstructReliabilityCalibrationTables.py::test_prb_table_values[grid]

It doesn't make enough of a difference to matter but just thought I would share.

aux_coords_and_dims.append((reliability_name_coord, 0))
spatial_coords = [forecast.coord(axis=dim).name() for dim in ["x", "y"]]
Contributor


These changes to the non-test code look OK to me.
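The quoted non-test line works for both grid and site cubes because iris looks coordinates up by axis, and only the coordinate names differ between the two cube types. A minimal sketch of that idea, using hypothetical FakeCube/FakeCoord stand-ins rather than real iris cubes:

```python
# Hypothetical minimal stand-ins for iris's Cube.coord(axis=...) lookup.
class FakeCoord:
    def __init__(self, name):
        self._name = name

    def name(self):
        return self._name


class FakeCube:
    def __init__(self, axis_to_name):
        self._axis_to_name = axis_to_name

    def coord(self, axis):
        return FakeCoord(self._axis_to_name[axis])


# Gridded cubes carry projection x/y coordinates; site (spot) cubes
# carry longitude/latitude, but the lookup by axis is identical.
grid = FakeCube({"x": "projection_x_coordinate", "y": "projection_y_coordinate"})
spot = FakeCube({"x": "longitude", "y": "latitude"})

for forecast in (grid, spot):
    # The line under review, unchanged:
    spatial_coords = [forecast.coord(axis=dim).name() for dim in ["x", "y"]]
    print(spatial_coords)
```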

@bayliffe bayliffe self-assigned this Feb 22, 2022
bayliffe
bayliffe previously approved these changes Feb 22, 2022
Contributor

@bayliffe bayliffe left a comment


This looks like a nice simple bit of re-engineering. I'm happy with it as it is, but I like Tom's test suggestion and have wittered on about that in a comment; so happier still with that change implemented.




@bayliffe bayliffe assigned lucyleeow and unassigned bayliffe Feb 22, 2022
@lucyleeow
Contributor Author

@tjtg changes made; are you happy to approve?

@lucyleeow lucyleeow assigned tjtg and unassigned lucyleeow Feb 23, 2022
Contributor

@tjtg tjtg left a comment


I'm happy with the tests now. Non-test code is good as previously mentioned.

@lucyleeow lucyleeow merged commit 2241786 into metoppv:master Feb 23, 2022
@lucyleeow
Contributor Author

Thanks @tjtg @bayliffe !

@lucyleeow lucyleeow deleted the rel_table branch February 23, 2022 03:17
MoseleyS added a commit to MoseleyS/improver that referenced this pull request Mar 2, 2022
* master:
  MOBT127 tiny tweak to filter_realizations (metoppv#1682)
  Mobt 160 ecc masked data (metoppv#1662)
  Convert test_ManipulateReliabilityTable to pytest (metoppv#1678)
  Alter spot extract cli (metoppv#1666)
  Enable site cube input to ConstructReliabilityCalibrationTables (metoppv#1667)
  Corrects example of a CLI in the Read the Doc documentation (metoppv#1673)
  MOBT-211: mosg__model_run attribute handling in weather symbols (metoppv#1670)

# Conflicts:
#	improver_tests/wxcode/wxcode/test_WeatherSymbols.py
fionaRust added a commit that referenced this pull request Mar 9, 2022
* master:
  Remove flawed interpretation of the blend_time coordinate. (#1690)
  DOC: Removal of unnecessary exclusions in sphinx apidoc build (#1689)
  MOBT127 tiny tweak to filter_realizations (#1682)
  Mobt 160 ecc masked data (#1662)
  Convert test_ManipulateReliabilityTable to pytest (#1678)
  Alter spot extract cli (#1666)
  Enable site cube input to ConstructReliabilityCalibrationTables (#1667)
  Corrects example of a CLI in the Read the Doc documentation (#1673)
  MOBT-211: mosg__model_run attribute handling in weather symbols (#1670)
MoseleyS pushed a commit to MoseleyS/improver that referenced this pull request Aug 22, 2024
…ppv#1667)

* wip

* rel cube

* rel cube spot + test

* add tests

* linting

* update forecast grid name

* Convert reliability calibration tests to pytest

* Spelling fix

Co-authored-by: Lucy Liu <jliu176@gmail.com>

* Review changes

* Remove leftover prints

* param tests

* standardise param order

* use fixture parametrization

* linting

Co-authored-by: Tom Gale <tom.gale@bom.gov.au>