
Convert unittest to pytest #478

Merged: 32 commits merged into NCAR:main on Oct 26, 2023

Conversation

Contributor

@cyschneck cyschneck commented Sep 21, 2023

PR Summary

Closes #462

PR Checklist

General

  • Make an issue if one doesn't already exist
  • Link the issue this PR resolves by adding closes #XXX to the PR description where XXX is the number of the issue.
  • Add a brief summary of changes to docs/release-notes.rst in a relevant section for the next unreleased release. Possible sections include: Documentation, New Features, Bug Fixes, Internal Changes, Breaking Changes/Deprecated
  • Add appropriate labels to this PR
  • Make your changes in a forked repository rather than directly in this repo
  • Open this PR as a draft if it is not ready for review
  • Convert this PR from a draft to a full PR before requesting reviewers
  • Passes pre-commit. To set it up locally, run pre-commit install from the top level of the repository. To run pre-commit manually, use pre-commit run --all-files, then re-add any changed files before committing and pushing again.
  • [n/a] If needed, squash and merge PR commits into a single commit to clean up commit history

@cyschneck
Contributor Author

cyschneck commented Sep 22, 2023

Original (unittest-based) test output via pytest test/ -x -v:

========================= 251 passed, 47 warnings in 240.58s (0:04:00) ==========================

Output after conversion to pytest, via pytest test/ -x -v:

========================= 273 passed, 40 warnings in 146.09s (0:02:26) ==========================

All original tests are maintained, with some additional tests added to preserve the original structure.

Additional warnings will be addressed as part of #477

@cyschneck cyschneck marked this pull request as ready for review September 22, 2023 18:01
@anissa111 anissa111 added the testing Issue related to testing label Sep 22, 2023
Member

@anissa111 anissa111 left a comment


This is good! I've left a few comments below. Also, I think we should be able to remove the flags: unittests line in .github/workflows/ci.yml since everything should be pure pytest now.

Inline review comments (resolved) on: test/test_stats.py (outdated), test/test_gradient.py (outdated), test/test_meteorology.py (outdated), test/test_stats.py
@cyschneck
Contributor Author

cyschneck commented Sep 25, 2023

========================= 251 passed, 40 warnings in 240.58s (0:04:00) ==========================
  • removed the flags: unittests line in .github/workflows/ci.yml
  • removed the commented-out from unittest import TestCase from test/test_stats.py
  • converted the classmethod setup (setUpClass()) into pytest fixtures
  • removed the closeClient() function; the client is now closed after a yield in the fixture (see the sketch below)
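
For reference, a minimal sketch of the yield-based setup/teardown pattern described in the last two bullets; the class, fixture, and test names here are hypothetical and not taken from this PR:

```python
import pytest
from dask.distributed import Client, LocalCluster


class TestWithDask:

    @pytest.fixture(scope="class")
    def client(self):
        # setup: what setUpClass() used to do
        cluster = LocalCluster(n_workers=1, threads_per_worker=1)
        client = Client(cluster)
        # hand the client to the tests in this class
        yield client
        # teardown: runs after the last test in the class,
        # replacing the separate closeClient() helper
        client.close()
        cluster.close()

    def test_client_is_running(self, client):
        assert client.status == "running"
```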

@cyschneck cyschneck self-assigned this Sep 25, 2023
@cyschneck cyschneck requested a review from anissa111 September 27, 2023 17:38
Member

@anissa111 anissa111 left a comment


In summary from our convo:

  • fixtures that just create class variables can likely be eliminated by defining those variables directly on the class
  • fixtures that read data in from files should be kept, and should provide those variables with return or yield instead of assigning them to the class
  • setups that create a dask client should be pulled into their own fixtures and provided only to the tests that require them, instead of being lumped into an autouse=True fixture

Additionally, I think we should use more descriptive names for these fixtures where possible.
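
As a rough illustration of these three points (all names and the data file below are placeholders, not code from this PR):

```python
import pytest
import xarray as xr


class TestExample:
    # 1. a value that is cheap to create can just be a class variable
    decimal = 7

    # 2. data read from files stays in a fixture, provided via return/yield
    @pytest.fixture(scope="class")
    def input_data(self):
        return xr.open_dataset("some_test_input.nc")  # placeholder path

    # 3. the dask client gets its own opt-in fixture (no autouse=True),
    #    so only the tests that request it pay the startup cost
    @pytest.fixture(scope="class")
    def dask_client(self):
        from dask.distributed import Client
        client = Client(processes=False)
        yield client
        client.close()

    def test_plain(self, input_data):
        # uses only the file data, never starts a client
        assert isinstance(input_data, xr.Dataset)

    def test_distributed(self, input_data, dask_client):
        # explicitly requests the client fixture
        assert dask_client.status == "running"
```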

@cyschneck
Contributor Author

cyschneck commented Oct 3, 2023

Improved runtime from 240s -> 140s with the new fixtures:

========================= 251 passed, 41 warnings in 140.58s (0:02:20) ==========================

@cyschneck cyschneck requested a review from anissa111 October 3, 2023 21:03
@cyschneck
Contributor Author

Further fixtures for file I/O continue to improve runtime (from 140s to 135s, down from the original 240s):

========================= 251 passed, 41 warnings in 135.87s (0:02:15) ==========================

@anissa111
Member

I've made a branch off of this one that shows the type of changes I'm looking for in test/test_gradient.py and test/test_meteorology.py.

My formatter took over and made some additional changes, but I hope the substantive changes are still visible through the formatting differences.

Member

@anissa111 anissa111 left a comment


Thanks for the changes! I've included one last request here and then I think we'll be good to go.

Comment on lines 373 to 421 (outdated diff: - lines are the removed unittest-style setup, + lines are the new pytest fixtures)

+    @pytest.fixture(scope="class")
+    def test_input(self):
+        return xr.load_dataset(
+            gdf.get("netcdf_files/interpolation_test_input_data.nc"))
+
-        cls.test_output = xr.load_dataset(
+    @pytest.fixture(scope="class")
+    def test_output(self):
+        return xr.load_dataset(
+            gdf.get("netcdf_files/interpolation_test_output_data.nc"))
-
-        cls.data_in = cls.test_input['normal']
-        cls.data_out = cls.test_output['normal']
-
-        cls.lat_in = cls.data_in['lat'].values
-        cls.lat_out = cls.data_out['lat'].values
-        cls.lon_in = cls.data_in['lon'].values
-        cls.lon_out = cls.data_out['lon'].values
-
-        cls.data_in_nan = cls.test_input['nan']
-        cls.data_out_nan = cls.test_output['nan']
-
-        cls.data_in_nan_2 = cls.test_input['nan_2']
-        cls.data_out_nan_2 = cls.test_output['nan_2']
-
-        cls.data_in_missing = cls.test_input['missing']
-        cls.data_out_missing = cls.test_output['missing']
-
-        cls.data_in_mask = cls.test_input['mask']
-        cls.data_out_mask = cls.test_output['mask']

     def test_float32(self):
         np.testing.assert_almost_equal(
             self.data_out.values.astype(np.float32),
             interp_multidim(xr.DataArray(self.data_in.values.astype(np.float32),
                                          dims=['lat', 'lon'],
                                          coords={
                                              'lat': self.lat_in,
                                              'lon': self.lon_in,
                                          }),
                             self.lat_out,
                             self.lon_out,
                             cyclic=True).values,
             decimal=7)

     def test_float64(self):
+    @pytest.fixture(scope="class")
+    def data_in(self, test_input):
+        return test_input['normal']
+
+    @pytest.fixture(scope="class")
+    def data_out(self, test_output):
+        return test_output['normal']
+
+    @pytest.fixture(scope="class")
+    def data_in_nan(self, test_input):
+        return test_input['nan']
+
+    @pytest.fixture(scope="class")
+    def data_out_nan(self, test_output):
+        return test_output['nan']
+
+    @pytest.fixture(scope="class")
+    def data_in_nan_2(self, test_input):
+        return test_input['nan_2']
+
+    @pytest.fixture(scope="class")
+    def data_out_nan_2(self, test_output):
+        return test_output['nan_2']
+
+    @pytest.fixture(scope="class")
+    def data_in_missing(self, test_input):
+        return test_input['missing']
+
+    @pytest.fixture(scope="class")
+    def data_out_missing(self, test_output):
+        return test_output['missing']
+
+    @pytest.fixture(scope="class")
+    def data_in_mask(self, test_input):
+        return test_input['mask']
+
+    @pytest.fixture(scope="class")
+    def data_out_mask(self, test_output):
+        return test_output['mask']
Member

I think we can reduce the number of fixtures here. Unlike other places where the slicing is non-obvious and creating named fixtures improved readability, here the fixtures are largely just named after the index and variable, which we could access directly from the tests. Since these fixtures all request the original test_input and test_output, separating them shouldn't give us any significant performance improvement.
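
For illustration, a sketch of what that reduction might look like, keeping only the two dataset fixtures and indexing the variables inside the test. It assumes the gdf alias for geocat-datafiles and the interp_multidim import already used in this module; the class name and test body are only examples:

```python
import numpy as np
import pytest
import xarray as xr
import geocat.datafiles as gdf
from geocat.comp import interp_multidim


class TestInterpMultidim:

    @pytest.fixture(scope="class")
    def test_input(self):
        return xr.load_dataset(
            gdf.get("netcdf_files/interpolation_test_input_data.nc"))

    @pytest.fixture(scope="class")
    def test_output(self):
        return xr.load_dataset(
            gdf.get("netcdf_files/interpolation_test_output_data.nc"))

    def test_nan(self, test_input, test_output):
        # index the variables directly instead of wrapping each in a fixture
        data_in_nan = test_input['nan']
        data_out_nan = test_output['nan']
        np.testing.assert_almost_equal(
            data_out_nan.values,
            interp_multidim(data_in_nan,
                            data_out_nan['lat'].values,
                            data_out_nan['lon'].values,
                            cyclic=True).values,
            decimal=7)
```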

@anissa111 anissa111 self-requested a review October 25, 2023 21:53
Member

@anissa111 anissa111 left a comment


Looks good, thanks for that last change! This should be good to merge after #495 to clear the link check failure!

@anissa111 anissa111 added the blocked Work got blocked waiting the output of some other source/work label Oct 25, 2023
@anissa111
Member

Oh! Also just noting that this should be moved to the next release section of docs/release-notes.rst, but that can be handled along with the merge conflicts if you'd prefer.

@anissa111 anissa111 merged commit 57ca26a into NCAR:main Oct 26, 2023
8 checks passed
@cyschneck cyschneck deleted the convert_unittest branch October 26, 2023 16:51