Feature 685 group pytests (#1692)
* per #685, added custom pytest markers for many tests so we can run groups of tests in automation to speed things up; removed unused imports from tests and removed parentheses from assert statements where they are not needed

* per #685, changed the logic from checking that the test category is not equal to 'pytest' to checking that it does not start with 'pytest', to allow groups of pytests

* per #685, run pytests with markers to subset tests into groups

* fixed check if string starts with pytests

* added missing pytest marker name

* added logic to support running all pytests that do not match a given marker with the 'not <marker>' syntax

* change pytest group to wrapper because the test expects another test to have run prior to running

* fix 'not' logic by adding quotation marks around value

* another approach to fixing not functionality for tests

* added util marker to more tests

* fixed typo in not logic

* added util marker to more tests again

* fixed logic to split string

* marked rest of util tests with util marker

* fixed another typo in string splitting logic

* tested change that should properly split strings

* moved wrapper tests into wrapper directory

* changed marker for plotting tests

* added plotting marker

* improved logic for removing underscore after 'not' and around 'or' to specify more complex marker strings

* test running group of 3 markers

* fixed path that broke when test file was moved into a lower directory

* Changed StatAnalysis tests to use the plotting marker because some of the tests involve plotting and the other StatAnalysis tests produce output that is used in the plotting tests

* changed some tests from marker 'wrapper' to 'wrapper_a' to split up some of these tests into separate runs

* test to see if running pytests in a single job, split into groups by marker, will improve the timing enough

* fixed typos in new logic

* removed code that is no longer needed (added a comment to issue #685 in case this logic is desired in the future)

* per #685, divided pytests into smaller groups

* added a test that will fail to test that the entire pytest job will fail as expected

* add error message if any pytests failed to help reviewer search for failed tests

* removed failing test after confirming that entire pytest job properly reports an error if any test fails

* turn on single use case group to make sure logic to build matrix of test jobs to run still works as expected

* turn off use case after confirming tests were created properly

* added documentation to contributor's guide to describe changes to unit test logic

* added note about adding new pytest markers
georgemccabe authored Jul 14, 2022
1 parent f79c809 commit c7e275b
Showing 67 changed files with 1,075 additions and 1,314 deletions.
18 changes: 12 additions & 6 deletions .github/actions/run_tests/entrypoint.sh
@@ -8,6 +8,8 @@ WS_PATH=$RUNNER_WORKSPACE/$REPO_NAME
# set CI jobs directory variable to easily move it
CI_JOBS_DIR=.github/jobs

PYTESTS_GROUPS_FILEPATH=.github/parm/pytest_groups.txt

source ${GITHUB_WORKSPACE}/${CI_JOBS_DIR}/bash_functions.sh

# get branch name for push or pull request events
@@ -30,10 +32,8 @@ if [ $? != 0 ]; then
${GITHUB_WORKSPACE}/${CI_JOBS_DIR}/docker_setup.sh
fi

#
# running unit tests (pytests)
#
if [ "$INPUT_CATEGORIES" == "pytests" ]; then
if [[ "$INPUT_CATEGORIES" == pytests* ]]; then
export METPLUS_ENV_TAG="pytest"
export METPLUS_IMG_TAG=${branch_name}
echo METPLUS_ENV_TAG=${METPLUS_ENV_TAG}
@@ -56,14 +56,20 @@ if [ "$INPUT_CATEGORIES" == "pytests" ]; then
.

echo Running Pytests
command="export METPLUS_PYTEST_HOST=docker; cd internal_tests/pytests; /usr/local/envs/pytest/bin/pytest -vv --cov=../../metplus"
command="export METPLUS_PYTEST_HOST=docker; cd internal_tests/pytests;"
command+="status=0;"
for x in `cat $PYTESTS_GROUPS_FILEPATH`; do
marker="${x//_or_/ or }"
marker="${marker//not_/not }"
command+="/usr/local/envs/pytest/bin/pytest -vv --cov=../../metplus -m \"$marker\""
command+=";if [ \$? != 0 ]; then status=1; fi;"
done
command+="if [ \$status != 0 ]; then echo ERROR: Some pytests failed. Search for FAILED to review; false; fi"
time_command docker run -v $WS_PATH:$GITHUB_WORKSPACE --workdir $GITHUB_WORKSPACE $RUN_TAG bash -c "$command"
exit $?
fi

#
# running use case tests
#

# split apart use case category and subset list from input
CATEGORIES=`echo $INPUT_CATEGORIES | awk -F: '{print $1}'`
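For illustration, a single entry such as plotting_or_long in pytest_groups.txt is expanded by the loop above into roughly the following fragment of the Docker command (whitespace added for readability; the actual string is built on one line):

    /usr/local/envs/pytest/bin/pytest -vv --cov=../../metplus -m "plotting or long"
    if [ $? != 0 ]; then status=1; fi
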
7 changes: 5 additions & 2 deletions .github/jobs/get_use_cases_to_run.sh
@@ -1,6 +1,7 @@
#! /bin/bash

use_case_groups_filepath=.github/parm/use_case_groups.json

# set matrix to string of an empty array in case no use cases will be run
matrix="[]"

@@ -31,12 +32,14 @@ fi
if [ "$run_unit_tests" == "true" ]; then
echo Adding unit tests to list to run

pytests="\"pytests\","

# if matrix is empty, set to an array that only includes pytests
if [ "$matrix" == "[]" ]; then
matrix="[\"pytests\"]"
matrix="[${pytests:0: -1}]"
# otherwise prepend item to list
else
matrix="[\"pytests\", ${matrix:1}"
matrix="[${pytests}${matrix:1}"
fi
fi

6 changes: 6 additions & 0 deletions .github/parm/pytest_groups.txt
@@ -0,0 +1,6 @@
util
wrapper
wrapper_a
wrapper_b
wrapper_c
plotting_or_long
8 changes: 4 additions & 4 deletions .github/workflows/testing.yml
@@ -139,24 +139,24 @@ jobs:
# copy logs with errors to error_logs directory to save as artifact
- name: Save error logs
id: save-errors
if: ${{ always() && steps.run_tests.conclusion == 'failure' && matrix.categories != 'pytests' }}
if: ${{ always() && steps.run_tests.conclusion == 'failure' && !startsWith(matrix.categories,'pytests') }}
run: .github/jobs/save_error_logs.sh

# run difference testing
- name: Run difference tests
id: run-diff
if: ${{ needs.job_control.outputs.run_diff == 'true' && steps.run_tests.conclusion == 'success' && matrix.categories != 'pytests' }}
if: ${{ needs.job_control.outputs.run_diff == 'true' && steps.run_tests.conclusion == 'success' && !startsWith(matrix.categories,'pytests') }}
run: .github/jobs/run_difference_tests.sh ${{ matrix.categories }} ${{ steps.get-artifact-name.outputs.artifact_name }}

# copy output data to save as artifact
- name: Save output data
id: save-output
if: ${{ always() && steps.run_tests.conclusion != 'skipped' && matrix.categories != 'pytests' }}
if: ${{ always() && steps.run_tests.conclusion != 'skipped' && !startsWith(matrix.categories,'pytests') }}
run: .github/jobs/copy_output_to_artifact.sh ${{ steps.get-artifact-name.outputs.artifact_name }}

- name: Upload output data artifact
uses: actions/upload-artifact@v2
if: ${{ always() && steps.run_tests.conclusion != 'skipped' && matrix.categories != 'pytests' }}
if: ${{ always() && steps.run_tests.conclusion != 'skipped' && !startsWith(matrix.categories,'pytests') }}
with:
name: ${{ steps.get-artifact-name.outputs.artifact_name }}
path: artifact/${{ steps.get-artifact-name.outputs.artifact_name }}
32 changes: 32 additions & 0 deletions docs/Contributors_Guide/continuous_integration.rst
@@ -558,6 +558,38 @@ process can be found in the :ref:`use_case_input_data` section of the
Add Use Cases chapter of the Contributor's Guide.


.. _cg-ci-unit-tests:

Unit Tests
----------

Unit tests are run via pytest.
Groups of pytests are run in the 'pytests' job.
The list of groups that will be run in the automated tests is found in
.github/parm/pytest_groups.txt.
See :ref:`cg-unit-tests` for more information on pytest groups.

Items in pytest_groups.txt can include::

* A single group marker name, e.g. wrapper_a
* Multiple group marker names separated by _or_, e.g. plotting_or_long
* A group marker name to exclude, prefixed with not_, e.g. not_wrapper

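For illustration, based on the substitution logic in
.github/actions/run_tests/entrypoint.sh (``_or_`` is replaced with ``or`` and
a leading ``not_`` with ``not``), the entries above map to pytest calls
roughly like this::

    wrapper_a         ->  pytest -m "wrapper_a"
    plotting_or_long  ->  pytest -m "plotting or long"
    not_wrapper       ->  pytest -m "not wrapper"
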
All pytest groups are currently run in a single GitHub Actions job.
This was done because the existing automation logic builds a Docker
environment to run the tests and each testing environment takes a few minutes
to create (future improvements may speed up execution time by running the
pytests directly in the GitHub Actions environment instead of Docker).
Running the pytests in smaller groups serially takes substantially less time
than calling all of the existing pytests in a single call to pytest,
so dividing tests into groups is recommended to improve performance.
Search for the string "deselected in" in the pytests job log to see how long
each group took to run.

Future enhancements could be made to save and parse this information for each
run to output a summary at the end of the log file to more easily see which
groups could be broken up to improve performance.

.. _cg-ci-use-case-tests:

Use Case Tests
48 changes: 44 additions & 4 deletions docs/Contributors_Guide/testing.rst
@@ -4,19 +4,59 @@ Testing
Test scripts are found in the GitHub repository in the internal_tests
directory.

.. _cg-unit-tests:

Unit Tests
----------

Unit tests are run with pytest. They are found in the *pytests* directory.
Each tool has its own subdirectory containing its test files.

**run_pytests.sh** is a bash script that can be run to execute all of the
pytests. A report will be output showing which pytest categories failed.
When running on a new computer, a
**minimum_pytest.<HOST>.sh**
Unit tests can be run by running the 'pytest' command from the
internal_tests/pytests directory of the repository.
The 'pytest' Python package must be available.
A report will be output showing which pytest categories failed.
When running on a new computer, a **minimum_pytest.<HOST>.sh**
file must be created in order to run the tests. This file contains
information about the local environment so that the tests can run.

All unit tests must include one of the custom markers listed in the
internal_tests/pytests/pytest.ini file. Some examples include:

* util
* wrapper_a
* wrapper_b
* wrapper_c
* wrapper
* long
* plotting

To apply a marker to a unit test function, add the following on the line before
the function definition::

@pytest.mark.<MARKER-NAME>

where <MARKER-NAME> is one of the custom marker strings listed in pytest.ini.

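For example, a trivial test marked as part of the ``util`` group might look
like this (the test name and body here are only illustrative)::

    import pytest

    @pytest.mark.util
    def test_example_util_marker():
        # the marker above assigns this test to the 'util' group
        assert True
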
New pytest markers should be added to the pytest.ini file with a brief
description. If they are not added to the markers list, then a warning will
be output when running the tests.

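A minimal sketch of how the markers section of
internal_tests/pytests/pytest.ini might look (the descriptions shown are
illustrative, not the exact text in the repository)::

    [pytest]
    markers =
        util: tests of utility functions
        wrapper: tests of wrapper classes
        plotting: tests that produce plots
        long: tests that take a long time to run
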
There are many unit tests for METplus and false failures can occur if all of
them are run at once.
To run only tests with a given marker, run::

pytest -m <MARKER-NAME>

To run all tests that do not have a given marker, run::

pytest -m "not <MARKER-NAME>"

Multiple marker groups can be run by using the 'or' keyword::

pytest -m "<MARKER-NAME1> or <MARKER-NAME2>"

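For example, to run the plotting tests and the long-running tests in a single
call (this mirrors the plotting_or_long entry in
.github/parm/pytest_groups.txt)::

    pytest -m "plotting or long"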

Use Case Tests
--------------

@@ -1,47 +1,14 @@
#!/usr/bin/env python
#!/usr/bin/env python3

import os
import datetime
import sys
import logging
import pytest
import datetime

import produtil.setup
import os

from metplus.wrappers.make_plots_wrapper import MakePlotsWrapper
from metplus.util import met_util as util

#
# These are tests (not necessarily unit tests) for the
# wrapper to make plots, make_plots_wrapper.py
# NOTE: This test requires pytest, which is NOT part of the standard Python
# library.
# These tests require one configuration file in addition to the three
# required METplus configuration files: test_make_plots.conf. This contains
# the information necessary for running all the tests. Each test can be
# customized to replace various settings if needed.
#

#
# -----------Mandatory-----------
# configuration and fixture to support METplus configuration files beyond
# the metplus_data, metplus_system, and metplus_runtime conf files.
#

METPLUS_BASE = os.getcwd().split('/internal_tests')[0]

# Add a test configuration
def pytest_addoption(parser):
parser.addoption("-c", action="store", help=" -c <test config file>")

# @pytest.fixture
def cmdopt(request):
return request.config.getoption("-c")

#
# ------------Pytest fixtures that can be used for all tests ---------------
#
#@pytest.fixture
def make_plots_wrapper(metplus_config):
"""! Returns a default MakePlotsWrapper with /path/to entries in the
metplus_system.conf and metplus_runtime.conf configuration
@@ -55,35 +22,8 @@ def make_plots_wrapper(metplus_config):
config = metplus_config(extra_configs)
return MakePlotsWrapper(config)

# ------------------TESTS GO BELOW ---------------------------
#

#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# To test numerous files for filesize, use parametrization:
# @pytest.mark.parametrize(
# 'key, value', [
# ('/usr/local/met-6.1/bin/point_stat', 382180),
# ('/usr/local/met-6.1/bin/stat_analysis', 3438944),
# ('/usr/local/met-6.1/bin/pb2nc', 3009056)
#
# ]
# )
# def test_file_sizes(key, value):
# st = stat_analysis_wrapper()
# # Retrieve the value of the class attribute that corresponds
# # to the key in the parametrization
# files_in_dir = []
# for dirpath, dirnames, files in os.walk("/usr/local/met-6.1/bin"):
# for name in files:
# files_in_dir.append(os.path.join(dirpath, name))
# if actual_key in files_in_dir:
# # The actual_key is one of the files of interest we retrieved from
# # the output directory. Verify that it's file size is what we
# # expected.
# assert actual_key == key
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
METPLUS_BASE = os.getcwd().split('/internal_tests')[0]

@pytest.mark.plotting
def test_get_command(metplus_config):
# Independently test that the make_plots python
# command is being put together correctly with
@@ -98,6 +38,8 @@ def test_get_command(metplus_config):
test_command = mp.get_command()
assert(expected_command == test_command)


@pytest.mark.plotting
def test_create_c_dict(metplus_config):
# Independently test that c_dict is being created
# and that the wrapper and config reader