Answer test refactor #245

Open · wants to merge 7 commits into base: main
83 changes: 57 additions & 26 deletions doc/source/Testing.rst
@@ -37,36 +37,62 @@ include:
- all grackle 'calculate' functions return correct results for sets
of random field values

.. _test_without_answer_verification:

Tests Without Answer Verification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you only want to quickly verify that everything runs, you can skip
generating the answer test results using the latest gold standard. To
run this way, first set the ``GENERATE_PYGRACKLE_TEST_RESULTS``
environment variable to 1.
generating the answer test results using the latest gold standard.

.. code-block:: shell-session
Once you have installed :ref:`pygrackle and the development dependencies <pygrackle-dev>`,
there are currently 2 ways to do this:

~ $ export GENERATE_PYGRACKLE_TEST_RESULTS=1
.. tabs::

This will tell the test suite to simply generate new answers and not
compare with previously existing ones.
.. group-tab:: Command Line Flag

Once you have installed :ref:`pygrackle and the development
dependencies <pygrackle-dev>`, the tests can be run from the **src**
directory by typing ``make test``:
Simply invoke ``py.test`` from the **src/python** directory with the ``--answer-store`` flag:

.. code-block:: shell-session
.. code-block:: shell-session

~ $ cd grackle/src
~/grackle/src $ make test
~ $ cd grackle/src/python
~/grackle/src/python $ py.test --answer-store

or from the **src/python** directory by typing ``py.test``:
This will both invoke the test suite and tell it to simply generate new answers and not compare with previously existing ones.

.. code-block:: shell-session
.. group-tab:: Environment Variable

First set the ``GENERATE_PYGRACKLE_TEST_RESULTS`` environment variable to 1.

.. code-block:: shell-session

~ $ export GENERATE_PYGRACKLE_TEST_RESULTS=1

This will tell the test suite to simply generate new answers and not
compare with previously existing ones.

The tests can then be run from the **src** directory by typing ``make test``:

.. code-block:: shell-session

~ $ cd grackle/src
~/grackle/src $ make test

or from the **src/python** directory by typing ``py.test``:

~ $ cd grackle/src/python
~/grackle/src $ py.test
.. code-block:: shell-session

~ $ cd grackle/src/python
~/grackle/src/python $ py.test

.. warning::

The ability to use an environment variable may be removed.

Once you launch the test, the output will look like the following:

.. code-block:: shell-session

==================================== test session starts ====================================
platform darwin -- Python 3.11.9, pytest-8.2.1, pluggy-1.5.0
@@ -112,11 +138,9 @@ steps:
#. Re-compile the Grackle library and :ref:`re-install pygrackle
<install-pygrackle>`.

#. Set the ``GENERATE_PYGRACKLE_TEST_RESULTS`` environment variable to
1.

#. Run the test suite as described above. This will create test result
files in the directory **src/python/tests/test_answers**.
#. Execute the test suite with instructions to generate test results.
The 2 ways to do this are described :ref:`above <test_without_answer_verification>`: (i) execute ``py.test`` with the ``--answer-store`` flag or (ii) set the ``GENERATE_PYGRACKLE_TEST_RESULTS`` environment variable to 1 before executing the test suite.
By default, this will create test result files in the directory **src/python/tests/test_answers**.

#. Return to the branch of the repository you started with. If you just
cloned the main repository, this will be called 'main', in which
@@ -125,8 +149,15 @@
#. Re-compile the Grackle library and :ref:`re-install pygrackle
<install-pygrackle>`.

#. Set the ``GENERATE_PYGRACKLE_TEST_RESULTS`` environment variable to
0.
#. If you previously assigned a value to the ``GENERATE_PYGRACKLE_TEST_RESULTS`` variable, you must now unset the variable **OR** assign it a value of 0.

#. Run the test suite again (do **NOT** pass the ``--answer-store`` flag).
This time, the answer tests will be compared with the previously generated results.

Other Test Configuration
^^^^^^^^^^^^^^^^^^^^^^^^
To run the test suite without any answer tests, invoke ``py.test`` with the ``--answer-skip`` flag.

To control the location of the directory where the test answers are saved, you can invoke ``py.test`` with the ``--local-dir`` flag.

#. Run the test suite again. This time, the answer tests will be
compared with the previously generated results.
14 changes: 0 additions & 14 deletions src/python/pygrackle/utilities/testing.py
@@ -85,17 +85,3 @@ def temporary_directory():
finally:
os.chdir(curdir)
shutil.rmtree(tmpdir)

def ensure_dir(path):
r"""Parallel safe directory maker."""
if os.path.exists(path):
return path

try:
os.makedirs(path)
except OSError as e:
if e.errno == errno.EEXIST:
pass
else:
raise
return path
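
The removed helper is redundant with the standard library: os.makedirs(path, exist_ok=True) already gives the same "create it unless it exists" behaviour, which is what the new conftest.py below relies on. A minimal illustrative sketch (the path is just an example):

import os

# Idempotent directory creation: repeated or concurrent calls do not raise
# when the directory already exists, matching what ensure_dir() guaranteed
# by swallowing errno.EEXIST.
os.makedirs("src/python/tests/test_answers", exist_ok=True)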
68 changes: 68 additions & 0 deletions src/python/tests/conftest.py
@@ -0,0 +1,68 @@
# this defines some basic utilities shared among all of the tests
#
# DO NOT import pygrackle into global scope in this file (or in any file
# imported by this file). Some tests need to be runnable without installing
# pygrackle

import os
from typing import NamedTuple

import pytest


# this hook is used to add more command line flags to the pytest launcher
def pytest_addoption(parser):
parser.addoption(
"--answer-skip",
Contributor

I would prefer to only have --answer-store since they do the same thing. Both variables are mentioned above for running tests without answer testing. I think it just adds confusion.

Collaborator Author

Sure I can do that!

I'm just spitballing here, but I think it could still be nice (but definitely not essential) to support disabling all of the answer-tests. If we aren't worried about backwards compatibility, what if we made --local-dir a required argument to run the answer tests?¹ In that picture:

  • pytest grackle/src/python wouldn't run any answer tests (but would run all non-answer tests)
  • pytest grackle/src/python --answer-store --local-dir=path/to/answers would store answers (and run all non-answer tests)
  • pytest grackle/src/python --answer-store would abort with an error.
  • pytest grackle/src/python --local-dir=path/to/answers would check answer-test answers (and run all non-answer tests)

To be clear, I'm definitely not wedded to this idea. I'm happy to move forward and just simply remove --answer-skip. In fact, I'll just plan to go ahead and simply remove --answer-skip without any sort of replacement unless I hear that you like this alternative idea.

Footnotes

  1. If we went that route, maybe we should rename --local-dir to --answer-dir

Contributor

Yeah, I like this. I also really like changing --local-dir to --answer-dir. "local-dir" can be interpreted in a variety of different ways, but to me "answer-dir" can only mean "the answers are/go here." If we do this, the meaning of "will not run answer tests" needs clarifying to either mean 1) the tests won't run at all or 2) the code to generate the answers will be run, but no comparison will happen.

action="store_true",
help="Indicates that we should skip all answer tests.",
)
parser.addoption(
"--answer-store",
action="store_true",
help="Indicates that we should generate test results.",
)

# in the future, I think we should revisit the default behavior when this
# is omitted. Instead of providing a default location, I think we should
# just skip all answer tests (and maybe raise an error if the
# --answer-store flag is provided without this flag)
parser.addoption(
"--local-dir",
action="store",
default=os.path.join(
os.path.dirname(os.path.abspath(__file__)), "test_answers"
),
help="Path to directory where answers are/will be stored.",
)


class AnswerTestSpec(NamedTuple):
generate_answers: bool
answer_dir: str


@pytest.fixture(scope="session")
def answertestspec(request):
"""
Return an object specifying all user-specified directives regarding
answer tests (whether to run them and where to store/find results)
"""
if request.config.getoption("--answer-skip"):
if request.config.getoption("--answer-store"):
raise RuntimeError(
"It is an error to specify both --answer-skip and --answer-store"
)
pytest.skip("--answer-skip was specified")

generate_answers = (
request.config.getoption("--answer-store")
or int(os.environ.get("GENERATE_PYGRACKLE_TEST_RESULTS", 0)) == 1
)
answer_dir = request.config.getoption("--local-dir")

if (not os.path.isdir(answer_dir)) and (not generate_answers):
pytest.skip(f"the directory of test answers can't be found, {answer_dir}")
os.makedirs(answer_dir, exist_ok=True)

return AnswerTestSpec(generate_answers=generate_answers, answer_dir=answer_dir)
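
As a concrete reference for the review thread above, here is a rough sketch of what the proposed alternative could look like. The --answer-dir flag name, the error wording, and the overall structure are hypothetical and are not part of this PR:

# Hypothetical conftest.py variant sketching the reviewer's proposal:
# answer tests only run when --answer-dir is supplied, and --answer-store
# without --answer-dir is treated as an error.
import os
from typing import NamedTuple

import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--answer-store",
        action="store_true",
        help="Indicates that we should generate test results.",
    )
    parser.addoption(
        "--answer-dir",
        action="store",
        default=None,
        help="Path to directory where answers are/will be stored.",
    )


class AnswerTestSpec(NamedTuple):
    generate_answers: bool
    answer_dir: str


@pytest.fixture(scope="session")
def answertestspec(request):
    answer_dir = request.config.getoption("--answer-dir")
    generate_answers = request.config.getoption("--answer-store")
    if answer_dir is None:
        if generate_answers:
            raise RuntimeError(
                "--answer-store requires --answer-dir to be specified"
            )
        # no answer directory given: every answer test gets skipped
        pytest.skip("--answer-dir was not specified")
    if generate_answers:
        os.makedirs(answer_dir, exist_ok=True)
    elif not os.path.isdir(answer_dir):
        pytest.skip(f"the directory of test answers can't be found, {answer_dir}")
    return AnswerTestSpec(generate_answers=generate_answers, answer_dir=answer_dir)

Under this scheme, plain pytest runs only the non-answer tests, while adding --answer-dir=path/to/answers checks against stored answers, matching the behaviour sketched in the bullets above.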
25 changes: 13 additions & 12 deletions src/python/tests/test_code_examples.py
@@ -18,10 +18,7 @@
import re
import subprocess

from testing_common import \
generate_test_results, \
grackle_install_dir, \
test_answers_dir
from testing_common import grackle_install_dir

try:
from pygrackle.grackle_wrapper import uses_in_source_build
@@ -56,12 +53,16 @@
"temperature"
)

test_file = os.path.join(test_answers_dir, "code_examples.json")
if generate_test_results and os.path.exists(test_file):
os.remove(test_file)
if not generate_test_results and not os.path.exists(test_file):
raise RuntimeError(
f"Code example results file not found: {test_file}")
@pytest.fixture(scope="module")
def test_file(answertestspec):
# this fixture has module scope so that any pre-existing test file is only
# deleted once (rather than before every parametrized example)
test_file = os.path.join(answertestspec.answer_dir, "code_examples.json")
if answertestspec.generate_answers and os.path.exists(test_file):
os.remove(test_file)
if not answertestspec.generate_answers and not os.path.exists(test_file):
raise RuntimeError(
f"Code example results file not found: {test_file}")
return test_file

def run_command(command, cwd, env, timeout=None):
proc = subprocess.run(
@@ -104,7 +105,7 @@ def parse_output(ostr):
return results

@pytest.mark.parametrize("example", code_examples)
def test_code_examples(example):
def test_code_examples(answertestspec, test_file, example):
# under the classic build system, we could just execute `make` in the examples
# directory and there is a good chance that it would work out...
# -> now we require the PYTEST_CODE_LINK_CHOICE environment variable to be
@@ -149,7 +150,7 @@ def test_code_examples(example):
if example not in compare_exclude:
results = parse_output(proc.stdout)

if generate_test_results:
if answertestspec.generate_answers:
if os.path.exists(test_file):
with open(test_file, mode="r") as f:
all_results = json.load(f)
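
The hunk above is truncated before the comparison branch. Purely as an illustration of the stored-JSON pattern (this is not the PR's actual comparison code, and the helper name and tolerance are made up), a check against a previously stored answer file could look like:

# Illustrative only -- not taken from this PR. Compares freshly parsed
# example output against answers previously stored in the JSON answer file.
import json

from pygrackle.utilities.testing import assert_allclose


def compare_with_stored(test_file, example, results, rtol=1e-8):
    with open(test_file, mode="r") as f:
        all_results = json.load(f)
    stored = all_results[example]
    for field, values in results.items():
        # each field maps to the list of values parsed from the example output
        assert_allclose(values, stored[field], rtol=rtol)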
26 changes: 11 additions & 15 deletions src/python/tests/test_initialisation.py
@@ -14,15 +14,7 @@
from pygrackle import chemistry_data
#Necessary constants from grackle
from pygrackle.utilities.physical_constants import mass_hydrogen_cgs
from pygrackle.utilities.testing import \
assert_allclose, \
ensure_dir

from testing_common import \
generate_test_results, \
test_answers_dir

ensure_dir(test_answers_dir)
from pygrackle.utilities.testing import assert_allclose

#* Function which returns chemistry_data instance with default initialisation settings.
def get_defChem():
@@ -108,8 +100,12 @@ def set_parameters(parSet, my_chemistry):


#* Function which tests that the rates have been initialised correctly for each parameter set.
def test_rate_initialisation(printParameters=False, printOOMdiscrepanices=False, testCustomFile=False, parSets=[1,2,3,4,5,6,7],
fileName="rate_coefficients.h5"):
def test_rate_initialisation(answertestspec,
printParameters=False,
printOOMdiscrepanices=False,
testCustomFile=False,
parSets=[1,2,3,4,5,6,7],
fileName="rate_coefficients.h5"):
"""
Test that the rate tables are initialized correctly.

@@ -134,8 +130,8 @@ def test_rate_initialisation(printParameters=False, printOOMdiscrepanices=False,

#* Calculate rates for each parameter set and write to hdf5 file
#Create and open file. If the file already exists this will overwrite it.
if generate_test_results:
fileName = os.path.join(test_answers_dir, fileName)
if answertestspec.generate_answers:
fileName = os.path.join(answertestspec.answer_dir, fileName)
f = h5py.File(fileName, "w")

#Iterate over parameter sets.
@@ -163,11 +159,11 @@ def test_rate_initialisation(printParameters=False, printOOMdiscrepanices=False,
f.close()

# Just generate results and leave.
if generate_test_results:
if answertestspec.generate_answers:
return

#* Compare rates with the expected (correct) ones which are stored and check they are in agreement
expectedRates = h5py.File(os.path.join(test_answers_dir, fileName), "r")
expectedRates = h5py.File(os.path.join(answertestspec.answer_dir, fileName), "r")
initialisedRates = h5py.File(fileName, "r")

