[MNT] Simplify tests #62

Merged
8 changes: 4 additions & 4 deletions .github/workflows/ci.yaml
@@ -10,7 +10,7 @@ on:
- develop
- release/*
workflow_dispatch:


jobs:
build:
@@ -34,13 +34,13 @@ jobs:

- name: Install dependencies
run: poetry install --no-interaction --no-root --all-extras

- name: Set PYTHONPATH
run: echo "PYTHONPATH=$GITHUB_WORKSPACE/src" >> $GITHUB_ENV

- name: Test with pytest
run: poetry run pytest --cov=prophetverse --cov-report=xml
run: poetry run pytest --cov=prophetverse --cov-report=xml -m "not smoke"

- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v4.0.1
with:
10 changes: 10 additions & 0 deletions docs/development/development-guide.md
@@ -44,6 +44,16 @@ Community slack: None yet.
### Code standards
Writing good code is not just about what you write. It is also about how you write it. During Continuous Integration testing, several tools will be run to check your code for stylistic errors. Generating any warnings will cause the test to fail. Thus, good style is a requirement for submitting code to Prophetverse. There are tools in Prophetverse to help contributors verify their changes before contributing to the project.

#### [Pytest](https://docs.pytest.org/en/7.1.x/contents.html)
You can run the test suite with pytest through the Poetry command
<br> ```poetry run pytest```

The CI tests are computationally intensive, so if you want a faster run you can execute a [smoke test](https://en.wikipedia.org/wiki/Smoke_testing_(software)) with the command
<br> ```poetry run pytest -m "not ci"```

If you want the tests to run even faster, you can parallelize them with [pytest-xdist](https://pytest-xdist.readthedocs.io/en/latest/how-to.html#making-session-scoped-fixtures-execute-only-once).


#### [Pre-commit](https://pre-commit.com/)

Additionally, Continuous Integration will run code formatting checks such as black, isort, and mypy using pre-commit hooks. Any warnings from these checks will cause Continuous Integration to fail; therefore, it is helpful to run the checks yourself before submitting code. This can be done by installing pre-commit (which should already have happened if you followed the instructions in Setting up your development environment) and then running:
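As a sketch of how an individual test opts into a marker (the test name and body below are hypothetical, but the `smoke` marker matches the one this PR registers in `pyproject.toml`):

```python
import pytest


# Hypothetical example test: the @pytest.mark.smoke decorator attaches
# the "smoke" marker, so `pytest -m smoke` selects it and
# `pytest -m "not smoke"` (as used in the CI workflow) skips it.
@pytest.mark.smoke
def test_quick_sanity():
    assert 1 + 1 == 2
```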
5 changes: 5 additions & 0 deletions pyproject.toml
@@ -33,6 +33,11 @@ pydocstyle = "^6.3.0"
mypy = "^1.10.0"
pylint = "^3.2.2"

[tool.pytest.ini_options]
markers = [
"ci: marks tests for Continuous Integration",
"smoke: marks tests for smoke testing",
]

[build-system]
requires = ["poetry-core"]
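The two markers registered above partition the suite between fast smoke tests and heavy CI tests. As an illustrative sketch of what a `-m "not <marker>"` expression selects — a simplification of pytest's behavior, with hypothetical test names and marker sets, not pytest's actual implementation:

```python
# Hypothetical mapping of test names to their pytest marker sets.
TESTS = {
    "test_hierarchy_levels": {"smoke"},
    "test_extra_predict_methods": set(),
    "test_prophet2_fit_with_different_nlevels": {"ci"},
}


def select_not(tests, marker):
    """Mimic `pytest -m "not <marker>"`: keep tests lacking the marker."""
    return sorted(name for name, marks in tests.items() if marker not in marks)


# CI runs `-m "not smoke"`: heavy parametrized tests plus unmarked ones.
print(select_not(TESTS, "smoke"))
# -> ['test_extra_predict_methods', 'test_prophet2_fit_with_different_nlevels']
```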
11 changes: 7 additions & 4 deletions tests/conftest.py
@@ -1,10 +1,13 @@
"""Configure tests and declare global fixtures."""

import warnings

import numpyro

warnings.filterwarnings("ignore")


def pytest_sessionstart(session):
    """Avoid NaNs in tests."""
    print("Enabling x64")
    numpyro.enable_x64()

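The `enable_x64` call matters because 32-bit floats overflow easily in exponential computations. A minimal NumPy illustration of the failure mode it guards against — not the actual test code, just the arithmetic in plain NumPy:

```python
import numpy as np

# float32 overflows around exp(88.7) (float32 max is about 3.4e38),
# producing inf; once an inf appears, arithmetic like inf - inf yields
# nan. Computing in 64-bit floats, as numpyro.enable_x64() arranges,
# avoids this class of failure.
print(np.exp(np.float32(89.0)))  # inf in float32
print(np.exp(np.float64(89.0)))  # finite in float64
print(np.exp(np.float32(89.0)) - np.exp(np.float32(89.0)))  # nan
```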
71 changes: 38 additions & 33 deletions tests/sktime/test_multivariate.py
@@ -5,38 +5,32 @@
from sktime.forecasting.base import ForecastingHorizon
from sktime.split import temporal_train_test_split
from sktime.transformations.hierarchical.aggregate import Aggregator
from sktime.utils._testing.hierarchical import (_bottom_hier_datagen,
_make_hierarchical)
from sktime.utils._testing.hierarchical import _bottom_hier_datagen, _make_hierarchical

from prophetverse.effects import LinearEffect
from prophetverse.sktime.multivariate import HierarchicalProphet
from prophetverse.sktime.seasonality import seasonal_transformer

from ._utils import (execute_extra_predict_methods_tests,
execute_fit_predict_test, make_empty_X, make_None_X,
make_random_X, make_y)
from ._utils import (
execute_extra_predict_methods_tests,
execute_fit_predict_test,
make_empty_X,
make_None_X,
make_random_X,
make_y,
)

HYPERPARAMS = [
dict(feature_transformer=seasonal_transformer(yearly_seasonality=True, weekly_seasonality=True)),
dict(
feature_transformer=seasonal_transformer(
yearly_seasonality=True, weekly_seasonality=True
)
),
dict(
feature_transformer=seasonal_transformer(
yearly_seasonality=True, weekly_seasonality=True
),
feature_transformer=seasonal_transformer(yearly_seasonality=True, weekly_seasonality=True),
default_effect=LinearEffect(effect_mode="multiplicative"),
),
dict(
feature_transformer=seasonal_transformer(
yearly_seasonality=True, weekly_seasonality=True
),
feature_transformer=seasonal_transformer(yearly_seasonality=True, weekly_seasonality=True),
exogenous_effects=[
LinearEffect(id="lineareffect1", regex=r"(x1).*"),
LinearEffect(
id="lineareffect2", regex=r"(x2).*", prior=dist.Laplace(0, 1)
),
LinearEffect(id="lineareffect2", regex=r"(x2).*", prior=dist.Laplace(0, 1)),
],
),
dict(
@@ -45,14 +39,34 @@
dict(trend="logistic"),
dict(inference_method="mcmc"),
dict(
feature_transformer=seasonal_transformer(
yearly_seasonality=True, weekly_seasonality=True
),
feature_transformer=seasonal_transformer(yearly_seasonality=True, weekly_seasonality=True),
shared_features=["x1"],
),
]


@pytest.mark.smoke
@pytest.mark.parametrize("hierarchy_levels", [0, (1,), (2, 1), (1, 2), (3, 2, 2)])
def test_hierarchy_levels(hierarchy_levels):
y = make_y(hierarchy_levels)
X = make_random_X(y)
forecaster = HierarchicalProphet(optimizer_steps=20, changepoint_interval=2, mcmc_samples=2, mcmc_warmup=2)
execute_fit_predict_test(forecaster, y, X)


@pytest.mark.smoke
@pytest.mark.parametrize("hyperparams", HYPERPARAMS)
def test_hyperparams(hyperparams):
hierarchy_levels = (2, 1)
y = make_y(hierarchy_levels)
X = make_random_X(y)
forecaster = HierarchicalProphet(
**hyperparams, optimizer_steps=20, changepoint_interval=2, mcmc_samples=2, mcmc_warmup=2
)
execute_fit_predict_test(forecaster, y, X)


@pytest.mark.ci
@pytest.mark.parametrize("hierarchy_levels", [0, (1,), (2, 1), (1, 2), (3, 2, 2)])
@pytest.mark.parametrize("make_X", [make_random_X, make_None_X, make_empty_X])
@pytest.mark.parametrize("hyperparams", HYPERPARAMS)
@@ -62,11 +76,7 @@ def test_prophet2_fit_with_different_nlevels(hierarchy_levels, make_X, hyperpara
X = make_X(y)

forecaster = HierarchicalProphet(
**hyperparams,
optimizer_steps=20,
changepoint_interval=2,
mcmc_samples=2,
mcmc_warmup=2
**hyperparams, optimizer_steps=20, changepoint_interval=2, mcmc_samples=2, mcmc_warmup=2
)

execute_fit_predict_test(forecaster, y, X)
@@ -76,11 +86,6 @@ def test_prophet2_fit_with_different_nlevels(hierarchy_levels, make_X, hyperpara
def test_extra_predict_methods(make_X):
y = make_y((2, 1))
X = make_X(y)
forecaster = HierarchicalProphet(
optimizer_steps=20,
changepoint_interval=2,
mcmc_samples=2,
mcmc_warmup=2
)
forecaster = HierarchicalProphet(optimizer_steps=20, changepoint_interval=2, mcmc_samples=2, mcmc_warmup=2)

execute_extra_predict_methods_tests(forecaster=forecaster, X=X, y=y)
111 changes: 69 additions & 42 deletions tests/sktime/test_univariate.py
@@ -4,18 +4,21 @@
from numpyro import distributions as dist
from sktime.forecasting.base import ForecastingHorizon
from sktime.transformations.hierarchical.aggregate import Aggregator
from sktime.utils._testing.hierarchical import (_bottom_hier_datagen,
_make_hierarchical)
from sktime.utils._testing.hierarchical import _bottom_hier_datagen, _make_hierarchical

from prophetverse.effects import LinearEffect
from prophetverse.sktime.seasonality import seasonal_transformer
from prophetverse.sktime.univariate import (Prophet, ProphetGamma,
ProphetNegBinomial)
from prophetverse.sktime.univariate import Prophet, ProphetGamma, ProphetNegBinomial
from prophetverse.trend.flat import FlatTrend

from ._utils import (execute_extra_predict_methods_tests,
execute_fit_predict_test, make_empty_X, make_None_X,
make_random_X, make_y)
from ._utils import (
execute_extra_predict_methods_tests,
execute_fit_predict_test,
make_empty_X,
make_None_X,
make_random_X,
make_y,
)

MODELS = [
Prophet,
@@ -24,27 +27,16 @@
]

HYPERPARAMS = [
dict(trend=FlatTrend(), feature_transformer=seasonal_transformer(yearly_seasonality=True, weekly_seasonality=True)),
dict(
trend=FlatTrend(),
feature_transformer=seasonal_transformer(
yearly_seasonality=True, weekly_seasonality=True
)
),
dict(
feature_transformer=seasonal_transformer(
yearly_seasonality=True, weekly_seasonality=True
),
feature_transformer=seasonal_transformer(yearly_seasonality=True, weekly_seasonality=True),
default_effect=LinearEffect(effect_mode="multiplicative"),
),
dict(
feature_transformer=seasonal_transformer(
yearly_seasonality=True, weekly_seasonality=True
),
feature_transformer=seasonal_transformer(yearly_seasonality=True, weekly_seasonality=True),
exogenous_effects=[
LinearEffect(id="lineareffect1", regex=r"(x1).*"),
LinearEffect(
id="lineareffect2", regex=r"(x2).*", prior=dist.Laplace(0, 1)
),
LinearEffect(id="lineareffect2", regex=r"(x2).*", prior=dist.Laplace(0, 1)),
],
),
dict(
@@ -56,35 +48,70 @@
]


@pytest.mark.smoke
@pytest.mark.parametrize("model_class", MODELS)
def test_model_class_fit(model_class):
hierarchy_levels = (1,)
make_X = make_random_X
hyperparams = HYPERPARAMS[0]

y = make_y(hierarchy_levels)
X = make_X(y)
forecaster = model_class(**hyperparams, optimizer_steps=10, mcmc_samples=2, mcmc_warmup=2, mcmc_chains=1)

execute_fit_predict_test(forecaster, y, X, test_size=4)


@pytest.mark.smoke
@pytest.mark.parametrize("hierarchy_levels", [(1,), (2,), (2, 1)])
@pytest.mark.parametrize("make_X", [make_random_X, make_None_X, make_empty_X])
def test_hierarchy_levels_fit(hierarchy_levels):
model_class = MODELS[0]
make_X = make_random_X
hyperparams = HYPERPARAMS[0]

y = make_y(hierarchy_levels)
X = make_X(y)
forecaster = model_class(**hyperparams, optimizer_steps=10, mcmc_samples=2, mcmc_warmup=2, mcmc_chains=1)

execute_fit_predict_test(forecaster, y, X, test_size=4)


@pytest.mark.smoke
@pytest.mark.parametrize("hyperparams", HYPERPARAMS)
def test_prophet2_fit_with_different_nlevels(model_class, hierarchy_levels, make_X, hyperparams):

def test_hyperparams_fit(hyperparams):
model_class = MODELS[1]
hierarchy_levels = (1,)
make_X = make_random_X

y = make_y(hierarchy_levels)
X = make_X(
y
)
forecaster = model_class(
**hyperparams, optimizer_steps=100, mcmc_samples=2, mcmc_warmup=2, mcmc_chains=1
)

X = make_X(y)
forecaster = model_class(**hyperparams, optimizer_steps=10, mcmc_samples=2, mcmc_warmup=2, mcmc_chains=1)

execute_fit_predict_test(forecaster, y, X, test_size=4)


@pytest.mark.parametrize("make_X", [make_random_X, make_None_X, make_empty_X])
def test_extra_predict_methods(make_X):
y = make_y((2,1))
X = make_X(
y
)
forecaster = Prophet(
optimizer_steps=100, mcmc_samples=2, mcmc_warmup=2, mcmc_chains=1
)
y = make_y((2, 1))
X = make_X(y)
forecaster = Prophet(optimizer_steps=10, mcmc_samples=2, mcmc_warmup=2, mcmc_chains=1)
execute_extra_predict_methods_tests(forecaster=forecaster, X=X, y=y)


@pytest.mark.ci
@pytest.mark.parametrize("model_class", MODELS)
@pytest.mark.parametrize("hierarchy_levels", [(1,), (2,), (2, 1)])
@pytest.mark.parametrize("make_X", [make_random_X, make_None_X, make_empty_X])
@pytest.mark.parametrize("hyperparams", HYPERPARAMS)
def test_prophet2_fit_with_different_nlevels(model_class, hierarchy_levels, make_X, hyperparams):

y = make_y(hierarchy_levels)
X = make_X(y)
forecaster = model_class(**hyperparams, optimizer_steps=100, mcmc_samples=2, mcmc_warmup=2, mcmc_chains=1)

execute_fit_predict_test(forecaster, y, X, test_size=4)


def test_raise_error_when_passing_bad_trend():
with pytest.raises(ValueError):
Prophet(trend="bad_trend")
