Fix pytest_collection_modifyitems to select benchmark tests only #1874

Merged
merged 2 commits into from
Sep 16, 2021
31 changes: 21 additions & 10 deletions sdk/python/tests/conftest.py
@@ -14,9 +14,11 @@
 import multiprocessing
 from datetime import datetime, timedelta
 from sys import platform
+from typing import List

 import pandas as pd
 import pytest
+from _pytest.nodes import Item

 from tests.data.data_creator import create_dataset
 from tests.integration.feature_repos.repo_configuration import (
@@ -52,18 +54,27 @@ def pytest_addoption(parser):
     )

-def pytest_collection_modifyitems(config, items):
+def pytest_collection_modifyitems(config, items: List[Item]):
     should_run_integration = config.getoption("--integration") is True
     should_run_benchmark = config.getoption("--benchmark") is True
-    skip_integration = pytest.mark.skip(
-        reason="not running tests with external dependencies"
-    )
-    skip_benchmark = pytest.mark.skip(reason="not running benchmarks")
-    for item in items:
-        if "integration" in item.keywords and not should_run_integration:
-            item.add_marker(skip_integration)
-        if "benchmark" in item.keywords and not should_run_benchmark:
-            item.add_marker(skip_benchmark)
+
+    integration_tests = [t for t in items if "integration" in t.keywords]
+    if not should_run_integration:
+        for t in integration_tests:
+            items.remove(t)
+    else:
Collaborator:
Seems like these are exclusive?

If the user enables both should_run_integration and should_run_benchmark, it ends up only running benchmark tests.

I'd expect this to be more like:

if not should_run_integration, remove integration tests
if not should_run_benchmark, remove benchmarks
done
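
For reference, a minimal sketch of the non-exclusive filtering the reviewer describes; this is not the code that was merged, and it only reuses the option names and marks from the diff above:

from typing import List

from _pytest.nodes import Item


def pytest_collection_modifyitems(config, items: List[Item]):
    # Reviewer's suggestion, sketched: apply each flag independently,
    # so --integration and --benchmark never interact with each other.
    if config.getoption("--integration") is not True:
        items[:] = [t for t in items if "integration" not in t.keywords]
    if config.getoption("--benchmark") is not True:
        items[:] = [t for t in items if "benchmark" not in t.keywords]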

Member (Author):

I changed the current behaviour to be exclusive.

If the user specifies both --benchmark and --integration, it runs tests marked with both. If the user specifies only one of them, it runs only tests with that mark.

I prefer this behaviour since otherwise we run unit tests during all our integration test runs, which seems like duplication and a waste. It's also more straightforward to reason about, IMO.
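
Concretely, the merged logic reduces to the following per-test selection rule (a standalone helper written here purely for illustration; it is not part of the PR):

def is_selected(keywords: set, run_integration: bool, run_benchmark: bool) -> bool:
    # An enabled flag requires the corresponding mark;
    # a disabled flag excludes any test that carries it.
    return (
        run_integration == ("integration" in keywords)
        and run_benchmark == ("benchmark" in keywords)
    )


# pytest                           -> tests with neither mark (unit tests)
# pytest --integration             -> tests marked integration (and not benchmark)
# pytest --benchmark               -> tests marked benchmark (and not integration)
# pytest --integration --benchmark -> tests marked with both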

Member (Author):

Integration and benchmark tests are orthogonal because a benchmark test may also need access to resources on AWS/GCP, which requires the integration mark so that those resources can be set up correctly.
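
For example, a hypothetical benchmark that exercises a deployed online store would carry both marks, and under the merged logic it is collected only by pytest --integration --benchmark:

import pytest


@pytest.mark.integration
@pytest.mark.benchmark
def test_online_read_benchmark():
    # Needs cloud resources (hence integration) and measures
    # performance (hence benchmark); name and body are illustrative.
    ...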

+        items.clear()
+        for t in integration_tests:
+            items.append(t)
+
+    benchmark_tests = [t for t in items if "benchmark" in t.keywords]
+    if not should_run_benchmark:
+        for t in benchmark_tests:
+            items.remove(t)
+    else:
+        items.clear()
+        for t in benchmark_tests:
+            items.append(t)


 @pytest.fixture