Basic pytest conf for custom plugins #61
Conversation
Hey, I didn't find any PR with a POC for running tests, so I just finished one example of the configuration. It's still WIP, just a basic version we can discuss, whether we choose this approach or a different one. You can try to run:
Besides the cleanup of all unneeded things from the runner, LGTM
Force-pushed from f855518 to a9ea5ae
And moved deployment test case to pytest
Just a note about an edge case: this will work fine, but it won't be possible to use the configuration in parametrization code.
Verified deployment:

```
py.test -m deployment --cluster-conf conf/ocs_basic_install.yml --cluster-name pbalogh-pr61 --cluster-path /home/jenkins/cluster-dir-pr-61 --log-level info tests/
OUTPUT OMITTED
INFO tests.test_ocs_basic_install:test_ocs_basic_install.py:200 HEALTH_OK, install successful.
PASSED
=============================== 1 passed, 1 deselected in 2130.18 seconds ===============================
```
@petr-balogh is this still WIP, or can I start a review as if it's ready for merge?
@mbukatov can you elaborate on this a bit? I'm not sure we plan on heavily using parametrize, but we should discuss it if something we're doing here entirely blocks this feature of pytest for us.
@@ -0,0 +1,141 @@
"""
Would it make sense to place the plugins directory directly in the root of the repo?
I mean, which other pytest customization could we have here other than plugins?
I was also thinking about defining, for example, a markers module here, and defining all markers like:

```python
import pytest

tier1 = pytest.mark.tier1
tier2 = pytest.mark.tier2
```

Then from test modules we can import:

```python
from pytest_customization.markers import tier1

@tier1
def test_tier1():
    pass
```
I can't think of any others right now, because hooks are like plugins, but maybe we can separate them into a hooks package.
I would rather not do that. This is an additional level of abstraction, which would increase complexity a bit without any benefit. When I look at a @tier1
function annotation, I have no idea what it actually means. But looking at @pytest.mark.tier1
gives a clear indication that this is a pytest marker.
That said, maintaining a list of valid markers is not a bad idea. We can do that via registering markers, which can be combined with --strict-markers
to prevent using unregistered markers (it could be enabled by default in pytest.ini as well).
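As a rough sketch, such a registration could look like this (the marker names and descriptions here are illustrative assumptions, not the PR's actual pytest.ini):

```ini
# pytest.ini (sketch) -- register markers so strict mode rejects unregistered ones
[pytest]
addopts = --strict-markers
markers =
    tier1: basic acceptance tests
    tier2: extended tests
    deployment: cluster deployment tests
```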
I think the latest pytest automatically prints a warning when you use unregistered markers; I'm not sure whether it's just a warning or strict by default. I am already registering those markers in the pytest.ini file, as you can see @mbukatov .
And about the abstraction: we used it a lot in RHV and found it quite handy, because it allowed us to define something like this:
```python
import pytest

BEFORE_UPGRADE = 1
UPGRADE = 2
BEFORE_UPGRADE_HOSTS = 3
UPGRADE_HOSTS = 4
AFTER_UPGRADE_HOSTS = 5
UPGRADE_CL = 6
AFTER_UPGRADE_CL = 7
UPGRADE_DC = 8
AFTER_UPGRADE = 9

# Tests ordering
FIRST_TEST = AFTER_UPGRADE + 1
TEST = FIRST_TEST + 1
LAST_TEST = TEST + 1

# Polarion decorator
polarion = pytest.mark.polarion
# Bugzilla decorator
# https://github.com/rhevm-qe-automation/pytest_marker_bugzilla
bz = pytest.mark.bugzilla
# Jira decorator
# https://github.com/vkondula/pytest_jira/
jira = pytest.mark.jira
network = pytest.mark.network
sla = pytest.mark.sla
storage = pytest.mark.storage
coresystem = pytest.mark.coresystem
virt = pytest.mark.virt
integration = pytest.mark.integration
# used for tests that are not adjusted to GE or tests that we don't want to run
do_not_run = pytest.mark.do_not_run(value=17)
endless = pytest.mark.endless(value='endless')
upgrade = pytest.mark.upgrade(value='upgrade')
tier1 = pytest.mark.tier1(value=1)
tier2 = pytest.mark.tier2(value=2)
tier3 = pytest.mark.tier3(value=3)
tier4 = pytest.mark.tier4(value=4)
timeout = pytest.mark.timeout
storages = pytest.mark.storages
tier_markers = [tier1, tier2, tier3, tier4, upgrade, do_not_run]
team_markers = [network, sla, storage, coresystem, virt, upgrade]

# order markers for ordering tests
order_before_upgrade = pytest.mark.run(order=BEFORE_UPGRADE)
order_upgrade = pytest.mark.run(order=UPGRADE)
order_before_upgrade_hosts = pytest.mark.run(order=BEFORE_UPGRADE_HOSTS)
order_upgrade_hosts = pytest.mark.run(order=UPGRADE_HOSTS)
order_after_upgrade_hosts = pytest.mark.run(order=AFTER_UPGRADE_HOSTS)
order_upgrade_cluster = pytest.mark.run(order=UPGRADE_CL)
order_after_upgrade_cluster = pytest.mark.run(order=AFTER_UPGRADE_CL)
```
We had some plugins which, for example, added a default timeout for different tiers.
Also, thanks to value=X, we could easily tell the tier number of a test directly from the test.
Another usage was ordering tests for the pytest order plugin, as you can see in our order markers for upgrade tests.
So instead of people decorating the tests all the time with @pytest.mark.run(order=UPGRADE_CL), they just used @order_upgrade_cluster .
This is the advantage we got from markers defined like this, and I think people will know about those markers if we document them properly.
@RazTamir WDYT here?
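To illustrate how a plugin could consume the value=N kwarg (the function name, marker lookup key, and timeout numbers below are hypothetical, not from the RHV code):

```python
# Hypothetical sketch: derive a default timeout from a tier marker's value=N
# kwarg, instead of parsing the tier number out of the marker's name.
TIER_TIMEOUTS = {1: 600, 2: 1200, 3: 2400, 4: 4800}  # assumed seconds per tier


def timeout_for(item):
    """Return a default timeout for a collected pytest item, or None.

    item.get_closest_marker(name) is the pytest Item API; it also resolves
    markers applied on (or inherited through) a test class.
    """
    marker = item.get_closest_marker("tier")
    if marker is None:
        return None
    return TIER_TIMEOUTS.get(marker.kwargs.get("value"))
```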
@mbukatov value=1 for tier1 was used in some of our pytest plugins, which needed to consider the tier number. It was easier to take it from the value of a predefined marker that everyone used than to parse the tier number out of the marker name. Also, we had for example an upgrade tier, and as you can see its marker doesn't contain any number.
At some point we will get to a situation where we need to inject some values into markers, and I think having them defined in one place will make them easy to use.
If you know beforehand that you need to process marker values in plugins, it's easier to just use markers like @pytest.mark.tier(value=1).
@mbukatov yep, it's possible, but we faced an issue with inheritance and reading the real value of a marker's kwargs, as this wasn't easily possible at that time. But I think this issue has already been fixed in pytest.
Just found some related marker issues we faced in the past:
pytest-dev/pytest#2515
pytest-dev/pytest#3605
@mbukatov another thing which I've just realized, and which was the main point: you cannot call pytest -m "tier(value=1)", so that's why we have pytest -m tier1 and the team markers, for example.
@petr-balogh good point
I added some structure in the next commit. So please review once more @red-hat-storage/top-level-reviewers
I rechecked and the proposed solution seems to work. I overlooked the fact that you are actually just adding your own command line parameters and processing the config on your own (I was assuming that you were adding pytest config options ...). The use case I have in mind is this: we have some values in the config file, e.g.:
And we would like to run a test case parametrized by these values:
And as I already noted, this works fine:
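For illustration, the parametrization hook for such a use case could look roughly like this (CONFIG, the key names, and the test are hypothetical; the PR loads its config differently):

```python
# Hypothetical sketch: parametrize a test from values that were loaded out of
# the cluster config file (e.g. parsed from the --cluster-conf YAML at startup).
CONFIG = {"worker_nodes": ["node-0", "node-1", "node-2"]}


def pytest_generate_tests(metafunc):
    """Standard pytest hook: inject one test invocation per configured node."""
    if "worker_node" in metafunc.fixturenames:
        metafunc.parametrize("worker_node", CONFIG["worker_nodes"])


def test_node_is_named(worker_node):
    assert worker_node.startswith("node-")
```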
Yes, it's ready to review, and if it's OK we can merge. But a lot of things still have to be done for our move to pytest; this is just the start.
Looks good to me
Moving deployment test to a test class under the ecosystem team package
Adding basic teams test case classes
Adding marks for usage in our testing
Add one simple test that pytest is working
@@ -0,0 +1,214 @@
import json
FYI, this is basically the old test case with just a few things changed for the move to pytest. So please don't look for code improvements here that were already implemented like this in the original test.
LGTM.
Can you just try to execute it with @TIER1, for example, and also inherit from some base class to check this is working?
@petr-balogh I haven't seen the bugzilla configuration for the bz decorator. Shouldn't it be part of the scope here?
I don't know. I'm not a big fan of having the markers defined in 4 different places.
I made the basic functionality here; the rest needs to be done in separate PRs, I think.
You are talking about the added ocsci/__init__.py? This is for easier imports for users: @tier1, @manage. But if most of you don't like this approach we can omit it.
I submitted another commit with some documentation; let's discuss this at our meeting today in 1 hour.
Also added some documentation for new changes.
LGTM.
Thanks Petr
Overall LGTM. Still not sure I'm convinced we get a lot of benefit from importing the markers in ocsci/__init__.py for the added complexity, but it's not a blocker for me.
+1. I guess we would need to revamp the end result anyway, to polish the edges a bit.
@clacroix12 @RazTamir ok, just verified that the old runner is still working. Tried to run a test like:

```python
import logging
import pytest

from ocs import ocp
from ocs import defaults
from ocsci.config import ENV_DATA
from ocsci import run_this, EcosystemTest, tier1

logger = logging.getLogger(__name__)

OCP = ocp.OCP(
    kind='pods', namespace=defaults.ROOK_CLUSTER_NAMESPACE
)


@run_this
def test_not_run_me_fail_pass():
    logger.info("Hey from test which should pass")
    logger.info(
        "You can easily access data from ENV_DATA like cluster_name: %s",
        ENV_DATA['cluster_name']
    )
    assert 1 == 1, "This will not reach this message"


@tier1
class TestExampleClass(EcosystemTest):
    def test_example_method(self):
        logger.info("Hello from test method inside test class")
        pods = OCP.get()['items']
        number_of_pods = len(pods)
        logger.info(
            f"Found {number_of_pods} pods in namespace: "
            f"{defaults.ROOK_CLUSTER_NAMESPACE}"
        )
        for pod in pods:
            logger.info(f"Pod : {pod.metadata.name}")
        assert number_of_pods, "No pod exists!"
```

Ran with … So I think we can merge.