
PR #330: Multiple cov fail under flags (Fixes #323)

Closed

Conversation

graingert (Member) commented Aug 30, 2019:

Allows you to run `pytest --cov-fail-under=70 --cov-fail-under=100:test/**`.

Fixes #323.
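As a rough illustration of the spec syntax proposed in this PR (this is a hypothetical sketch, not the PR's actual parsing code; the function name and return shape are invented for the example):

```python
def parse_fail_under(spec):
    """Parse a --cov-fail-under spec such as '70', '100:test/**',
    '100:+test/*' or '33.33:-test/*' into (threshold, mode, pattern).
    mode is 'include' or 'omit'; pattern is None when the threshold
    applies to total coverage."""
    threshold, sep, rest = spec.partition(":")
    threshold = float(threshold.rstrip("%"))  # "%" suffix is optional
    if not sep:
        return threshold, "include", None    # bare number: total coverage
    if rest.startswith("-"):
        return threshold, "omit", rest[1:]   # "-" selects everything BUT the pattern
    return threshold, "include", rest.lstrip("+")  # "+" prefix is optional

print(parse_fail_under("70"))            # (70.0, 'include', None)
print(parse_fail_under("100:+test/*"))   # (100.0, 'include', 'test/*')
print(parse_fail_under("33.33:-test/*")) # (33.33, 'omit', 'test/*')
```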

@graingert graingert force-pushed the multiple-cov-fail-under-flags branch from 3d2581f to b3bd9e0 Compare August 30, 2019 18:17
Inline review comments on src/pytest_cov/engine.py and src/pytest_cov/plugin.py (resolved).
@graingert graingert force-pushed the multiple-cov-fail-under-flags branch 8 times, most recently from 40ac848 to c2846e2 Compare September 2, 2019 17:23
Review comment on test code:

    "Required test coverage of 100.0%:+test/* reached. Total coverage: 100.00%",
    "Required test coverage of 100.0%:+test/* reached. Total coverage: 100.00%",
    "Required test coverage of 33.33%:-test/* reached. Total coverage: 33.33%",
    ])
Member commented:

So, to understand what the arguments do - in this test it's like this, right?

  • overall coverage must be at least 55.55%
  • test coverage must be 100%
  • test coverage must be 100% (alternative syntax with "+")
  • overall coverage but without tests must be at least 33.33%

Also, if I understand it correctly, "%" is optional, and "+" as well, yes?

And --cov-fail-under=100:foo:bar means coverage must be 100% for foo and bar, right? It should be part of the test suite.

There are a few things that I think users will find confusing or produce undesired results:

  • the multiple syntaxes to obtain the same thing
  • the reporting discrepancy - you can set fail-under for something that you cannot see in the reporting - suddenly you can't know what to fix to make the suite pass

Do we really need the omit mode (the "-" syntax)?
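The include/omit semantics debated above can be sketched as follows (hypothetical per-file data and function names; this is not pytest-cov's actual implementation):

```python
from fnmatch import fnmatch

# Hypothetical per-file line counts: path -> (covered lines, total lines)
files = {
    "src/a.py": (1, 3),
    "test/test_a.py": (2, 2),
}

def percent(selected):
    """Coverage percentage over a selection of files."""
    covered = sum(c for c, _ in selected.values())
    total = sum(t for _, t in selected.values())
    return 100.0 * covered / total

def check(files, threshold, pattern=None, omit=False):
    """Does the selected subset of files meet the threshold?"""
    if pattern is None:
        selected = files                                                     # total coverage
    elif omit:
        selected = {p: v for p, v in files.items() if not fnmatch(p, pattern)}  # "-" mode
    else:
        selected = {p: v for p, v in files.items() if fnmatch(p, pattern)}      # "+" mode
    return percent(selected) >= threshold

print(check(files, 55.55))                      # total: 3/5 = 60% -> True
print(check(files, 100, "test/*"))              # tests only: 2/2 -> True
print(check(files, 33.33, "test/*", omit=True)) # everything but tests: 1/3 -> True
```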

graingert (Member Author) commented Sep 2, 2019:

> So, to understand what the arguments do - in this test it's like this, right?
>
> * overall coverage must be at least 55.55%
> * test coverage must be 100%
> * test coverage must be 100% (alternative syntax with "+")
> * overall coverage but without tests must be at least 33.33%
>
> Also, if I understand it correctly, "%" is optional, and "+" as well, yes?
>
> And --cov-fail-under=100:foo:bar means coverage must be 100% for foo and bar, right? It should be part of the test suite.
>
> There are a few things that I think users will find confusing or produce undesired results:
>
> * the multiple syntaxes to obtain the same thing

I don't think it's confusing. You get the same with +1 and 1.

> * the reporting discrepancy - you can set fail-under for something that you cannot see in the reporting - suddenly you can't know what to fix to make the suite pass

I don't believe you can.

> Do we really need the omit mode (the "-" syntax)?

Yes, so you can get an idea of coverage for code without tests. This is the coverage result people ignore the test suite for.

graingert (Member Author) commented:

Most people upgrading to this syntax will have ignored tests globally and will still want to assert on that number.

Member commented:

> Yes, so you can get an idea of coverage for code without tests. This is the coverage result people ignore the test suite for.

Most people have just one top-level package, thus omit is unnecessary.

> I don't believe you can.

The reporting shows percentages for individual files; it doesn't show any totals for packages, subpackages, and so on. This is why I don't think this is a great idea.

@nedbat do you have any input on this syntax? I don't want to have something in pytest-cov that will never ever be part of coveragepy.

graingert (Member Author) commented Sep 3, 2019:

> Yes, so you can get an idea of coverage for code without tests. This is the coverage result people ignore the test suite for.
>
> Most people have just one top-level package, thus omit is unnecessary.

That doesn't follow: "most people have just one top-level package" implies that some people do not have just one top-level package, and for them omit is necessary. Also, omit is needed for the http://doc.pytest.org/en/latest/goodpractices.html#tests-as-part-of-application-code layout.

> I don't believe you can.
>
> The reporting shows percentages for individual files; it doesn't show any totals for packages, subpackages, and so on. This is why I don't think this is a great idea.

No, the reporting lists a percentage for each --cov-fail-under.

> @nedbat do you have any input on this syntax? I don't want to have something in pytest-cov that will never ever be part of coveragepy.

This interface is already available in coveragepy: you just run "coverage report" with different include/omit overrides.
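The include/omit approach graingert mentions can be expressed with the existing coverage.py command line (a sketch, assuming a `.coverage` data file has already been produced, e.g. by a prior pytest --cov run):

```shell
# One data-collection pass, then several reporting passes,
# each with its own threshold and file selection:
coverage report --fail-under=70                       # total coverage
coverage report --fail-under=100 --include='test/*'   # tests only
coverage report --fail-under=33 --omit='test/*'       # everything but tests
# Each command exits non-zero when its threshold is not met.
```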

mjtorn commented Sep 10, 2019:

I'm also suffering from `pytest: error: argument --cov-fail-under: invalid int value: '54.20'` style errors.

Sadly this PR (at commit 7989cb6 at least) breaks pytest-xdist:

INTERNALERROR> DumpError: can't serialize <class 'pytest_cov.plugin.FailUnder'>

Uninstalling pytest-xdist and running a file works just fine:

FAIL Required test coverage of 54.21% not reached. Total coverage: 54.20%

Required test coverage of 54.1% reached. Total coverage: 54.20%

Of course in this day and age computers tend to have more than one core, so it would be super nice if FailUnder magically became serializable ;)

Thanks for taking on this work, though, much appreciated!
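The "invalid int value: '54.20'" error above is what argparse produces when an option is declared with an integer type; a minimal reproduction (a sketch of the symptom, not pytest-cov's actual code; `build_parser` is an invented helper):

```python
import argparse

def build_parser(value_type):
    # Hypothetical helper: declare --cov-fail-under with the given type.
    parser = argparse.ArgumentParser()
    parser.add_argument("--cov-fail-under", type=value_type)
    return parser

# type=int rejects a fractional threshold; argparse prints the
# "invalid int value" error and raises SystemExit:
try:
    build_parser(int).parse_args(["--cov-fail-under", "54.20"])
except SystemExit:
    print("argparse rejected '54.20' as an int")

# type=float accepts it:
args = build_parser(float).parse_args(["--cov-fail-under", "54.20"])
print(args.cov_fail_under)  # 54.2
```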

graingert (Member Author) commented:
@mjtorn do you have more of that stacktrace?

mjtorn commented Sep 10, 2019:

@graingert sure. I wasn't thinking it was necessary since `pip install pytest-xdist` should be enough to break it, but I think the below is good enough without giving any secrets away.

============================= test session starts ==============================
platform linux2 -- Python 2.7.15rc1, pytest-4.6.2, py-1.8.0, pluggy-0.12.0 -- REDACTED_VENV/bin/python2
cachedir: .pytest_cache
Django settings: REDACTED_PROJECT.test_settings (from command line option)
rootdir: REDACTED_ROOT, inifile: setup.cfg
plugins: forked-1.0.2, cov-2.7.1, django-3.5.0, xdist-1.28.0
gw0 I / gw1 I / gw2 I / gw3 I / gw4 I / gw5 I / gw6 I / gw7 I

[gw0] linux2 Python 2.7.15 cwd: REDACTED_ROOT
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/_pytest/main.py", line 204, in wrap_session
INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/pluggy/hooks.py", line 289, in __call__
INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/pluggy/manager.py", line 87, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/pluggy/manager.py", line 81, in <lambda>
INTERNALERROR>     firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/pluggy/callers.py", line 81, in get_result
INTERNALERROR>     _reraise(*ex)  # noqa
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/xdist/dsession.py", line 81, in pytest_sessionstart
INTERNALERROR>     nodes = self.nodemanager.setup_nodes(putevent=self.queue.put)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/xdist/workermanage.py", line 64, in setup_nodes
INTERNALERROR>     nodes.append(self.setup_node(spec, putevent))
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/xdist/workermanage.py", line 73, in setup_node
INTERNALERROR>     node.setup()
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/xdist/workermanage.py", line 254, in setup
INTERNALERROR>     self.channel.send((self.workerinput, args, option_dict, change_sys_path))
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 708, in send
INTERNALERROR>     self.gateway._send(Message.CHANNEL_DATA, self.id, dumps_internal(item))
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1345, in dumps_internal
INTERNALERROR>     return _Serializer().save(obj)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1363, in save
INTERNALERROR>     self._save(obj)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1381, in _save
INTERNALERROR>     dispatch(self, obj)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1466, in save_tuple
INTERNALERROR>     self._save(item)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1381, in _save
INTERNALERROR>     dispatch(self, obj)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1462, in save_dict
INTERNALERROR>     self._write_setitem(key, value)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1456, in _write_setitem
INTERNALERROR>     self._save(value)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1381, in _save
INTERNALERROR>     dispatch(self, obj)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1452, in save_list
INTERNALERROR>     self._write_setitem(i, item)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1456, in _write_setitem
INTERNALERROR>     self._save(value)
INTERNALERROR>   File "REDACTED_VENV/local/lib/python2.7/site-packages/execnet/gateway_base.py", line 1379, in _save
INTERNALERROR>     raise DumpError("can't serialize {}".format(tp))
INTERNALERROR> DumpError: can't serialize <class 'pytest_cov.plugin.FailUnder'>

graingert (Member Author) commented:

@mjtorn nice, thanks for that.

mjtorn commented Sep 10, 2019:

You're very welcome. As a workaround, I installed that one commit of yours and ran coverage reports without parallelization, which is acceptable for now, but I'll be keeping an eye on this PR, and hopefully a merge and a new version will arise :)

graingert (Member Author) commented Sep 10, 2019 via email.

graingert (Member Author) commented:

@mjtorn it looks like it's supposed to be fixed pytest-dev/pytest-xdist#384

mjtorn commented Sep 10, 2019:

Doesn't look like a win; I have 1.28.0 here, but maybe someone will react on that issue. Subscribed to its notifications, thanks :)

graingert (Member Author) commented:
@mjtorn try that

mjtorn commented Sep 11, 2019:

`pip install -U 'git+https://github.com/graingert/pytest-cov@94f5ff3d1ef4bc8fee33a1d3fa25478c332581d0#egg=pytest-cov'` (instead of 7989cb6)

`pytest -n auto -s -v --ds=REDACTED_PROJECT.test_settings --disable-warnings --cov-report=term-missing --cov-report=xml --cov=. --cov-branch` yielded a reasonable result!

Likewise `pytest -n auto -s -v --ds=REDACTED_PROJECT.test_settings --disable-warnings --cov-report=term-missing --cov-report=xml --cov=. --cov-branch REDACTED/test_REDACTED.py`


The weird thing here is that with both your commit and vanilla pytest-cov==2.7.1 I get subtly different coverage reports for the same commit of this code base!

I'll be away for quite a few days starting later today, so I don't know how much testing and reporting I can get done. The variations in the coverage reports are quite strange, and I have no proper explanation for them. It could be that whoever wrote some of these tests wrote them in a way that leaks state, affecting the code paths for different pytest-xdist workers. In that case it's harmless from a pytest point of view, but definitely something to investigate and fix.


To sum it up, your latest change feels nice. Are there many blockers for getting it merged? Issues I should be aware of?

nedbat (Collaborator) commented Sep 11, 2019:

As part of my proposal that pytest-cov should do less (#337), I think this shouldn't be merged. This entire feature can be a separate tool. It's got nothing to do with running tests.

graingert (Member Author) commented Sep 12, 2019:

@mjtorn

> The weird thing here is that with both your commit and vanilla pytest-cov==2.7.1 I get subtly different coverage reports for the same commit of this code base!

This might be due to term-missing mutating the cov config in pytest-cov==2.7.1

What's the subtle difference?

graingert (Member Author) commented:

@mjtorn does this have the same "subtly different coverage reports" #338 ?

mjtorn commented Sep 17, 2019:

@graingert I should be back on this tomorrow, but before going away, I got quite sure the problem was in the tests themselves. I'll let you know, but taking a quick look at #338 I have no idea what I should be looking out for. When do the configs mutate?

@nedbat I'll read through #337 later, but there are two severe bugs to fix: "fail the test if coverage is less than n.d" failing because n.d is not an integer (contrary to Coverage), and xdist not working. Anything beyond those bugs, if pytest-cov does too much, I don't care about. Also, if this doesn't get merged, I'll have to, what, manually exit Jenkins scripts with 1 instead of using a fixed-up previously-broken CLI API?-)

nedbat (Collaborator) commented Sep 17, 2019:

@mjtorn I haven't followed the failing tests here, so I'm not sure what the two severe bugs are. Are they written up as issues?

My main point in #337 is that pytest-cov should not be doing things that could be done by a shell script, or make, or tox. The UI is getting convoluted for no real gain.

mjtorn commented Sep 17, 2019:

@nedbat I did not write the issues up because it looked like they're being tackled quickly here.

Bug 1: Asking pytest-cov to fail if the coverage is beneath a threshold works only for integers. Quite severe!

Bug 2: xdist didn't work. Blocker-level.


The other fluff about indeterminate results is - with a guesstimated certainty of 99% - due to someone else's inexperience in writing isolated tests in an existing code base, and irrelevant to pytest, coverage, or any combination thereof.

graingert (Member Author) commented:

@mjtorn Bug 1 should be fixed in master already; master is also not affected by Bug 2.

nedbat (Collaborator) commented Sep 17, 2019:

@mjtorn Bug 1 is a great example of why pytest-cov shouldn't be doing all this. Coverage.py has allowed a float for fail-under since version 4.5, 2018-02-03. Why should we have to also update the UI of the plugin, just so it can pass it along to coverage.py? If you do your reporting as a separate explicit step using coverage.py directly, then you don't have to wait for two projects to update in order to use the feature. There's no reason for pytest-cov to be handling this value.
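The two-step workflow nedbat is advocating would look roughly like this (a sketch; `mypkg` is a placeholder package name):

```shell
# Collect data with pytest, then let coverage.py own the threshold check:
pytest --cov=mypkg
coverage report --fail-under=54.2   # float thresholds supported since coverage.py 4.5
```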

mjtorn commented Sep 18, 2019:

@graingert you're absolutely right. I got carried away finding everything under the sun except checking the master branch first ;)

@nedbat I'm happy to see that the master branch should work, so all the philosophizing is moot. I will say that I disagree with needing an extra step to deal with or work around something that's part of the documented CLI interface, only because it's buggy in the 2.7.1 release. I don't have time to think about where to draw the lines for this stuff, but software should be convenient to use, and implementing --cov-fail-under as a separate thing is not convenient. If that's what The Powers That Be ™️ want to do, I guess I'd be stuck with it. But all of this appears to be moot now.

Best of luck to figuring things out and I'll pop back in if I have the need to. Thanks!

@graingert graingert force-pushed the multiple-cov-fail-under-flags branch 2 times, most recently from 0fdddb4 to 3f24c0a Compare January 8, 2020 10:06
nedbat (Collaborator) commented Jan 8, 2020:

@graingert I'll reiterate my strong belief that it doesn't make sense for pytest-cov to be doing this. Pytest should run tests and report on whether they pass or fail. Coverage reporting commands can deal with coverage reporting.

@graingert graingert force-pushed the multiple-cov-fail-under-flags branch from 3f24c0a to 9183ae5 Compare January 8, 2020 12:10
graingert (Member Author) commented:

@nedbat I couldn't disagree more. However, I wouldn't object to moving the summary method into the coverage.Coverage class and implementing multiple reporting there.

@graingert graingert force-pushed the multiple-cov-fail-under-flags branch from 9183ae5 to 0a0be4d Compare January 8, 2020 13:17
nedbat (Collaborator) commented Jan 8, 2020:

@graingert We should come to an agreement. This issue (#337) had overwhelming support for removing reporting features from pytest-cov. There's no reason for it to implement them. Whatever people are using to run pytest, they can use that thing to also run coverage reporting commands afterward.

This just adds needless complexity and awkward interfaces.

pytest-cov's job should be integrating pytest with coverage where that integration is needed. It should not be a one-stop shop for everything people want to do with coverage.

ssbarnea (Member) commented:
Closing this as it has not been touched in more than two years, so I doubt it would suddenly become something ready for review. Feel free to reopen if you disagree.

Doing this to clean up the queue, so we can focus on reviewing things that are ready for review and merge.

ssbarnea closed this Jul 21, 2022.