Fix subtests #178

Merged
4 commits merged on Nov 27, 2018
29 changes: 29 additions & 0 deletions tests/testsuite.py
@@ -142,6 +142,11 @@ def test_subTest_fail(self):
                 with self.subTest(i=i):
                     self.fail('this is a subtest.')

+        def test_subTest_error(self):
+            for i in range(2):
+                with self.subTest(i=i):
+                    raise Exception('this is a subtest')
+
     class DummyErrorInCallTest(unittest.TestCase):

         def __call__(self, result):
@@ -411,6 +416,30 @@ def test_unittest_subTest_fail(self):
             br'(XMLTestRunnerTestCase\.)?DummySubTest" '
             br'name="test_subTest_fail \(i=1\)"')

+    @unittest.skipIf(not hasattr(unittest.TestCase, 'subTest'),
+                     'unittest.TestCase.subTest not present.')
+    def test_unittest_subTest_error(self):
+        # test for issue #155
+        outdir = BytesIO()
+        runner = xmlrunner.XMLTestRunner(
+            stream=self.stream, output=outdir, verbosity=self.verbosity,
+            **self.runner_kwargs)
+        suite = unittest.TestSuite()
+        suite.addTest(self.DummySubTest('test_subTest_error'))
+        runner.run(suite)
+        outdir.seek(0)
+        output = outdir.read()
+        self.assertRegexpMatches(
+            output,
+            br'<testcase classname="tests\.testsuite\.'
+            br'(XMLTestRunnerTestCase\.)?DummySubTest" '
+            br'name="test_subTest_error \(i=0\)"')
+        self.assertRegexpMatches(
+            output,
+            br'<testcase classname="tests\.testsuite\.'
+            br'(XMLTestRunnerTestCase\.)?DummySubTest" '
+            br'name="test_subTest_error \(i=1\)"')
+
     @unittest.skipIf(not hasattr(unittest.TestCase, 'subTest'),
                      'unittest.TestCase.subTest not present.')
     def test_unittest_subTest_pass(self):
60 changes: 56 additions & 4 deletions xmlrunner/result.py
@@ -156,8 +156,10 @@ def __init__(self, test_result, test_method, outcome=SUCCESS, err=None, subTest=

         self.test_name = testcase_name(test_method)
         self.test_id = test_method.id()
+        self.subDescription = None
@bbartholomew-om1 commented on Nov 25, 2018:
Should the addSubTest method also be updated to handle cases where the subtest succeeded? This can be detected when no err is provided to the addSubTest method.

Reply:
From the docs for addSubTest(test, subtest, outcome):

> If outcome is None, the subtest succeeded

Link: https://docs.python.org/3.4/library/unittest.html?highlight=unittest#unittest.TestResult.addSubTest
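For context, a minimal runnable sketch (not part of this PR; OutcomeLoggingResult and Demo are made-up names) of how a TestResult subclass sees the three possible subtest outcomes through this hook: err is None on success, an exc_info tuple whose type is the test's failureException on failure, and any other exc_info tuple on error.

```python
import unittest


class OutcomeLoggingResult(unittest.TestResult):
    def addSubTest(self, test, subtest, err):
        # err is None when the subtest succeeded; otherwise it is an
        # exc_info tuple, and failureException separates failures from errors.
        if err is None:
            outcome = 'success'
        elif issubclass(err[0], test.failureException):
            outcome = 'failure'
        else:
            outcome = 'error'
        print('{}: {}'.format(subtest.id(), outcome))
        super(OutcomeLoggingResult, self).addSubTest(test, subtest, err)


class Demo(unittest.TestCase):
    def test_subtests(self):
        for i in range(3):
            with self.subTest(i=i):
                if i == 1:
                    self.fail('a failure')
                if i == 2:
                    raise ValueError('an error')


if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(Demo)
    suite.run(OutcomeLoggingResult())
```

Running it prints one line per subtest: success for i=0, failure for i=1, error for i=2.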

Reply (Contributor Author):
I don't know, but with my changes the successful subtest does not get reported as failed or anything (or so I think).

Reply:
My question was whether this method should also be updated to include successful subtests in the result XML. The docs indicate that unittest does nothing by default, but I think the XML runner might want to do something different in this case.

Since subtest successes aren't currently tracked, the reports show a variable number of tests executed depending on subtest success or failure. Additionally, as it stands, it is not possible to track the history of a subtest, since every success is seen as a no-run for that subtest.

Reply (Member):
> The docs indicate that unittest does nothing by default, but I think the XML runner might want to do something different in this case.

But why? That may very well confuse people; we could make this opt-in via a flag instead. Generators make subtests so easy to produce that success tracking may not be relevant for everybody, and the set of subtests may differ on every run if fuzzing is involved.

I'm merging as-is. @bbartholomew-om1, if you want to work on this additional piece I can look at it.

Reply:
The reasoning I was thinking through is the second paragraph above, though I may not have articulated it well. The test history, reliability rate, and time taken for subtests are less meaningful unless successes are included in the report. A subtest might typically fail in 3 ms but take 32 seconds when it succeeds; not recording the successes can mask test-infrastructure issues and makes test analysis more difficult.

Another way to look at this is that a subtest success should be treated like a test success. In the current implementation, test successes are considered interesting and hence included in the final report. Why would subtest successes be different? unittest does nothing with addSuccess either, but this library overrides that method to add the result to the XML.

I'm not sure I understand your point about confusing people; could you elaborate? I'm also not sure how generators making subtests easy is relevant to that confusion. Agreed that a flag would simplify integration and avoid backwards-compatibility problems and confusion.

I can open an issue to get feedback from others if that would be more useful, though based on my understanding of how subtests are used within unittest, I'd expect the framework to treat their results the same as regular test results.
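To make the proposal concrete, a rough sketch (not implemented in this PR; SubTestSuccessResult is a made-up name) of what recording successful subtests might look like. It assumes the successes list, infoclass, _save_output_data and _prepare_callback helpers that xmlrunner's addSuccess already uses:

```python
from xmlrunner.result import _XMLTestResult


class SubTestSuccessResult(_XMLTestResult):
    def addSubTest(self, testcase, test, err):
        if err is None:
            # Proposed behaviour: report a passing subtest the same way
            # addSuccess reports a passing test, so the XML keeps a stable
            # number of test cases across runs.
            self._save_output_data()
            testinfo = self.infoclass(
                self, testcase, self.infoclass.SUCCESS, subTest=test)
            self.successes.append(testinfo)
            self._prepare_callback(testinfo, [], 'OK', '.')
        else:
            # Failures and errors are handled by the code added in this PR.
            super(SubTestSuccessResult, self).addSubTest(testcase, test, err)
```

Gating this behind an opt-in runner flag, as suggested above, would keep the current report shape for existing users.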

         if subTest:
             self.test_id = subTest.id()
+            self.subDescription = subTest._subDescription()

Comment:
Should this update test_description to be that of the subtest? That would allow the existing description function to work as-is once the subtest information is stored in the test result.

Reply:
i.e. `self.test_result.getDescription(subTest)`

Reply (Contributor Author):
I don't really understand what you mean... I did not find any getDescription definition in this repo?

Reply:
It's not in this repo; it's from the unittest base class that this inherits from. You can see its use a few lines above this.

Reply:
Instead of getting the description for the testcase, as we are currently doing, we can get the description for the subTest. The default behavior of unittest is to include the test description in the subtest.
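For illustration only (not from the diff; DescriptionProbe and DemoTest are made-up names), a small script that prints, for the same subtest, the suffix this PR stores and the description unittest itself builds, which already embeds the parent test:

```python
import unittest


class DescriptionProbe(unittest.TextTestResult):
    def addSubTest(self, test, subtest, err):
        # What the PR stores on _TestInfo: only the parameter suffix,
        # e.g. "(i=0)" (note that _subDescription() is a private unittest API).
        print('subTest._subDescription():', subtest._subDescription())
        # What the reviewer suggests: the description built by unittest,
        # which already includes the parent test case.
        print('getDescription(subTest):  ', self.getDescription(subtest))
        super(DescriptionProbe, self).addSubTest(test, subtest, err)


class DemoTest(unittest.TestCase):
    def test_it(self):
        with self.subTest(i=0):
            self.fail('demo failure')


if __name__ == '__main__':
    runner = unittest.TextTestRunner(resultclass=DescriptionProbe,
                                     descriptions=False, verbosity=0)
    runner.run(unittest.TestLoader().loadTestsFromTestCase(DemoTest))
```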


     def id(self):
         return self.test_id
@@ -174,7 +176,12 @@ def get_description(self):
         """
         Return a text representation of the test method.
         """
-        return self.test_description
+        description = self.test_description
+
+        if self.subDescription is not None:
+            description += ' ' + self.subDescription
+
+        return description

     def get_error_info(self):
         """
@@ -337,14 +344,29 @@ def addSubTest(self, testcase, test, err):
         Called when a subTest method raises an error.
         """
         if err is not None:
+
+            errorText = None
+            errorValue = None
+            errorList = None
+            if issubclass(err[0], test.failureException):
+                errorText = 'FAIL'
+                errorValue = self.infoclass.FAILURE
+                errorList = self.failures
+
+            else:
+                errorText = 'ERROR'
+                errorValue = self.infoclass.ERROR
+                errorList = self.errors
+
             self._save_output_data()
+
             testinfo = self.infoclass(
-                self, testcase, self.infoclass.ERROR, err, subTest=test)
-            self.errors.append((
+                self, testcase, errorValue, err, subTest=test)
+            errorList.append((
                 testinfo,
                 self._exc_info_to_string(err, testcase)
             ))
-            self._prepare_callback(testinfo, [], 'ERROR', 'E')
+            self._prepare_callback(testinfo, [], errorText, errorText[0])

     def addSkip(self, test, reason):
         """
@@ -356,6 +378,36 @@ def addSkip(self, test, reason):
         self.skipped.append((testinfo, reason))
         self._prepare_callback(testinfo, [], 'SKIP', 'S')

+    def addExpectedFailure(self, test, err):
+        """
+        Missing in xmlrunner, copy-pasted from xmlrunner addError.
+        """
+        self._save_output_data()
+
+        testinfo = self.infoclass(self, test, self.infoclass.ERROR, err)
+        testinfo.test_exception_name = 'ExpectedFailure'
+        testinfo.test_exception_message = 'EXPECTED FAILURE: {}'.format(testinfo.test_exception_message)
+
+        self.expectedFailures.append((testinfo, self._exc_info_to_string(err, test)))
+        self._prepare_callback(testinfo, [], 'EXPECTED FAILURE', 'X')
+
+    @failfast
+    def addUnexpectedSuccess(self, test):
+        """
+        Missing in xmlrunner, copy-pasted from xmlrunner addSuccess.
+        """
+        self._save_output_data()
+
+        testinfo = self.infoclass(self, test)  # do not set outcome here because it will need exception
+        testinfo.outcome = self.infoclass.ERROR
+        # But since we want to have error outcome, we need to provide additional fields:
+        testinfo.test_exception_name = 'UnexpectedSuccess'
+        testinfo.test_exception_message = ('UNEXPECTED SUCCESS: This test was marked as expected failure but passed, '
+                                           'please review it')
+
+        self.unexpectedSuccesses.append(testinfo)
+        self._prepare_callback(testinfo, [], 'UNEXPECTED SUCCESS', 'U')
+
     def printErrorList(self, flavour, errors):
         """
         Writes information about the FAIL or ERROR to the stream.
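Finally, a usage sketch (not from the diff; ExpectedFailureDemo is a made-up name) of the two paths the new handlers cover: a test decorated with unittest.expectedFailure that fails triggers addExpectedFailure, and one that unexpectedly passes triggers addUnexpectedSuccess, so with these additions both now go through the same infoclass and callback machinery as other outcomes:

```python
import io
import unittest

import xmlrunner


class ExpectedFailureDemo(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        self.assertEqual(1, 2)   # fails as expected -> addExpectedFailure

    @unittest.expectedFailure
    def test_fixed_bug(self):
        self.assertEqual(1, 1)   # passes unexpectedly -> addUnexpectedSuccess


if __name__ == '__main__':
    report = io.BytesIO()
    runner = xmlrunner.XMLTestRunner(output=report, verbosity=2)
    runner.run(unittest.TestLoader().loadTestsFromTestCase(ExpectedFailureDemo))
    print(report.getvalue().decode('utf-8'))
```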