Fix subtests #178
Changes from all commits: 8ea7554, 13da670, 47a7878, a1ed8c4
```diff
@@ -156,8 +156,10 @@ def __init__(self, test_result, test_method, outcome=SUCCESS, err=None, subTest=
         self.test_name = testcase_name(test_method)
         self.test_id = test_method.id()
+        self.subDescription = None
+        if subTest:
+            self.test_id = subTest.id()
+            self.subDescription = subTest._subDescription()

     def id(self):
         return self.test_id
```

Review thread on `subTest._subDescription()`:

Reviewer: should this update the `getDescription` as well?

Reviewer: i.e.: […]

Author: I don't really understand what you mean... I did not find any `getDescription` definition in this repo?

Reviewer: It's not in this repo - it's from the unittest base class which this inherits from. You can see its use a few lines above this.

Reviewer: Instead of getting the description for the testcase, as we are currently doing, we can get the description for the subTest. The default behavior of […]
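For context on the thread above, `_SubTest` is unittest's internal wrapper for a `with self.subTest(...)` block; its `id()` and `_subDescription()` are built from the subtest message and parameters. A minimal illustration (the output format is approximate and varies across Python versions):

```python
import unittest

class Demo(unittest.TestCase):
    def test_numbers(self):
        # Each iteration runs as its own subtest, carrying its parameters.
        for i in range(3):
            with self.subTest(i=i):
                self.assertLess(i, 2)

# For the failing iteration, the _SubTest object handed to addSubTest
# reports roughly:
#   subTest.id()              -> 'Demo.test_numbers (i=2)'
#   subTest._subDescription() -> '(i=2)'
```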
```diff
@@ -174,7 +176,12 @@ def get_description(self):
         """
         Return a text representation of the test method.
         """
-        return self.test_description
+        description = self.test_description
+
+        if self.subDescription is not None:
+            description += ' ' + self.subDescription
+
+        return description

     def get_error_info(self):
         """
```
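A hypothetical sketch of the effect (the names `result`, `test_method`, and `subtest` stand in for whatever `addSubTest` received; the constructor signature comes from the first hunk):

```python
# With subDescription set, the subtest parameters are appended to the
# regular test description in the report.
info = result.infoclass(result, test_method, subTest=subtest)
info.get_description()  # e.g. 'test_numbers (Demo) (i=2)'; format approximate
```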
```diff
@@ -337,14 +344,29 @@ def addSubTest(self, testcase, test, err):
         Called when a subTest method raises an error.
         """
         if err is not None:
+
+            errorText = None
+            errorValue = None
+            errorList = None
+            if issubclass(err[0], test.failureException):
+                errorText = 'FAIL'
+                errorValue = self.infoclass.FAILURE
+                errorList = self.failures
+
+            else:
+                errorText = 'ERROR'
+                errorValue = self.infoclass.ERROR
+                errorList = self.errors
+
             self._save_output_data()
+
             testinfo = self.infoclass(
-                self, testcase, self.infoclass.ERROR, err, subTest=test)
-            self.errors.append((
+                self, testcase, errorValue, err, subTest=test)
+            errorList.append((
                 testinfo,
                 self._exc_info_to_string(err, testcase)
             ))
-            self._prepare_callback(testinfo, [], 'ERROR', 'E')
+            self._prepare_callback(testinfo, [], errorText, errorText[0])

     def addSkip(self, test, reason):
         """
```
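The new branch mirrors unittest's own failure/error split: `err` is a `sys.exc_info()` triple, so `err[0]` is the exception type, and `test.failureException` is `AssertionError` by default. A minimal test that drives both paths:

```python
import unittest

class Demo(unittest.TestCase):
    def test_both_paths(self):
        with self.subTest('assertion'):
            self.assertEqual(1, 2)    # AssertionError -> 'FAIL', goes to self.failures
        with self.subTest('crash'):
            raise ValueError('boom')  # anything else -> 'ERROR', goes to self.errors
```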
```diff
@@ -356,6 +378,36 @@ def addSkip(self, test, reason):
         self.skipped.append((testinfo, reason))
         self._prepare_callback(testinfo, [], 'SKIP', 'S')

+    def addExpectedFailure(self, test, err):
+        """
+        Missing in xmlrunner, copy-pasted from xmlrunner addError.
+        """
+        self._save_output_data()
+
+        testinfo = self.infoclass(self, test, self.infoclass.ERROR, err)
+        testinfo.test_exception_name = 'ExpectedFailure'
+        testinfo.test_exception_message = 'EXPECTED FAILURE: {}'.format(testinfo.test_exception_message)
+
+        self.expectedFailures.append((testinfo, self._exc_info_to_string(err, test)))
+        self._prepare_callback(testinfo, [], 'EXPECTED FAILURE', 'X')
+
+    @failfast
+    def addUnexpectedSuccess(self, test):
+        """
+        Missing in xmlrunner, copy-pasted from xmlrunner addSuccess.
+        """
+        self._save_output_data()
+
+        testinfo = self.infoclass(self, test)  # do not set outcome here because it will need exception
+        testinfo.outcome = self.infoclass.ERROR
+        # But since we want to have error outcome, we need to provide additional fields:
+        testinfo.test_exception_name = 'UnexpectedSuccess'
+        testinfo.test_exception_message = ('UNEXPECTED SUCCESS: This test was marked as expected failure but passed, '
+                                           'please review it')
+
+        self.unexpectedSuccesses.append(testinfo)
+        self._prepare_callback(testinfo, [], 'UNEXPECTED SUCCESS', 'U')
+
     def printErrorList(self, flavour, errors):
         """
         Writes information about the FAIL or ERROR to the stream.
```
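Both hooks fire for tests decorated with `unittest.expectedFailure`; a minimal pair exercising the two new methods:

```python
import unittest

class Demo(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        self.assertEqual(1, 2)  # fails as expected -> addExpectedFailure ('X')

    @unittest.expectedFailure
    def test_fixed_bug(self):
        self.assertEqual(1, 1)  # passes unexpectedly -> addUnexpectedSuccess ('U')
```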
Reviewer: Should the `addSubTest` method also be updated to handle cases where the subtest succeeded? This can be detected when no `err` is provided to the `addSubTest` method.
Reviewer: From the docs for `addSubTest(test, subtest, outcome)` (which note that `outcome` is `None` when the subtest succeeded): https://docs.python.org/3.4/library/unittest.html?highlight=unittest#unittest.TestResult.addSubTest
Author: I don't know, but with my changes the successful subtest does not get reported as failed or anything (or so I think).
Reviewer: My question was whether this method should also be updated to include successful subtests in the result XML. The docs indicate that unittest does nothing by default, but I think the XML runner might want to do something different in this case.

Since subtest successes aren't currently tracked, the reports show a variable number of tests executed depending on subtest success or failure. Additionally, as is, it is not possible to track the history of subtests, since every success is seen as a no-run for the subtest.
Maintainer: But why? That may very well confuse people; we could make this opt-in via a flag instead. Generators can make subtests so easy to produce that this may not be relevant for everybody, and the set of subtests may differ on every run if fuzzing is involved.

I'm merging as-is. @bbartholomew-om1, if you want to work on this additional piece, I can look at it.
Reviewer: The reasoning I was thinking through was the second paragraph above, though I may not have articulated it well. The test history, reliability rate, and time taken for subtests are less meaningful unless successes are included in the report. A subtest might typically fail in 3 ms but take 32 seconds when succeeding; not recording the successes may mask test-infrastructure issues and makes test analysis more difficult.

Another way to look at it is that a subtest success should be treated like a test success. In the current implementation, test successes are considered interesting and hence included in the final report; why would subtest successes be different? unittest does nothing with `addSuccess` either, but this library overloads that hook to add the result to the XML.

I'm not sure I understand your point about confusing others; could you elaborate? I'm also not sure how generators making subtests easy is relevant to that confusion. Agreed that using a flag would simplify integration and avoid backward-compatibility problems and confusion.

I can open an issue to get feedback from others if that would be more useful, though based on my understanding of how subtests are used within unittest, I'd expect the framework to treat the results the same as a regular test.
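A minimal sketch of the opt-in flag floated above. It is hypothetical: `report_subtest_successes` is not part of this PR, and it assumes the result class keeps a `successes` list the way its `addSuccess` handling implies:

```python
def addSubTest(self, testcase, test, err):
    """
    Hypothetical extension: also record successful subtests when an
    opt-in flag is set, so report totals stay stable across runs.
    """
    if err is None:
        if getattr(self, 'report_subtest_successes', False):  # assumed flag
            self._save_output_data()
            testinfo = self.infoclass(self, testcase, subTest=test)
            self.successes.append(testinfo)
            self._prepare_callback(testinfo, [], 'OK', '.')
        return
    # ... otherwise, the error/failure handling from this PR ...
```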