
Robustify tests of iterable_subprocess #614

Merged
2 commits merged into datalad:main from christian-monch:fix-iterable-subprocess-test on Feb 2, 2024

Conversation

@christian-monch (Contributor) commented Feb 2, 2024

This PR modifies the test `datalad_next.iterable_subprocess.test_iterable_subprocess.test_success_returncode_available_from_generator_with_exception` to simply check for a non-None return code instead of checking for a `0` return code.

The reason is that a `StopIteration` exception that occurs before the subprocess has terminated may lead to a non-zero return code and thus fail the test. Some environments can trigger such an early exception, e.g. due to an unexpected stdout configuration. In other words, we cannot guarantee a `0` return code in all environments, but we can guarantee that a return code is available when the `iterable_subprocess` context is exited with an exception.

The test is renamed to `test_returncode_available_from_generator_with_exception`.

This should make the test robust enough to prevent this test failure.

@christian-monch requested a review from mih as a code owner on February 2, 2024 09:44
This commit modifies the test
`test_success_returncode_available_from_generator_with_exception`
to accept `0` and `-15` return codes. The reason is that
any `StopIteration` exception that occurs before the
subprocess has exited will lead to the subprocess being
terminated and to a `-15` return code.
There seem to be system configurations in which this
situation occurs.

The test is renamed to
`test_returncode_available_from_generator_with_exception`.
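
As an illustration of the two assertion shapes discussed above, here is a minimal sketch that uses only Python's standard library; `run_to_stop_iteration` is a hypothetical stand-in for the real test body that drives the `iterable_subprocess` generator, not code from `datalad_next`:

```python
import subprocess

def run_to_stop_iteration():
    # Hypothetical stand-in for the real test body, which would drive the
    # iterable_subprocess generator until StopIteration is raised. Here we
    # simply run `echo a` to completion to obtain a finished process.
    proc = subprocess.Popen(['echo', 'a'], stdout=subprocess.DEVNULL)
    proc.wait()
    return proc

proc = run_to_stop_iteration()

# Original assertion: only a clean exit was accepted.
# assert proc.returncode == 0

# Relaxed assertions discussed in this PR:
assert proc.returncode is not None   # a return code is always available
assert proc.returncode in (0, -15)   # clean exit, or terminated via SIGTERM
```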
@christian-monch force-pushed the fix-iterable-subprocess-test branch from 03ec8c5 to b83c101 on February 2, 2024 10:08

codecov bot commented Feb 2, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (3d0c8a4) 92.99% compared to head (d1fc39f) 92.99%.
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #614   +/-   ##
=======================================
  Coverage   92.99%   92.99%           
=======================================
  Files         160      160           
  Lines       11863    11863           
  Branches     1795     1795           
=======================================
  Hits        11032    11032           
- Misses        640      642    +2     
+ Partials      191      189    -2     


@mih (Member) left a comment

That rationale is sound. I wonder, though, shouldn't the implementation follow that rationale more closely? Now we are expecting one of two possible return values, rather than any.

Is there a reason not to test for `is not None`?

@christian-monch (Contributor, Author) commented Feb 2, 2024

> That rationale is sound. I wonder, though, shouldn't the implementation follow that rationale more closely? Now we are expecting one of two possible return values, rather than any.
>
> Is there a reason not to test for `is not None`?

I used that comparison before, but I think this case is already covered by the test `test_error_returncode_available_from_generator_with_exception`. We would not gain new information from the modified test and might as well delete it then.

I think there are only two possible values to assert. On a Linux system, any exception raised before the subprocess has exited will lead to a `-15` return code. If `StopIteration` is raised, the subprocess has either already terminated, which results in a `0` return code, or it is still running and will therefore be terminated, which results in a `-15` return code. Any exception other than `StopIteration`, e.g. a `CommandError` because `echo` could not be found, would lead to an early exit from the test before the assert statement is reached.

In conclusion, I would only expect return codes `0` and `-15` at the assert statement, plus uncaught exceptions that fail the test before the assert statement is reached. Having said that, unconventional setups, e.g. an `echo` that always exits with return code 1, might not match this expectation. I see the following options:

  1. Leave the test as is.
  2. Delete the test.

WDYT?
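
To illustrate the `-15` reasoning above, a minimal sketch assuming Linux/POSIX semantics of Python's `subprocess` module (this is not the actual test code):

```python
import signal
import subprocess

# Start a child that would keep running, then terminate it early. This
# mirrors what happens when the consuming generator raises before the
# subprocess has exited on its own.
proc = subprocess.Popen(['sleep', '60'])
proc.terminate()   # sends SIGTERM on POSIX
proc.wait()

# On Linux, a child killed by a signal reports the negated signal number,
# so a SIGTERM'ed subprocess ends up with return code -15.
assert proc.returncode == -signal.SIGTERM   # i.e. -15
```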

mih previously approved these changes on Feb 2, 2024

@mih (Member) left a comment
Thanks for the explanation. Let's run with it then. I added a suggestion to include the explanation in the test code.

Co-authored-by: Michael Hanke <michael.hanke@gmail.com>
@mih merged commit 8f9650f into datalad:main on Feb 2, 2024
7 of 8 checks passed
@mih modified the milestones: 1.1.1, 1.2 on Feb 2, 2024
@christian-monch deleted the fix-iterable-subprocess-test branch on February 2, 2024 13:31