Test suite ignores tests after an exit(0). #33954

Closed
cristianoliveira opened this issue May 29, 2016 · 7 comments
Comments

@cristianoliveira (Contributor) commented May 29, 2016

I was dealing with this issue (tests that do not run) for a while before I figured out that it was caused by an exit(0) that had been added in the middle of a method.

The problem occurs when some method calls exit(0). The test suite then ignores failures in the subsequent tests, even tests in other modules, depending on the order of execution.

Example of erroneous code:

use std::process::exit;

fn main() {
    println!("Hello, world!");
}

fn die() { // name is just an example
    exit(0)
}

#[test]
fn it_works() {
    assert!(true);
}

#[test]
fn it_does_not_run() {
    assert!(false);
}

#[test]
#[should_panic]
fn it_dies() {
    die();
}

// another file.
mod other {
    #[test]
    fn it_may_run() {
        assert!(false);
    }
}

I hoped that even if the it_dies test did not run properly, the other tests would run without being affected by this problematic one.

Instead, I got strange behavior where the remaining tests were silently skipped, while the suite still reported the total number of tests that were supposed to be performed.

I know that dealing with exit(0) in tests may be difficult, but I would hope that the test suite either runs all the tests, shows some error/warning, or offers an option to expect it, like a #[should_exit] attribute.

Btw, you are doing a great job guys!

@Aatch (Contributor) commented May 30, 2016

I'm not sure this is fixable, at least not without imposing a significant performance cost on everyone who doesn't need to handle this. All the tests are run in a single process. Spread across multiple threads, yes, but still a single process. Handling this would require spawning each test in its own process.

This is probably better handled using a different test method that isn't the integrated one, which Cargo supports.
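
One way to approximate that per-process isolation for an individual test is to have the test re-run its own binary as a child process: only the child ever calls exit(0), and the parent asserts on the child's exit status. A rough sketch using only the standard library (the environment variable name and test name here are purely illustrative, not an established convention):

use std::env;
use std::process::{exit, Command};

fn die() {
    exit(0)
}

#[test]
fn it_dies_in_a_child() {
    if env::var("RUN_AS_DEATH_TEST").is_ok() {
        // Child process: run the code that terminates the process.
        die();
    }
    // Parent process: re-run this test binary filtered to this one test,
    // with the marker variable set, and assert on the child's exit status.
    let exe = env::current_exe().expect("could not locate the test binary");
    let status = Command::new(exe)
        .arg("it_dies_in_a_child")
        .env("RUN_AS_DEATH_TEST", "1")
        .status()
        .expect("failed to spawn the child test process");
    // die() calls exit(0), so the child is expected to exit successfully;
    // the parent's assertion is what the harness actually reports.
    assert!(status.success());
}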

@nagisa (Member) commented May 30, 2016

I would like to provide a counter-example showing why this is not fixable:

fn die() {
    power_off_the_machine() // does what it says
}

#[test]
#[should_panic]
fn it_dies() {
    die(); // after this test runs, no other tests get run, nor is a summary printed,
           // and the machine even gets shut down!
}

This is exactly what mocking in tests was invented for. In this particular case, you could mock the call like this:

#[cfg(test)]
fn exit(_code: i32) -> ! { panic!("exit") }

#[cfg(not(test))]
use std::process::exit;

or perhaps by intercepting the syscalls the Rust standard library makes, or any of the other million ways to do it.
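
For completeness, a rough sketch of how that cfg-gated mock slots into the original example: the test-only stub keeps std::process::exit's signature, so die() still compiles, the existing #[should_panic] test passes, and the remaining tests report their real results instead of being skipped.

#[cfg(test)]
fn exit(_code: i32) -> ! {
    // Test-only stand-in: panic instead of ending the whole test process.
    panic!("exit")
}

#[cfg(not(test))]
use std::process::exit;

fn die() {
    exit(0)
}

#[test]
#[should_panic]
fn it_dies() {
    die();
}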

I will close this as not actionable. It is not the test framework’s job to prevent side-effectful functions from launching rockets.

@nagisa closed this as completed on May 30, 2016
@cristianoliveira (Contributor, Author) commented May 30, 2016

The problem is not about mocking the exit method (because I didn't know that exit(0) was there). My concern was the lack of any response from the test suite. I would like to see some error, or even a result like "Program exited on test 'it_dies'". Should I mock the exit method for all tests?

For example, with exit(1) the suite at least shows that there was an error.

@nagisa (Member) commented May 30, 2016

I would like to see some error, or even a result like "Program exited on test 'it_dies'".

I’m pretty sure most (sane) unit testing frameworks will behave similarly. exit(0) is a side effect, and naively testing side-effectful code is incorrect. One shouldn’t ever construct a test that looks like #[test] fn some_test() { something_that_has_side_effect() } in the first place.

At the risk of repeating myself: if you expect the test framework to notify you about exit(0) getting called, do you also expect it to notify you about an attempt to power off the machine? What about launching a nuke and potentially causing the next big war? Unleashing a very potent pathogen? I’m pretty sure the reasonable answer to all of these is “no”, even though all of them would result in you not observing the test result in the end.

Alas, this is not an unknown problem. See this, this, this and so on.

@cristianoliveira (Contributor, Author) commented May 30, 2016

Hey @nagisa, thanks for the answer. I got it. Sorry if my questions sounded pretentious or if I seemed presumptuous; that was not my intention. :)

For future readers: these are called Death Tests: https://github.com/google/googletest/blob/master/googletest/docs/AdvancedGuide.md#death-tests

Btw, you are doing a great job guys!

@zofrex (Contributor) commented May 31, 2016

I think there's a subtle difference between "does not report failure" and "reports success". Let's take the power_off_the_machine() example - that would not result in a success being reported, so it's different from the exit(0) example.

I totally agree the test harness should not be trying to catch the whole world of side-effects and account for them, but I also think it should not report success if the tests unexpectedly bail. Is that a viable middle-ground or would that still impose a performance cost?

@nagisa (Member) commented May 31, 2016

I think there's a subtle difference between "does not report failure" and "reports success".

It doesn’t report either, other than having the exit code of 0, which is the side effect in question.

This is sample test output for the code given above:

kumabox :: /tmp  
$ ./test 
running 4 tests
test it_works ... ok
test it_does_not_run ... FAILED
test other::it_may_run ... FAILED
kumabox :: /tmp  
$ ./test 
running 4 tests
kumabox :: /tmp  
$ ./test 
running 4 tests
test it_works ... ok

It doesn’t hide the fact that some of the tests failed, nor does it lie about tests getting run or succeeding.
