Disallow failures for tests we expect to pass #8301
Conversation
(force-pushed from 1468ea8 to e89ba7d)
Thanks for doing this, I didn't know how to get Travis to just allow failures in the one case.
And it’s immediately catching regressions…
(force-pushed from e89ba7d to 17d953c)
The weird thing is those three tests are all passing locally on my machine; otherwise I could have just fixed them here 😕 So yeah, maybe it’d be best to revert and work on investigating the problems in another PR.
Yea, let's revert. :)
Reverting in #8304
And the way the tests capture the subprocess output, then assert on it and use that output as the assertion message, which pytest then abbreviates, makes it so hard to work out what went on. That's a really bad pattern we use in our tests, and I'd love to fix it somehow. But maybe not right now 🙂
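The anti-pattern described above can be sketched like this. This is a minimal illustration, not pip's actual test helpers; `run_pip_like` is a made-up stand-in:

```python
# Minimal sketch of the assertion pattern discussed above; run_pip_like is
# an illustrative stand-in, NOT pip's actual test helper.
import subprocess
import sys


def run_pip_like(args):
    # Capture the subprocess output so the test can inspect it.
    return subprocess.run(
        [sys.executable, "-c", "print('hello from subprocess')"],
        capture_output=True,
        text=True,
    )


def test_antipattern():
    result = run_pip_like([])
    # Anti-pattern: the entire captured output is used as the assertion
    # message. On failure, pytest truncates long messages, hiding the
    # part you actually need to see.
    assert "hello" in result.stdout, result.stdout


def test_better():
    result = run_pip_like([])
    # Alternative: print the output (pytest shows captured stdout for
    # failing tests in full) and keep the assertion message short.
    print(result.stdout)
    assert "hello" in result.stdout, "expected greeting in subprocess output"
```

With the second style, a failing test shows the whole captured output in pytest's "Captured stdout" section instead of an abbreviated assertion message.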
This should probably be rebased on master before merging, to confirm that the tests now pass... |
*leans in from edge of a wall* Pssst! Look over here! Is more stuff like #8303 what you want?
🦶 I hope this passes now that the output thing is fixed.
(force-pushed from 17d953c to 1f14c91)
(force-pushed from 1f14c91 to 9238fc3)
The test has been failing ever since #7996 was merged. It needs to somehow be fixed first. (It’s quite weird though; I imagine the logic should be entirely in the finder, which is the same between the two resolvers.) |
Found the issue; we broke the candidate ordering |
Let's add the Note from 2-minutes-later-me: well, we'd need to rebase either PR if we want to ensure that CI passes the entire time. FWIW, we could merge that and then this PR and things will be hunky-dory, so... this entire comment adds basically no value and you just wasted your time reading this. |
With the resolvelib PR situation in mind, I think we should just merge all the things first and then take a complete look at the regressions. These test config PRs are blocking too many things…
I think we should just merge this and then make a follow-up fixing the finer issues. Travis CI is already flaky/not required, so this wouldn't break anything.
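For context, the Travis mechanism being discussed looks roughly like this. The job/env names here are illustrative, not pip's actual `.travis.yml`:

```yaml
# Sketch of Travis CI's allow_failures; job names are illustrative only.
jobs:
  allow_failures:
    # Only jobs listed here may fail without failing the overall build.
    - env: GROUP=experimental
  # Jobs NOT listed (e.g. the test groups this PR covers) now block CI
  # when they fail, which is how regressions get caught.
```

Removing a job from `allow_failures` is what "disallowing failures" means in this PR's title.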
I don’t mind if you don’t. |
@pfmoore If you're OK with this plan, let's do that. Feel free to click merge on this one. :P |
Since we’re skipping tests marked `fails_on_new_resolver` in groups 1 and 2, disallowing them to fail would help us catch regressions.
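At the pytest level, one way to express this behaviour is a strict xfail marker, where a test that unexpectedly passes is itself an error. This is a sketch under assumptions: the marker name mirrors the one in this PR, but the implementation shown is illustrative, not pip's actual conftest code:

```python
# Sketch: a marker like fails_on_new_resolver expressed as a strict xfail.
# The marker name mirrors the PR; the implementation is illustrative only.
import pytest

fails_on_new_resolver = pytest.mark.xfail(
    reason="known failure under the new resolver",
    strict=True,  # with strict=True an unexpected pass (XPASS) fails CI
)


@fails_on_new_resolver
def test_known_new_resolver_failure():
    # Expected to fail today; strict=True ensures CI notices once fixed.
    assert False


def test_expected_to_pass():
    # Unmarked tests must pass: any failure here fails the build,
    # which is exactly the regression-catching this PR is after.
    assert 1 + 1 == 2
```

Skipping (rather than xfailing) the marked tests in some groups, as the description says, achieves a similar end: every test that actually runs in those groups is expected to pass, so failures there can be treated as hard errors.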