
Scrub skipped test issue numbers #17235

Open

ajcvickers opened this issue Aug 18, 2019 · 2 comments

Comments

@ajcvickers
Member

Scrub all skipped tests to:

  • Determine whether it is useful to keep the test
  • If so, make sure an open issue number is referenced and that the issue is appropriate for the test being disabled
  • Skip the test using a common pattern
    • Add API consistency tests to ensure that tests are using the correct pattern (a sketch of such a check follows this list)
    • Consider automating checks so that tests are flagged if they are disabled for a closed issue
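
A minimal sketch of what such a consistency check could look like, assuming plain xUnit and a skip-message convention of the form `Issue #NNNNN`. The class name, test name, and convention shown here are illustrative, not the actual EF Core test infrastructure; flagging tests that are disabled for an already-closed issue would additionally require querying the GitHub API and is only noted in a comment.

```csharp
using System.Linq;
using System.Reflection;
using System.Text.RegularExpressions;
using Xunit;

public class SkippedTestConventionTest
{
    // Assumed skip convention, e.g.:
    //   [ConditionalFact(Skip = "Contains over subquery. Issue #17246.")]
    //
    // Consistency check: every skipped [Fact]/[Theory] in this assembly must reference
    // an issue number. Verifying that the referenced issue is still open would require
    // a GitHub API call and is not shown here.
    [Fact]
    public void Skipped_tests_reference_an_issue_number()
    {
        var offenders =
            (from type in typeof(SkippedTestConventionTest).Assembly.GetTypes()
             from method in type.GetMethods(
                 BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static)
             from fact in method.GetCustomAttributes<FactAttribute>()
             where fact.Skip != null && !Regex.IsMatch(fact.Skip, @"Issue #\d+")
             select $"{type.Name}.{method.Name}")
            .ToList();

        Assert.Empty(offenders);
    }
}
```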
@ajcvickers ajcvickers added this to the Backlog milestone Aug 19, 2019
@ajcvickers ajcvickers modified the milestones: Backlog, MQ Sep 11, 2020
@smitpatel
Contributor

I propose we do this exercise once we have made good progress towards zero in ZBB. Many of the tests are disabled with tracking bugs that are in 6.0; we could save some cycles by just fixing those bugs first.

@AndriySvyryd
Member

Agreed, there are plenty of other MQ tasks to tackle.

@ajcvickers ajcvickers modified the milestones: MQ, 6.0.0 Nov 19, 2020
@ajcvickers ajcvickers modified the milestones: 6.0.0, MQ Jul 30, 2021
ajcvickers added a commit that referenced this issue Oct 17, 2021
Part of #26088 and #17235

Ideas for catching behavior changes in the product code more reliably. Specifically:
- Detect when a negative case stops failing
- Detect when a negative case starts failing in a different way

Fundamental approach: don't skip tests.

In NorthwindAggregateOperatorsQueryTests, we had:
- Negative cases that were no longer failing
- Negative cases that were skipped for all providers, but worked on some. For example:
  - Failed on relational, but passed on in-memory
  - Failed on relational, but passed on Cosmos
  - Failed on SQL Server, but passed on SQLite
- Negative cases that failed in different ways on different providers

Specifics:
- If a test throws, catch the exception (see the sketch after this list)
  - Where feasible, also validate the exception message or error number
- Always call base where possible, rather than repeating the query in an overridden test
- Add a standard comment where we have a bug or enhancement tracking the issue. For example:
  - `// Contains over subquery. Issue #17246.`
- Always have an `AssertSql` call in Cosmos and SQL Server tests
  - Where we expect a provider-specific class to verify SQL, add a test that checks that all test methods are overridden.
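
A minimal, self-contained sketch of the catch-instead-of-skip pattern using plain xUnit. The class and method names, the stand-in base fixture, and the simplified `AssertSql` helper are hypothetical; only the shape of the pattern (call base, assert the currently expected exception and message, keep the standard issue comment, end with `AssertSql`) mirrors the list above.

```csharp
using System;
using System.Threading.Tasks;
using Xunit;

// Hypothetical stand-in for the shared query test base; the real EF Core fixture
// classes and helpers are not reproduced here.
public abstract class ContainsQueryTestBase
{
    // The base test defines the query once; a provider that cannot translate it throws.
    public virtual Task Contains_over_subquery(bool async)
        => RunQueryAsync(async);

    protected abstract Task RunQueryAsync(bool async);
}

public class ContainsQuerySqlServerTest : ContainsQueryTestBase
{
    // Contains over subquery. Issue #17246.
    // Rather than skipping, call base and assert the currently expected failure, so the
    // test breaks if the query starts working or starts failing in a different way.
    [Theory]
    [InlineData(false)]
    [InlineData(true)]
    public override async Task Contains_over_subquery(bool async)
    {
        var exception = await Assert.ThrowsAsync<InvalidOperationException>(
            () => base.Contains_over_subquery(async));

        Assert.Contains("could not be translated", exception.Message);

        AssertSql(); // The failing query should have produced no SQL.
    }

    protected override Task RunQueryAsync(bool async)
        => Task.FromException(
            new InvalidOperationException("The LINQ expression could not be translated."));

    // Simplified stand-in for the provider test's SQL assertion helper.
    private static void AssertSql(params string[] expected)
        => Assert.Empty(expected);
}
```

With this shape, the test fails as soon as the query starts succeeding or starts failing with a different exception or message, which is exactly the kind of behavior change the commit is trying to surface.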
ajcvickers added a commit that referenced this issue Oct 25, 2021
ajcvickers added a commit that referenced this issue Oct 27, 2021
@ajcvickers ajcvickers removed their assignment Aug 31, 2024