Avoiding Regressions #5962
Comments
I would like to make sure this discussion includes what we mean by a regression. I would like to see the stability marker pushed down to the Construct level, so that a regression would only be a backwards-incompatible change in a Construct with a …
I'm a little worried about this. By the rule of "you get what you measure", we will get a lot of tests that raise coverage but still won't necessarily test what we want/need tested. Also, raising coverage by itself does nothing to address compatibility.
I like this better as a means of testing for regressions.
Integration tests should be black-box tests instead of white-box tests, so I feel this shouldn't really be able to happen?
@rix0rrr wrote:
Raising coverage is never a goal in itself; it should simply serve as a tool for identifying gaps, which in my experience it does. Identifying gaps and implementing new tests (thus raising coverage) will reduce the risk of missing a compatibility issue. Actually, don't we already have this? Hmm :\
Me too.
Closing this issue. This topic is covered in aws/aws-cdk-rfcs#110.
❓ General Issue
Let's talk about what we can do to avoid introducing regressions. Note that I will not be addressing any CLI vs Framework compatibility requirements here.
What I mean by regressions is simply the breakage of existing functionality or API.
There is no silver bullet here; the main thing we have to do is make sure we have full test coverage. We have to keep identifying gaps in our test suite and close them out:
For example:
To that end, we should consider:
Code Coverage
Let's add code coverage validation to our PRs. I've found this to be very useful in the past, especially for identifying gaps around edge cases.
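For concreteness, something like the following Jest configuration could fail a PR build when coverage drops. This is a minimal sketch, assuming the package already runs Jest and a `jest.config.ts`; the threshold numbers are placeholders, not proposals:

```ts
// jest.config.ts -- illustrative sketch; thresholds are placeholder values.
import type { Config } from '@jest/types';

const config: Config.InitialOptions = {
  collectCoverage: true,
  coverageReporters: ['text-summary', 'lcov'],
  // Jest exits non-zero (failing the PR build) if coverage falls below these values.
  coverageThreshold: {
    global: {
      statements: 80,
      branches: 70,
      functions: 80,
      lines: 80,
    },
  },
};

export default config;
```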
Regression Suite
What if we run the old integration tests against the new version?
Ideally, the old integration tests are always a subset of the latest integration tests (assuming we always add tests and never modify or remove them), so there shouldn't be much value in running this scenario. However, we currently enforce this only manually in our reviews; nothing actually stops us from introducing such breaking changes.
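One way to mechanize this would be a small CI step that restores the previous release's test files and runs them against the code at HEAD. A rough sketch follows; the tag lookup, the pathspec, and the `yarn test` command are illustrative assumptions, not existing repo tooling:

```ts
// run-regression-suite.ts -- hypothetical sketch of "old tests vs. new code".
import { execSync } from 'child_process';

// Find the most recent release tag reachable from HEAD.
const previousTag = execSync('git describe --tags --abbrev=0', { encoding: 'utf8' }).trim();

// Check out only the old test files; the library code under test stays at HEAD.
execSync(`git checkout ${previousTag} -- "packages/@aws-cdk/*/test/"`, { stdio: 'inherit' });

// Run the tests; a failure here means HEAD broke behavior that the
// previous release's tests relied on.
execSync('yarn test', { stdio: 'inherit' });
```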
Things to consider: