
Broken build policy? #128

Closed
jasperblues opened this issue Dec 10, 2013 · 3 comments
@jasperblues
Member

Hi Folks,

We don’t have a process/policy for broken builds yet... just wanted to see if we can reach an agreement.

Option 1: Check in on a broken build

  • If an issue is identified, we create a test case for it... the build will then be flagged as broken.
    Advantage: We can see what needs to be fixed.
    Disadvantage: If other folks have stuff to commit, they won’t be able to see if they’ve introduced a regression defect.

Option 2: Build goes green ASAP

If we identify an issue, we log it in the issue tracker... create a test, but disable it with a TODO (see the sketch below).
Advantage: Others can check in and see if they’ve broken something new; Jenkins will email.
Disadvantage: It’s hard to track between issues and test cases.
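
For illustration, a disabled-with-TODO test might look something like this (XCTest-style Objective-C; the class, method, and condition names here are made up, not from the Typhoon suite):

```objc
#import <XCTest/XCTest.h>

@interface DisabledTestExample : XCTestCase
@end

@implementation DisabledTestExample

// TODO: Re-enable once the bug logged in the issue tracker is fixed.
// Renaming the method so it no longer begins with "test" makes the
// runner skip it, keeping the build green while the bug stays visible.
- (void)disabled_test_known_broken_behavior
{
    // Failing assertion parked here until the fix lands, e.g.:
    // XCTAssertTrue(someCurrentlyBrokenCondition);
}

@end
```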

Which is better? Most places I’ve worked used Option 2.

@rhgills
Contributor

rhgills commented Dec 10, 2013

Option 2

The build should not be broken just because we plan to change behavior in the future. The tests should check for the behavior of the software now, not as we want it to be.

Breaking the build as a reminder to change behavior felt wrong to me, but I wanted some sort of (prominent) documentation in the tests that things weren't working as intended. I wanted to avoid the situation where a user of Typhoon assumed the library behaved correctly and tediously stepped through their own code, only to find that the problem was in fact Typhoon misbehaving.

We could comment out the failing tests (as you've mentioned), but then we don't have any tests covering whatever code is behaving unexpectedly. Now I see a better alternative: we can change the tests to verify the current (incorrect) behavior, with a prominent comment indicating that this behavior is going to change and linking to the relevant issue. This way, any developer using the library who looks to the unit tests to understand how it behaves will have an executable specification: one that specifies broken behavior, but a specification nonetheless. Compare this to an entirely commented-out test, which can easily go stale.
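
For example, a pinned test might look something like this (a sketch only; the class, method name, and issue number are placeholders):

```objc
#import <XCTest/XCTest.h>

@interface PinnedBehaviorExample : XCTestCase
@end

@implementation PinnedBehaviorExample

// NOTE: This deliberately verifies the CURRENT, incorrect behavior so the
// suite stays green while still documenting what the library actually does.
// The intended behavior is tracked in issue #NNN; when that issue is fixed,
// flip the assertion to the correct expectation.
- (void)test_currently_returns_nil_for_an_unregistered_definition
{
    id component = nil; // stand-in for the call that misbehaves today
    XCTAssertNil(component, @"Pinned to known-broken behavior; see issue #NNN");
}

@end
```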

@rhgills
Contributor

rhgills commented Dec 10, 2013

NB: Applying this idea to the particular case of #107 is difficult. The test flickers: it fails when run alone and succeeds otherwise, which is hard to test for. I can't assert failure (that initializing with a collection of assemblies in any order does NOT work) without breaking the build when the full suite runs, and I can't assert success without breaking the build when the test runs alone. These tests can't really assert anything!

As a compromise, I commented out the failing assertions, not the entire test, and changed the name of the previously failing tests to reflect their flickering nature.

- (void)test_allows_initialization_with_a_collection_of_assemblies_in_any_order
  =>
- (void)test_inconsistently_allows_initialization_with_a_collection_of_assemblies_in_any_order
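
So the renamed test body now looks roughly like this (a sketch; the factory call shown in the comments is illustrative, not the exact code from the suite):

```objc
#import <XCTest/XCTest.h>

@interface FlickeringTestExample : XCTestCase
@end

@implementation FlickeringTestExample

// Flickers: passes in the full suite, fails when run alone (#107).
// The failing assertions are commented out rather than deleting the
// test, so the renamed method itself documents the inconsistency.
- (void)test_inconsistently_allows_initialization_with_a_collection_of_assemblies_in_any_order
{
    // Parked assertions, roughly of this shape:
    // TyphoonComponentFactory *factory =
    //     [[TyphoonBlockComponentFactory alloc] initWithAssemblies:@[assemblyB, assemblyA]];
    // XCTAssertNotNil(factory);
}

@end
```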

@jasperblues
Member Author

Ah, yes... that case was tricky... I think your solution is good until we work something else out.
