fix [circleci] badge for projects that use workflows #1995
Conversation
Force-pushed from 768d2b5 to 28fa27e
})

describe('circleci: schema validation', function() {
  const validate = function(data, schema) {
This may not be the best place to get bogged down in it, but it seems like writing tests for non-trivial validators (like this one) is something we might want to do sometimes. I definitely found I needed to write tests to get this one correct. Let's have a think about a good pattern for this.
const emptyArraySchema = Joi.array()
  .min(0)
  .max(0)
  .required() // [] is also a valid response from Circle CI
Turns out sometimes `[]` is a legit response. Example: https://circleci.com/api/v1.1/project/github/chris48s/shields/tree/issue1827
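The acceptance rule being discussed (an empty array is valid, and so is an array of build objects) can be illustrated without Joi. This is a hypothetical plain-JS analogue of the schema, not the PR's actual code, and the shape it checks is an assumption:

```javascript
// Hypothetical plain-JS analogue of the Joi schema above: a CircleCI
// response is valid if it is either [] or an array of build objects.
// The real code validates with Joi; this sketch only mirrors the rule.
function isValidBuildsResponse(data) {
  if (!Array.isArray(data)) return false
  if (data.length === 0) return true // [] is a legit Circle CI response
  return data.every(build => typeof build === 'object' && build !== null)
}
```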
module.exports = class CircleCi extends BaseJsonService {
  async fetch({ token, vcsType, userRepo, branch }) {
    let url = `https://circleci.com/api/v1.1/project/${vcsType}/${userRepo}`
    if (branch != null) {
      url += `/tree/${branch}`
    }
-   const query = { filter: 'completed', limit: 1 }
+   const query = { limit: 50 }
We can't filter on 'completed' any more here. We need to know about in-progress/queued builds: if the first workflow we come across has some builds which are not complete, we should ignore it and move on to the next workflow, to try and find one where all the builds are complete. If we filtered on 'completed' here, we would sometimes summarise an incomplete workflow.
In terms of limit, I've picked 50 to balance 2 properties:
- We need a fairly large number of records so that on a project with a large build matrix, where we can't take the first `workflow_id` (e.g. because some builds are still in progress), we've got enough records
- If we fetch loads of records the badge takes a long time to render, because we must wait a long time for the `fetch()` stage
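The skip-incomplete-workflows behaviour described above could be sketched like this. This is an illustrative reconstruction, not the PR's code: the function name and the assumption that unfinished builds have no `outcome` are mine; the `workflows.workflow_id` field follows the CircleCI v1.1 API shape this PR relies on:

```javascript
// Hypothetical sketch: group the fetched builds (newest first) by
// workflow_id, then summarise the newest workflow in which every build
// has finished. Unfinished builds are assumed to have a null/absent
// 'outcome', since 'outcome' only applies to finished builds.
function latestCompleteWorkflowOutcomes(builds) {
  // Map preserves insertion order, so workflows stay newest-first.
  const workflows = new Map()
  for (const build of builds) {
    const id = build.workflows && build.workflows.workflow_id
    if (id == null) continue // non-workflow build; handled elsewhere
    if (!workflows.has(id)) workflows.set(id, [])
    workflows.get(id).push(build)
  }
  // Walk workflows newest-first; skip any with unfinished builds.
  for (const group of workflows.values()) {
    if (group.every(b => b.outcome != null)) {
      return group.map(b => b.outcome)
    }
  }
  return null // no fully-finished workflow within the fetched window
}
```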
} else if (counts.timedout >= 1) {
  return 'timed out'
} else if (counts.failed >= 1) {
  return 'failed'
I've taken a stab at a sensible order here, but I'm not committed to it. If you're reading this thinking "'canceled' obviously takes precedence over 'infrastructure fail'", feel free to make suggestions and I'll change it.
I've also switched to using 'outcome' here instead of 'status' (outcome only applies to finished builds, and we only want to summarise workflows where all the builds are finished). This also gives us a smaller number of values to try and make sense of.
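The precedence being proposed can be shown as a single chain over the outcome counts. Only the `timedout` and `failed` branches appear in the diff; the other branches, the full ordering, and the result labels here are my assumptions, sketched purely to make the discussion concrete:

```javascript
// Hypothetical sketch of the proposed precedence: collapse a tally of
// build outcomes into one badge status. Ordering is illustrative and,
// per the comment above, open to change. Missing keys are undefined,
// and `undefined >= 1` is false, so absent outcomes are skipped.
function summariseOutcomes(counts) {
  if (counts.infrastructure_fail >= 1) return 'infrastructure fail'
  if (counts.timedout >= 1) return 'timed out'
  if (counts.failed >= 1) return 'failed'
  if (counts.canceled >= 1) return 'canceled'
  if (counts.no_tests >= 1) return 'no tests'
  if (counts.success >= 1) return 'passing'
  return 'unknown'
}
```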
) {
  color = 'red'
} else if (status === 'no tests') {
  color = 'yellow'
Same here. I've attempted to do something sensible-ish, but again if you're thinking "'canceled' should obviously be grey" or whatever, feel free to make suggestions and I'll change them. Once we've got this nailed I'll also add tests for the colour logic. There is relatively little consistency about this on other services. Prior art to maybe consider:
- https://github.com/amio/badgen-service/blob/master/libs/live-fns/circleci.js
- https://github.com/circleci/frontend/blob/c189f3546afe49b64c8ee86d92ff67ed9d2eda78/src-cljs/frontend/models/build.cljs#L174-L181
(although they are both looking at 'status' as opposed to 'outcome')
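For comparison purposes, the colour mapping under discussion might look like this as a lookup function. Only the red branch and the 'no tests'→yellow case are in the diff; the colours chosen for the remaining statuses are illustrative assumptions, exactly the kind of thing the comment above invites suggestions on:

```javascript
// Hypothetical sketch of the status → badge colour mapping. Red and
// 'no tests' → yellow come from the diff; the rest are assumptions.
function colorForStatus(status) {
  if (['failed', 'timed out', 'infrastructure fail'].includes(status)) {
    return 'red'
  }
  if (status === 'no tests') return 'yellow'
  if (status === 'passing') return 'brightgreen'
  return 'lightgrey' // e.g. 'canceled' or anything unrecognised
}
```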
  value: 'invalid json response',
  colorB: '#9f9f9f',
})
.expectJSON({ name: 'build', value: 'could not summarize build status' })
I've implemented a lot of the special-case testing for this badge logic in `circleci.helpers.spec.js` instead of using a lot of mocked responses here. I've kept the service tests pretty high-level.
- use lodash.countby in countOutcomes()
- move validation from countOutcomes() into schema
- update tests
- switch getLatestCompleteBuildOutcome() to use filter()
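The `countOutcomes()` refactor mentioned in that commit message tallies outcomes per build. A sketch of what it might look like, written with a plain `reduce` so it runs without lodash (with lodash it would just be `_.countBy(builds, 'outcome')`):

```javascript
// Hypothetical sketch of countOutcomes(): tally how many builds ended
// with each outcome. Equivalent to lodash's _.countBy(builds, 'outcome'),
// shown here dependency-free.
function countOutcomes(builds) {
  return builds.reduce((counts, { outcome }) => {
    counts[outcome] = (counts[outcome] || 0) + 1
    return counts
  }, {})
}
```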
resolve conflict: keep ours
Based on chat in Discord, I've updated this PR to use a more functional style in a couple more places in b6cda72. I think I'm going to leave Using
I've resolved the conflicts on this if you want to pick up the review. I've slightly forgotten what was going on here, but usefully I seem to have left a bunch of notes on the diff, so hopefully it shouldn't be too hard to resume. Well done me from the past :)
It sounds like we're thinking #1064 is a better way to go because it will let us rely on Circle's own logic.
Well this escalated in complexity quite sharply...
In an attempt to solve #1792 I've had a go at grouping related builds based on `workflow_id`. There are a number of changes in this PR which are useful to discuss 'inline', so I will mark up the diff with specific line comments, but just to give a high-level summary, this is how the new approach works:

In general, this seems to work pretty well. Unfortunately there are 3 edge cases (that I can think of) which could be problematic:

`InvalidResponse()` until every build in the build history specified by the limit param either does or doesn't use workflows. This case fails with an error rather than unexpected output (so is less serious). Also it may be possible to account for, but I've not attempted to do so.