Split CI pipelines #23385
Conversation
Signed-off-by: jolheiser <john.olheiser@gmail.com>
Apart from my comments below, the CI seems to agree with you that you improved its performance.
@@ -1607,7 +1815,6 @@ platform:

steps:
  - name: manifest-rootless
    pull: always
?
This was set twice in the same step, see two lines below.
`pull: always` should only be set for the first occurrence of an image in the whole file. @jolheiser can you check that this is still true?
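For illustration, a minimal sketch of that convention (step names, image names, and versions are placeholders, not taken from the actual `.drone.yml`):

```yaml
steps:
  - name: build            # first step in the file that uses this image
    image: golang:1.20
    pull: always           # pull here once so the cached image gets refreshed
    commands:
      - make build

  - name: test             # a later step reusing the same image
    image: golang:1.20     # no `pull: always`; it reuses the image pulled above
    commands:
      - make test
```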
I can double-check, sure.
Since some of these can now run in parallel, it might be a bit off. I'll see what comes up.
Yeah, I guess for the sake of performance we can skip the double pull in parallel runs. The CI runs often enough to keep images up to date. Even better would be an outside job on the server that updates images once a day or so.
I think this whole image pulling business is something that drone or each server could do itself. Even if you make a cron pipeline that does the pulling, it's not guaranteed to execute on all daemons in a distributed setup. What I do on my drone machines is this in a system cron once per day:
docker images --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>' | sort | xargs -L1 docker pull
docker system prune -f
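A hedged sketch of wiring those two commands into a daily system cron entry (the file path, schedule, and split into two jobs are illustrative assumptions, not part of the comment above):

```sh
# /etc/cron.d/refresh-docker-images  (illustrative path; adjust to taste)
# Re-pull every locally tagged image once a day, then clean up dangling data.
0 3 * * *  root  docker images --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>' | sort | xargs -L1 docker pull
30 3 * * * root  docker system prune -f
```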
I'm not sure if this current PR would hit Docker Hub rate limits as-is, but in general I think image pulling should be cleaned up.
I'm just not sure if it should be done before or after this PR.
Another possibility is to more fully qualify the tags we use, so that we pin versions more specifically.
A pro would be needing to pull less frequently; a con is that we would need to manually update the CI more often to bump image versions, and I'm not sure whether all images would play nicely with that.
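For illustration only (the image name and versions are made up, not taken from the actual pipeline), the trade-off would look like this in the Drone YAML:

```yaml
steps:
  - name: test-floating
    # Floating tag: picks up new releases automatically, but needs `pull: always`
    # (or a periodic pull on the host) to actually stay current.
    image: golang:1.20

  - name: test-pinned
    # Fully qualified tag: reproducible and rarely re-pulled, but it has to be
    # bumped manually (or by automation) whenever a newer version is wanted.
    image: golang:1.20.2
```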
I don't think exact tags are really maintainable without automation that also continuously updates the versions, as some images publish very frequently.
We use a combination of a static runner and an autoscaler that will create/destroy runners as needed.
> We use a combination of a static runner and an autoscaler that will create/destroy runners as needed.

Ok, sounds like we are not really in control of the Docker image storage in such a setup, right?
The other images will probably be outdated too, but I don't think that should be in scope for this PR (including my earlier comment regarding the postgres version 😅).
* giteaofficial/main:
  Scoped label display and documentation tweaks (go-gitea#23430)
  Deduplicate template code for label selection menu (go-gitea#23431)
  Show edit/close/delete button on organization wide repositories (go-gitea#23388)
  Sync the class change of Edit Column Button to JS code (go-gitea#23400)
  Preserve file size when creating attachments (go-gitea#23406)
  [skip ci] Updated translations via Crowdin
  Use buildkit for docker builds (go-gitea#23415)
  Refactor branch/tag selector dropdown (first step) (go-gitea#23394)
  [skip ci] Updated translations via Crowdin
  Hide target selector if tag exists when creating new release (go-gitea#23171)
  Parse external request id from request headers, and print it in access log (go-gitea#22906)
  Add missing tabs to org projects page (go-gitea#22705)
  Add user webhooks (go-gitea#21563)
  Handle OpenID discovery URL errors a little nicer when creating/editing sources (go-gitea#23397)
  Split CI pipelines (go-gitea#23385)
This also separates the `go mod download` task in the Makefile from the tool installation, as the tools are only needed in the compliance pipeline. (Arguably even some of the tools aren't needed there, but that could be a follow-up PR.)

Should resolve #22010. One thing that wasn't changed here but is mentioned in that issue: unit tests need to be in the same pipeline as an integration test in order to form a complete coverage report (at least as far as I could tell), so for now they remain in a pipeline with a DB integration test.
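A minimal sketch of what that Makefile separation could look like; the target names and the tool listed are illustrative assumptions, not necessarily what the PR actually uses:

```makefile
GO ?= go

.PHONY: deps-backend deps-tools

# Download only the Go module dependencies; the build and test pipelines
# need nothing more than this.
deps-backend:
	$(GO) mod download

# Install the lint/compliance helper tools; only the compliance pipeline
# should invoke this target.
deps-tools:
	$(GO) install mvdan.cc/gofumpt@latest  # example tool, stand-in for the real list
```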
Please let me know if I've inadvertently changed something that was the way it was on purpose.
I will say it's sometimes hard to pin down the average time, since a pipeline can spend X minutes waiting for a runner, which adds X minutes to the total, but overall this does seem to be faster on average.