Introduce release process #9
Conversation
Force-pushed 3ff12ba to 6d50591
Force-pushed 6d50591 to 394ae76
Force-pushed 0335e85 to b026cff
.github/workflows/release.yaml
Outdated
VERSION: ${{ env.VERSION }}
run: |
  echo "Building Docker server image with version: ${VERSION}"
  docker build -t kubeshop/monokle-admission-controller-server:latest -t kubeshop/monokle-admission-controller-server:${VERSION} ./admission-controller/server/
Is there a difference between kubeshop/monokle-admission-controller-server and kubeshop/monokle-admission-controller?
kubeshop/monokle-admission-controller is a Helm chart, while kubeshop/monokle-admission-controller-server is the Docker image for the webhook server.
.github/workflows/release.yaml
Outdated
- name: Build and publish Docker image (init container)
  env:
    VERSION: ${{ env.VERSION }}
  run: |
    echo "Building Docker init image with version: ${VERSION}"
    docker build -t kubeshop/monokle-admission-controller-init:latest -t kubeshop/monokle-admission-controller-init:${VERSION} ./admission-controller/init/
    docker push kubeshop/monokle-admission-controller-init:${VERSION}
    docker push kubeshop/monokle-admission-controller-init:latest

- name: Build and publish Docker image (server container)
  env:
    VERSION: ${{ env.VERSION }}
  run: |
    echo "Building Docker server image with version: ${VERSION}"
    docker build -t kubeshop/monokle-admission-controller-server:latest -t kubeshop/monokle-admission-controller-server:${VERSION} ./admission-controller/server/
    docker push kubeshop/monokle-admission-controller-server:${VERSION}
    docker push kubeshop/monokle-admission-controller-server:latest
Why not use the Docker GH actions that are designed for this exact purpose of building and pushing a Docker image? They also bring the benefit of layer caching, which reduces the amount of time we spend building.
Furthermore, why not build these Docker images in separate actions? I think there's a benefit in not building the initContainer image each time, as it will probably not change as often as the main image.
> Why not use the Docker GH actions that are designed for this exact purpose of building and pushing a Docker image? They also bring the benefit of layer caching, which reduces the amount of time we spend building.
Agree, I'll be happy to improve it. I just went with a quite basic version to avoid spending too much time on it. Do you mean using something like buildx (https://docs.docker.com/build/cache/backends/gha/), similar to what we have here: https://github.com/kubeshop/monokle-saas/blob/e3efa7c43904aac4f64bd1f4b27abe99dbd7a6d5/.github/workflows/api-renderers.yml#L41-L61?
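For reference, a minimal sketch of what the dedicated Docker actions with GHA layer caching could look like for the server image (action versions and exact inputs are assumptions, not part of this PR):

```yaml
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Build and push server image
  uses: docker/build-push-action@v5
  with:
    context: ./admission-controller/server
    push: true
    tags: |
      kubeshop/monokle-admission-controller-server:latest
      kubeshop/monokle-admission-controller-server:${{ env.VERSION }}
    # Reuse image layers cached in the GitHub Actions cache between runs
    cache-from: type=gha
    cache-to: type=gha,mode=max
```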
> Furthermore, why not build these Docker images in separate actions? I think there's a benefit in not building the initContainer image each time, as it will probably not change as often as the main image.
The idea was to have a separate image version for each release, but I guess it could be done by just re-tagging already built/existing images if their source was not modified? (Not sure about Helm in the DockerHub OCI registry, though; I haven't found a way to add multiple tags to a single version 🤔.)
I can somehow see why we would want the same version on both, but on the other hand it does not necessarily make sense to version the initContainer if it does not actually change. IDK, @WitoDelnat, what do you think?
I'd keep the versions separated. There is little benefit to having them be the same. If you have problems with certificates, it can even help to see whether the init container changed (meaningfully).
As for the Helm chart versioning I'm a bit more hesitant. Do we want to publish a new chart for each Docker image, or leave the image as a variable in the values.yaml and default it to latest? @murarustefaan, you have knowledge about current best practices?
Sure. TBH I have not seen a lot of projects linking the Chart and App versions; they are separated for a reason. The Chart version should change only on Chart modifications, while the App Version should change alongside the image version. Here I would link the App Version to the admission controller version (which should be configurable through values, just in case) while leaving the init container set to latest.
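For illustration, this separation maps to the two version fields in a Helm Chart.yaml (the version numbers here are hypothetical):

```yaml
# helm/Chart.yaml (illustrative values)
apiVersion: v2
name: monokle-admission-controller
version: 0.2.0        # chart version: bump only when the chart itself changes
appVersion: "1.4.1"   # app version: tracks the admission controller image
```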
After discussing further with @murarustefaan I extracted an issue, since it seems to be a broad topic and it's difficult to know what will be really useful at this stage: #13.
For improvements:
.github/workflows/release.yaml
Outdated
create-release:
  runs-on: ubuntu-latest
  needs: build-and-push

  steps:
    - name: Extract version from tag
      id: extract_version
      run: echo "VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_ENV

    - name: Debug version
      run: echo ${{ env.VERSION }}

    - name: Checkout Repository
      uses: actions/checkout@v2

    - name: Build release artifacts
      env:
        VERSION: ${{ env.VERSION }}
      run: |
        helm package ./helm --version ${VERSION}
        helm template ./helm --version ${VERSION} > install.yaml
        mv monokle-admission-controller-${VERSION}.tgz helm.tgz

    - name: Create release and upload artifacts
      uses: ncipollo/release-action@v1
      with:
        name: "Release ${{ env.VERSION }}"
        allowUpdates: true
        artifacts: 'install.yaml,helm.tgz'
        omitBody: true
        token: ${{ secrets.CI_BOT_TOKEN }}
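As a side note, the "Extract version from tag" step relies on plain POSIX parameter expansion to strip the refs/tags/v prefix; a quick standalone sketch:

```shell
#!/bin/sh
# Mirror of the "Extract version from tag" step: strip the "refs/tags/v"
# prefix from a Git ref using POSIX parameter expansion.
GITHUB_REF="refs/tags/v1.2.3"
VERSION="${GITHUB_REF#refs/tags/v}"
echo "$VERSION"   # prints 1.2.3
```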
Similarly, this could be a separate job for creating the manifest release only on changes to the manifest files (or on demand).
Probably the answer will be the same as in the comment above:

> The idea was to have a separate version for each release.

And here, creating the manifest seems pretty cheap/fast. I guess reusing existing ones (based on changes) would be very similar in time/resources consumed? Unless you had something different in mind?
Ok, maybe that's because I went with, kind of, coupling the manifest version with the container versions. Still, if any container version is updated and released, the manifests need to be released to point to the latest images. The other way around, not necessarily: changes in the manifests (Helm chart) only shouldn't result in new Docker images. Probably that's what you had in mind? 🤔
lgtm!
This PR introduces a release pipeline. It's quite basic, but should be enough for a start and a good foundation to build on further.
How it works
There is a single action with two jobs: build-and-push, which builds and publishes the Docker images, and create-release, which creates a GitHub release with the packaged Helm chart and install.yaml (which is a result of helm template ...). This way the admission controller can be installed directly from GH too.
The action will run on a tag being pushed to this repo (for testing purposes I just run it as part of PRs too).
The only required manual action is running the bump script and pushing the changes to the repo (as described in the CONTRIBUTING guide - https://github.com/kubeshop/monokle-admission-controller/blob/f1ames/feat/release-flow/CONTRIBUTING.md#releasing).
As a result of a release, we get 3 artifacts published in the Kubeshop DockerHub:
and GH release:
I'm only not sure about the DockerHub OCI registry. Usually with Helm the steps are adding a specific Helm repo and then installing the chart from it, while here we have a single command that installs the chart. This works and is according to the docs, but it's a bit different from the usual installation flow.
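For comparison, a sketch of the two installation flows (the repo URL below is hypothetical; the OCI command follows the general pattern from the Helm docs, not necessarily the exact command for this chart):

```shell
# Conventional flow: add a chart repository first, then install from it.
helm repo add kubeshop https://example.com/helm-charts   # hypothetical repo URL
helm install monokle-admission-controller kubeshop/monokle-admission-controller

# OCI registry flow: install directly from DockerHub in a single command.
helm install monokle-admission-controller oci://registry-1.docker.io/kubeshop/monokle-admission-controller
```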
Testing
You can see successful job runs and also the artifacts/releases being published.
To test how users would use it, see README installation and usage sections.
Changes
Fixes
Checklist