
Introduce release process #9

Merged
merged 22 commits into main from f1ames/feat/release-flow on Oct 25, 2023

Conversation

@f1ames f1ames (Contributor) commented Oct 18, 2023

This PR introduces a release pipeline.

It's quite basic, but it should be enough for a start and a good foundation to build on.

How it works

There is a single action with two jobs:

  1. The first job builds and publishes the Docker images, and builds and publishes the Helm chart. Both are published to Docker Hub.
  2. The second job (which depends on the first) creates a GH release and publishes the packaged Helm chart and install.yaml (which is the result of helm template ...). This way the admission controller can be installed directly from GH too.

The action runs on a tag being pushed to this repo (for testing purposes I run it as part of the PR too).
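
For illustration, the overall workflow shape is roughly this (a minimal sketch with placeholder steps, not the exact file from this PR):

# .github/workflows/release.yml (illustrative)
name: Release

on:
  push:
    tags:
      - 'v*'   # runs when a version tag is pushed

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - run: echo "build and publish Docker images and the Helm chart to Docker Hub"

  create-release:
    runs-on: ubuntu-latest
    needs: build-and-push   # the second job depends on the first
    steps:
      - run: echo "create the GH release with helm.tgz and install.yaml attached"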

The only required manual actions are running the bump script and pushing the changes to the repo (as described in the CONTRIBUTING guide - https://github.com/kubeshop/monokle-admission-controller/blob/f1ames/feat/release-flow/CONTRIBUTING.md#releasing).
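
For illustration, the manual part boils down to something like this (the bump script path and version are placeholders - see the CONTRIBUTING guide for the actual steps):

./scripts/bump.sh 0.2.0                # hypothetical bump script invocation
git commit -am "chore: release v0.2.0"
git tag v0.2.0
git push origin main --tags            # the pushed tag triggers the release action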

As a result of a release, we get 3 artifacts published in the Kubeshop Docker Hub:

[screenshot: published artifacts on Docker Hub]

and a GH release:

[screenshot: GitHub release with attached artifacts]

The one thing I'm not sure about is the DockerHub OCI registry. Usually with Helm the steps are adding a specific Helm repo and installing the chart from it:

helm repo add ...
helm install ...

Here, instead, we have a single command to install the chart. This works and follows the docs, but it's a bit different from the usual installation flow.
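
For comparison, the OCI-based flow is a single command along these lines (the chart reference and version are illustrative; registry-1.docker.io is Docker Hub's OCI endpoint):

helm install monokle-admission-controller \
  oci://registry-1.docker.io/kubeshop/monokle-admission-controller \
  --version 0.1.0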

Testing

You can see successful job runs and the published artifacts/releases.

To test how users would use it, see the README installation and usage sections.

Changes

Fixes

  • None.

Checklist

  • tested locally
  • added new dependencies
  • updated the docs
  • added a test

@f1ames f1ames force-pushed the f1ames/feat/release-flow branch from 3ff12ba to 6d50591 on October 18, 2023 08:09
@f1ames f1ames changed the title Introduce release process Introduce release process [WIP] Oct 18, 2023
@f1ames f1ames force-pushed the f1ames/feat/release-flow branch from 6d50591 to 394ae76 on October 18, 2023 08:25
@f1ames f1ames mentioned this pull request Oct 19, 2023
Base automatically changed from f1ames/feat/deployment to main October 19, 2023 12:54
@f1ames f1ames force-pushed the f1ames/feat/release-flow branch from 0335e85 to b026cff on October 19, 2023 13:28
@f1ames f1ames changed the title Introduce release process [WIP] Introduce release process Oct 19, 2023
@f1ames f1ames marked this pull request as ready for review October 19, 2023 14:24
  env:
    VERSION: ${{ env.VERSION }}
  run: |
    echo "Building Docker server image with version: ${VERSION}"
    docker build -t kubeshop/monokle-admission-controller-server:latest -t kubeshop/monokle-admission-controller-server:${VERSION} ./admission-controller/server/


Is there a difference between kubeshop/monokle-admission-controller-server & kubeshop/monokle-admission-controller?

@f1ames (Contributor, author) replied:


kubeshop/monokle-admission-controller is the Helm chart, while kubeshop/monokle-admission-controller-server is the Docker image for the webhook server.
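
In other words, the two artifacts are consumed with different tools (illustrative commands, versions are placeholders):

# Docker image for the webhook server:
docker pull kubeshop/monokle-admission-controller-server:0.1.0

# Helm chart, stored as an OCI artifact under the same namespace:
helm pull oci://registry-1.docker.io/kubeshop/monokle-admission-controller --version 0.1.0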

Comment on lines 33 to 49
- name: Build and publish Docker image (init container)
  env:
    VERSION: ${{ env.VERSION }}
  run: |
    echo "Building Docker init image with version: ${VERSION}"
    docker build -t kubeshop/monokle-admission-controller-init:latest -t kubeshop/monokle-admission-controller-init:${VERSION} ./admission-controller/init/
    docker push kubeshop/monokle-admission-controller-init:${VERSION}
    docker push kubeshop/monokle-admission-controller-init:latest

- name: Build and publish Docker image (server container)
  env:
    VERSION: ${{ env.VERSION }}
  run: |
    echo "Building Docker server image with version: ${VERSION}"
    docker build -t kubeshop/monokle-admission-controller-server:latest -t kubeshop/monokle-admission-controller-server:${VERSION} ./admission-controller/server/
    docker push kubeshop/monokle-admission-controller-server:${VERSION}
    docker push kubeshop/monokle-admission-controller-server:latest
Contributor:


Why not use the docker GH actions that are designed for this exact purpose of building and pushing a docker image? They also bring the benefit of layer caching, which reduces the amount of time we spend building.

Furthermore, why not build these Docker images in separate actions?
I think there's a benefit in not building the initContainer image each time, as it will probably not change as often as the main image.

@f1ames (Contributor, author) replied:


> Why not use the docker GH actions that are designed for this exact purpose of building and pushing a docker image? They also bring the benefit of layer caching, which reduces the amount of time we spend building.

Agreed - I'll be happy to improve it. I just went with a quite basic version to not spend too much time on it. Do you mean something like buildx (https://docs.docker.com/build/cache/backends/gha/), similar to what we have here - https://github.com/kubeshop/monokle-saas/blob/e3efa7c43904aac4f64bd1f4b27abe99dbd7a6d5/.github/workflows/api-renderers.yml#L41-L61?

> Furthermore, why not build these docker images in separate actions? I think there's a benefit in that by not building the initContainer image each time as that will probably not change as often as the main image.

The idea was to have a separate image version for each release, but I guess that could be done just by tagging already built/existing images if their source was not modified? (Not sure about Helm in the DockerHub OCI registry - I haven't found a way to add multiple tags to a single version 🤔)
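
For the retagging idea, a sketch of how an unchanged image could be re-released without rebuilding (the image name and tags are illustrative):

# Pull the previously released image, attach the new release tag, push it back:
docker pull kubeshop/monokle-admission-controller-init:0.1.0
docker tag kubeshop/monokle-admission-controller-init:0.1.0 kubeshop/monokle-admission-controller-init:0.2.0
docker push kubeshop/monokle-admission-controller-init:0.2.0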

Contributor:


I can somewhat see why we would want the same version on both, but on the other hand it does not necessarily make sense to version the initContainer if it does not actually change. IDK, @WitoDelnat, what do you think?

@WitoDelnat replied:

I'd keep the versions separate. There is little benefit to them being the same. If you have problems with certificates, it can even help to see whether the init container changed (meaningfully).

As for the Helm chart versioning, I'm a bit more hesitant. Do we want to publish a new chart for each Docker image, or leave the image as a variable in values.yaml and default it to latest? @murarustefaan, you have knowledge about current best practices?

@murarustefaan (Contributor) replied:


Sure. TBH I have not seen a lot of projects linking the chart and app versions; they are separated for a reason.
The chart version should change only on chart modifications, while the app version should change alongside the image version. Here I would link the app version to the admission controller version (which should be configurable through values, just in case), while leaving the init container set to latest.
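
A minimal sketch of that split in the chart files (version/appVersion are standard Helm fields; the values layout is an assumption, not this repo's actual values.yaml):

# Chart.yaml
apiVersion: v2
name: monokle-admission-controller
version: 0.1.0      # chart version - bump only on chart modifications
appVersion: "0.2.0" # app version - tracks the admission controller image

# values.yaml
image:
  tag: ""           # empty meaning "use appVersion" is a common chart convention, not built-in
initImage:
  tag: latest       # init container left on latest, as suggested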

@f1ames (Contributor, author) replied:


After discussing further with @murarustefaan I extracted an issue, since this seems to be a broad topic and it's difficult to know what will really be useful at this stage - #13.

As for the improvements:

  • Changed to multi-stage images - e5eb0f4
  • Utilized the buildx action for the Docker-related logic - e98d709 (see the sketch after this list)
  • Added a build cache and restructured the job - 1802bcd
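
For reference, a sketch of what the buildx-based step can look like with the GHA cache backend (inputs shown are from docker/build-push-action; the exact setup in e98d709 / 1802bcd may differ):

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Build and push server image
  uses: docker/build-push-action@v5
  with:
    context: ./admission-controller/server
    push: true
    tags: |
      kubeshop/monokle-admission-controller-server:latest
      kubeshop/monokle-admission-controller-server:${{ env.VERSION }}
    cache-from: type=gha   # reuse layers cached by previous workflow runs
    cache-to: type=gha,mode=max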

Comment on lines 60 to 90
create-release:
  runs-on: ubuntu-latest
  needs: build-and-push

  steps:
    - name: Extract version from tag
      id: extract_version
      run: echo "VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_ENV

    - name: Debug version
      run: echo ${{ env.VERSION }}

    - name: Checkout Repository
      uses: actions/checkout@v2

    - name: Build release artifacts
      env:
        VERSION: ${{ env.VERSION }}
      run: |
        helm package ./helm --version ${VERSION}
        helm template ./helm --version ${VERSION} > install.yaml
        mv monokle-admission-controller-${VERSION}.tgz helm.tgz

    - name: Create release and upload artifacts
      uses: ncipollo/release-action@v1
      with:
        name: "Release ${{ env.VERSION }}"
        allowUpdates: true
        artifacts: 'install.yaml,helm.tgz'
        omitBody: true
        token: ${{ secrets.CI_BOT_TOKEN }}
Contributor:


Similarly, this could be a separate job for creating the manifest release only on changes to the manifest files (or on demand).

@f1ames (Contributor, author) replied:


Probably the answer will be the same as in the comment above:

> The idea was to have a separate version for each release.

And here, creating the manifest seems pretty cheap/fast. I guess reusing existing ones (based on changes) would be very similar in terms of time/resources consumed? Unless you had something different in mind?

@f1ames (Contributor, author) added:


OK, maybe that's because I went with, kind of, coupling the manifest version with the container versions. Still, if any container version is updated and released, the manifests need to be released too, to point to the latest images.

The other way around, not necessarily - changes to the manifests (Helm chart) only shouldn't result in new Docker images. Probably that's what you had in mind? 🤔

@f1ames f1ames requested a review from murarustefaan October 25, 2023 11:24
@murarustefaan murarustefaan (Contributor) left a comment:


lgtm!

@f1ames f1ames merged commit 72bd24f into main Oct 25, 2023
@f1ames f1ames deleted the f1ames/feat/release-flow branch October 25, 2023 11:34