Support multi architecture docker images using BuildX #3355

Merged (23 commits) on Oct 11, 2023
9 changes: 9 additions & 0 deletions .devcontainer/install-dependencies.sh
@@ -210,6 +210,15 @@ if should-install "$TOOL_DEST/cmctl"; then
curl -L "https://github.com/jetstack/cert-manager/releases/latest/download/cmctl-${os}-${arch}.tar.gz" | tar -xz -C "$TOOL_DEST"
fi

BUILDX_DEST=$HOME/.docker/cli-plugins
write-verbose "Checking for $BUILDX_DEST/docker-buildx"
if should-install "$BUILDX_DEST/docker-buildx"; then
write-info "Installing buildx-${os}_${arch}…"
mkdir -p "$BUILDX_DEST"
curl -o "$BUILDX_DEST/docker-buildx" -L "https://github.com/docker/buildx/releases/download/v0.11.2/buildx-v0.11.2.${os}-${arch}"
chmod +x "$BUILDX_DEST/docker-buildx"
fi

# Install azwi
write-verbose "Checking for $TOOL_DEST/azwi"
if should-install "$TOOL_DEST/azwi"; then
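As a side note on the buildx installation above: a quick, hedged way to confirm the pinned plugin is picked up by the Docker CLI is shown below (an illustrative check only, not part of this change).

# Confirm the CLI discovers the plugin and report its version
docker buildx version
# List builder instances and the platforms each one supports
docker buildx ls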
30 changes: 14 additions & 16 deletions .github/workflows/create-release.yml
@@ -34,13 +34,6 @@ jobs:
container_id=${{env.container_id}}
docker exec "$container_id" task make-release-artifacts

- name: Build & tag Docker image
run: |
container_id=${{env.container_id}}
docker exec -e DOCKER_PUSH_TARGET "$container_id" task controller:docker-tag-version
env:
DOCKER_PUSH_TARGET: ${{ secrets.REGISTRY_PUBLIC }}

- name: Upload release assets
uses: svenstaro/upload-release-action@7319e4733ec7a184d739a6f412c40ffc339b69c7 # this is v2.5.0, but pinned
with:
@@ -49,13 +42,18 @@
file: "v2/out/release/*"
file_glob: true

- name: Login to registry
# note that all creds are on host and never passed into devcontainer
uses: docker/login-action@v2.1.0
with:
registry: ${{ secrets.REGISTRY_LOGIN }}
username: ${{ secrets.AZURE_CLIENT_ID }}
password: ${{ secrets.AZURE_CLIENT_SECRET }}
- name: Docker login
run: |
container_id=${{env.container_id}}
docker exec -e AZURE_CLIENT_ID -e AZURE_CLIENT_SECRET -e DOCKER_REGISTRY "$container_id" task docker-login
Member:
Now that these secrets are being passed in to the devcontainer, is there any possibility of them being logged or left behind as a part of the published image? If so, this opens up the chance of someone harvesting those secrets.

Collaborator Author (@super-harsh), Oct 4, 2023:
I don't think so - we already pass in secrets for live validation and az-login in other workflows as well?

Member:
We're protected from logging because GH actually redacts secret contents in the logs - even if you try to log the secret, it'll be redacted.

As for the rest, I think Harsh is right that we should be OK as long as we aren't actually saving the secret into the published container. I think it would be relatively obvious if we were, based on the dockerfile for the image we're building.

Member:
See my question above, though, about why we have to do it this way... it seems like the docker login should work?

Or is the issue that the docker context on the host and the docker context in the devcontainer are different, so we need the login to happen in the devcontainer?

Member:
Checking a git blame of this file, it was @Porges who wrote the comment

# note that all creds are on host and never passed into devcontainer

Working from the precept that smart people do things for good reasons, I'm worried that we're missing something here. To be more specific about my earlier concern: I'm not worried about disclosure during CI; I'm worried about the credentials being left lying around inside the container image, available for anyone to extract if they go nosing around inside.

Collaborator Author (@super-harsh):
By default, Docker looks for the native binary on each of the platforms, i.e. "osxkeychain" on macOS, "wincred" on windows, and "pass" on Linux. A special case is that on Linux, Docker will fall back to the "secretservice" binary if it cannot find the "pass" binary. If none of these binaries are present, it stores the credentials (i.e. password) in base64 encoding in the config files described above.

Here's the reference to how docker stores the credentials and auth tokens.
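To make that fallback concrete, here is a hedged sketch of how you could inspect what a login leaves behind on a Linux runner with no credential helper installed. The paths, registry name, and the use of jq are illustrative assumptions, not part of this PR.

# If "credsStore"/"credHelpers" appear, an external helper holds the secret;
# otherwise the "auths" entries contain base64(username:password).
cat ~/.docker/config.json
# Decoding a stored entry shows it is only base64-encoded, not encrypted
# (registry name is illustrative)
jq -r '.auths["example.azurecr.io"].auth' ~/.docker/config.json | base64 -d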

env:
DOCKER_REGISTRY: ${{ secrets.REGISTRY_LOGIN }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_CLIENT_SECRET }}

- name: Push docker image
run: docker push --all-tags ${{ secrets.REGISTRY_PUBLIC }}/azureserviceoperator
- name: Build, tag and push docker image
run: |
container_id=${{env.container_id}}
docker exec -e DOCKER_PUSH_TARGET "$container_id" task controller:docker-push-multiarch
Member:
minor: is there still a "normal" docker-push target? If not, it seems like we should just rename this to that? Yes, it's multiarch, but it's the only push we have?

Collaborator Author (@super-harsh):
We don't have any other docker push target apart from docker-push-local. I'm keeping the local push target for two reasons:

  • For local builds we don't need multi-arch images, as they take time to build and occupy space.
  • Buildx does not have direct support for pushing to local registries; we need to add a few extra steps to enable it (see the sketch below). This is mentioned in "Build cannot export to registry on localhost".
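For context, those "extra steps" are roughly what the Taskfile change in this PR does for the local target: create a builder whose BuildKit instance shares the host network so it can reach a registry bound to localhost, then push a single-platform image to it. A minimal sketch, assuming a local registry on port 5000 (the address and tag are illustrative):

# Builder that can reach a registry listening on localhost (e.g. a kind-local registry)
docker buildx create --driver-opt network=host --use
# Single-platform build pushed straight to the local registry
docker buildx build --push --tag localhost:5000/azureserviceoperator:latest .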

Member:
We should probably document this on the Taskfile targets: why we have two, and why different ones are used in different places.

There are some risks with this too (we're not actually testing the image we're creating), but I see that lots of people have this problem, as documented in:
docker/buildx#166
docker/buildx#1152

There does seem to be a way to push a buildx image to a local registry now though, so I am wondering if we should try swapping the kind steps to use the multiarch image?

Collaborator Author (@super-harsh), Oct 8, 2023:
We can certainly swap the kind steps to use buildx, but I'm not sure using the multi-arch images is a good idea. As I noticed, buildx takes around 10-15 minutes to build cross-platform images (on my machine), which would increase the CI run time.

Member:
CI shouldn't be building the image every time - it's supposed to be cached. We even have a workflow that runs weekly to keep it current.

So if switching CI to use buildx slows it down, we need to fix that.

Member:
I don't think CI caches this image @theunrepentantgeek - it caches the .devcontainer image, not the controller image (the one being built here). It must build the controller image every time because that's the image that:

  • Contains the code changes for the PR in question.
  • Runs in kind and is verified by the tests.

My understanding of what buildx does is that it basically calls N docker builds for the different architectures and then automatically merges the manifests (roughly the manual sequence sketched below). So it may be OK not to use it for the local build (see the issues I linked above, where this is discussed and people suggest using buildx with a single version argument to get local working, which is anyway going to be different from what we do for release).
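For readers unfamiliar with buildx internals, a rough manual equivalent of what it automates looks like the sketch below: build and push one image per architecture, then stitch them together with a manifest list. The registry and tags are illustrative only; buildx does all of this in a single invocation.

# Build and push one image per architecture (illustrative registry/tags)
docker build --platform linux/amd64 -t example.registry/aso:v2-amd64 .
docker push example.registry/aso:v2-amd64
docker build --platform linux/arm64 -t example.registry/aso:v2-arm64 .
docker push example.registry/aso:v2-arm64
# Merge them into a single multi-arch manifest list and push it
docker manifest create example.registry/aso:v2 example.registry/aso:v2-amd64 example.registry/aso:v2-arm64
docker manifest push example.registry/aso:v2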

env:
DOCKER_PUSH_TARGET: ${{ secrets.REGISTRY_PUBLIC }}
4 changes: 0 additions & 4 deletions .github/workflows/push-release-image-test.yaml

This file was deleted.

83 changes: 53 additions & 30 deletions Taskfile.yml
@@ -35,7 +35,8 @@ vars:
LATEST_VERSION_TAG:
sh: git describe --tags $(git rev-list --tags=v2* --max-count=1)

VERSION_FLAGS: -ldflags "-X {{.PACKAGE}}/internal/version.BuildVersion={{.VERSION}}"
VERSION_FLAGS: '"-X {{.PACKAGE}}/internal/version.BuildVersion={{.VERSION}}"'
Member:
Why remove -ldflags? It's one of the flags required to set the version in the executable. It doesn't make sense to me to remove it here and then restate it manually everywhere else.

If you have a need for part of this value elsewhere, introduce a different variable for that purpose rather than redefining this one and making things awkward.
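For reference, the split only changes where the -ldflags wrapper lives; the effect at build time is the same -X injection either way. A hedged sketch of what the expanded commands boil down to (the package path and version value are illustrative placeholders):

# What LDFLAGS expands to for a plain go build
go build -ldflags "-X <package>/internal/version.BuildVersion=v2.x.y" -o ./bin/aso-controller ./cmd/controller/
# The Dockerfile is passed only the -X key=value pair as a build-arg
# and re-wraps it in -ldflags inside the builder stage
docker build --build-arg 'VERSION_FLAGS=-X <package>/internal/version.BuildVersion=v2.x.y' .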

LDFLAGS: -ldflags {{.VERSION_FLAGS}}

CONTROLLER_DOCKER_IMAGE: azureserviceoperator:{{.VERSION}}
PUBLIC_REGISTRY: mcr.microsoft.com/k8s/
@@ -162,7 +163,7 @@ tasks:
- "{{.ARCHIVE}}"
cmds:
- mkdir -p ./bin/{{.GOOS}}-{{.GOARCH}}
- GOOS={{.GOOS}} GOARCH={{.GOARCH}} go build {{.VERSION_FLAGS}} -o {{.EXECUTABLE}}
- GOOS={{.GOOS}} GOARCH={{.GOARCH}} go build {{.LDFLAGS}} -o {{.EXECUTABLE}}
- if [ "{{.ARCHIVETYPE}}" = ".zip" ]; then zip -j -r {{.ARCHIVE}} {{.EXECUTABLE}}; fi
- if [ "{{.ARCHIVETYPE}}" = ".gz" ]; then gzip -v -c {{.EXECUTABLE}} > {{.ARCHIVE}} ; fi
vars:
@@ -239,7 +240,7 @@ tasks:
desc: Generate the {{.GENERATOR_APP}} binary.
dir: '{{.GENERATOR_ROOT}}'
cmds:
- go build {{.VERSION_FLAGS}} -o ../../bin/{{.GENERATOR_APP}} .
- go build {{.LDFLAGS}} -o ../../bin/{{.GENERATOR_APP}} .

# Stub retained until migration of workflows complete
generator:diagrams:
@@ -369,60 +370,76 @@ tasks:
deps:
- controller:generate-crds
sources:
# excluding the ./apis directory here
- "go.mod"
- "go.sum"
- "*.go"
- "internal/**/*.go"
- "pkg/**/*.go"
- "cmd/controller/**/*.go"
- "**/*.go"
cmds:
- CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build {{.VERSION_FLAGS}} -o ./bin/{{.CONTROLLER_APP}} ./cmd/controller/
- CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build {{.LDFLAGS}} -o ./bin/{{.CONTROLLER_APP}} ./cmd/controller/

controller:docker-build:
desc: Builds the {{.CONTROLLER_APP}} Docker image.
dir: "{{.CONTROLLER_ROOT}}"
deps:
- controller:build
- controller:bundle-crds
run: always
sources:
- "go.mod"
- "go.sum"
- "**/*.go"
- "out/crds/**/*"
- Dockerfile
- ./bin/{{.CONTROLLER_APP}}
- out/crds/**/*
cmds:
- docker build . -t {{.CONTROLLER_DOCKER_IMAGE}}
status:
- "docker manifest inspect {{.CONTROLLER_DOCKER_IMAGE}} > /dev/null"
- docker build . --build-arg VERSION_FLAGS={{.VERSION_FLAGS}} --build-arg CONTROLLER_APP={{.CONTROLLER_APP}} -t {{.CONTROLLER_DOCKER_IMAGE}}

controller:docker-build-and-save:
desc: Builds the {{.CONTROLLER_APP}} Docker image and saves it using docker save.
dir: "{{.CONTROLLER_ROOT}}"
deps:
deps:
- controller:docker-build
cmds:
- docker save {{.CONTROLLER_DOCKER_IMAGE}} > bin/$(echo '{{.CONTROLLER_DOCKER_IMAGE}}' | sed -e 's/:/_/g').tar

controller:docker-tag-version:
desc: Tags the {{.CONTROLLER_APP}} Docker image with the appropriate version.
dir: "{{.CONTROLLER_ROOT}}"
deps:
- controller:docker-build
cmds:
- 'if [ -z "{{.DOCKER_PUSH_TARGET}}" ]; then echo "Error: DOCKER_PUSH_TARGET must be set"; exit 1; fi'
- docker tag {{.CONTROLLER_DOCKER_IMAGE}} "{{.DOCKER_PUSH_TARGET}}/{{.CONTROLLER_DOCKER_IMAGE}}"

controller:docker-push-local:
desc: Pushes the controller container image to a local registry
deps:
- controller:docker-build
- controller:bundle-crds
dir: "{{.CONTROLLER_ROOT}}"
run: always
sources:
- "go.mod"
- "go.sum"
- "**/*.go"
- "out/crds/**/*"
- Dockerfile
cmds:
- docker tag {{.CONTROLLER_DOCKER_IMAGE}} {{.LOCAL_REGISTRY_CONTROLLER_DOCKER_IMAGE}}
- docker push {{.LOCAL_REGISTRY_CONTROLLER_DOCKER_IMAGE}}
status:
- "docker manifest inspect {{.LOCAL_REGISTRY_CONTROLLER_DOCKER_IMAGE}} > /dev/null"
# We don't use multi-arch images for the local registry, because building a cross-platform image takes about 10-15 minutes.
# That would increase our CI time, and in the local environment only the same-architecture image is ever pulled anyway.
- docker buildx create --driver-opt network=host --use
- docker buildx build --push
--build-arg VERSION_FLAGS={{.VERSION_FLAGS}}
--build-arg CONTROLLER_APP={{.CONTROLLER_APP}}
Member:
minor: Leave a comment here about how we don't use multi-arch platform options here, because we don't need them for local testing and they don't work for local registries? (Can link the GH issues I linked earlier.)

--tag "{{.LOCAL_REGISTRY_CONTROLLER_DOCKER_IMAGE}}" .

controller:docker-push-multiarch:
desc: Builds, tags and pushes the multi architecture controller container image to repository
deps:
- controller:bundle-crds
dir: "{{.CONTROLLER_ROOT}}"
run: always
sources:
- "go.mod"
- "go.sum"
- "**/*.go"
- "out/crds/**/*"
- Dockerfile
cmds:
- 'if [ -z "{{.DOCKER_PUSH_TARGET}}" ]; then echo "Error: DOCKER_PUSH_TARGET must be set"; exit 1; fi'
- docker buildx create --use
- docker buildx build --push
--build-arg VERSION_FLAGS={{.VERSION_FLAGS}}
--build-arg CONTROLLER_APP={{.CONTROLLER_APP}}
--platform linux/amd64,linux/arm64
--tag "{{.DOCKER_PUSH_TARGET}}/{{.CONTROLLER_DOCKER_IMAGE}}" .

controller:test-integration-envtest:
desc: Run integration tests with envtest using record/replay.
@@ -1030,6 +1047,12 @@ tasks:
- az login --service-principal -u {{.AZURE_CLIENT_ID}} -p {{.AZURE_CLIENT_SECRET}} --tenant {{.AZURE_TENANT_ID}} > /dev/null
- az account set --subscription {{.AZURE_SUBSCRIPTION_ID}}

docker-login:
desc: Docker login
cmds:
- 'if [ -z "{{.DOCKER_REGISTRY}}" ]; then echo "Error: DOCKER_REGISTRY must be set"; exit 1; fi'
- docker login {{.DOCKER_REGISTRY}} --username {{.AZURE_CLIENT_ID}} --password {{.AZURE_CLIENT_SECRET}}

header-check:
desc: Ensure all files have an appropriate license header.
cmds:
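Taken together, the release workflow now effectively runs something like the following inside the devcontainer (a sketch only; the values are placeholders, and in CI they come from GitHub secrets passed in via docker exec -e):

# Log in to the target registry with the service principal credentials
DOCKER_REGISTRY=example.azurecr.io AZURE_CLIENT_ID=<client-id> AZURE_CLIENT_SECRET=<client-secret> task docker-login
# Build, tag and push the linux/amd64 + linux/arm64 controller image
DOCKER_PUSH_TARGET=example.azurecr.io task controller:docker-push-multiarch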
7 changes: 7 additions & 0 deletions v2/.dockerignore
@@ -0,0 +1,7 @@
**

# Ignore everything except below
!/go.mod
!/go.sum
!**/*.go
!/out/crds
24 changes: 22 additions & 2 deletions v2/Dockerfile
@@ -1,10 +1,30 @@
# Note: This Dockerfile assumes that the binary has been built using the top-level Taskfile.yml
# Note: This Dockerfile assumes that the "bundle-crds" taskfile target has been run already
ARG VERSION_FLAGS

# Build the manager binary
FROM golang:1.20 as builder
ARG VERSION_FLAGS

WORKDIR /workspace/
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum

# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
RUN go mod download

# Copy the go source
COPY . ./

# Build
RUN CGO_ENABLED=0 go build -ldflags "${VERSION_FLAGS}" -o ./bin/aso-controller ./cmd/controller/

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY ./bin/aso-controller .
COPY --from=builder /workspace/bin/aso-controller .
COPY ./out/crds ./crds
USER nonroot:nonroot
ENTRYPOINT ["/aso-controller"]
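As a usage note on the reworked Dockerfile: because the binary is now compiled in the builder stage, the image can be built straight from the v2 directory once the CRDs have been bundled. A hedged, single-arch sketch (the tag and VERSION_FLAGS value are illustrative, mirroring what the Taskfile targets pass in):

# Bundle the CRDs first - the final stage copies ./out/crds into the image
task controller:bundle-crds
# Single-arch build; VERSION_FLAGS carries only the -X key=value pair,
# which the builder stage wraps in -ldflags itself
docker build . --build-arg 'VERSION_FLAGS=-X <package>/internal/version.BuildVersion=v2.x.y' --build-arg CONTROLLER_APP=aso-controller -t azureserviceoperator:v2.x.y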