BuildKite CI: Deployed buildkite agents can download and upload artifacts to gcloud #4804
Initial thinking around implementation, with the steps outlined below:
Result: Buildkite agents should be able to run ANY job that requires access to Google Cloud Storage, as long as the job key is prefixed with "upload" or "download" (see the environment hook below).
resource "google_service_account" "buildkite_gcs_account" {
account_id = "buildkite-${var.cluster_name}"
display_name = "Buildkite agent GCS service account -- cluster: ${var.cluster_name}"
project = "o1labs-192920"
}
resource "google_service_account_key" "buildkite_gcs_key" {
service_account_id = google_service_account.buildkite_gcs_account.name
}
resource "kubernetes_secret" "google-application-credentials" {
metadata {
name = "google-application-credentials"
namespace = "${var.cluster_namespace}"
}
data = {
"credentials.json" = base64decode(google_service_account_key.buildkite_gcs_key.private_key)
}
}
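A quick way to provision and sanity-check the above (a rough sketch, assuming the resources live in an already-initialized Terraform root module, kubectl is pointed at the target cluster, and <namespace> stands in for var.cluster_namespace):

# Provision the service account, key, and Kubernetes secret
terraform plan -out=buildkite-gcs.tfplan
terraform apply buildkite-gcs.tfplan

# Verify the secret landed in the cluster and actually contains credentials.json
kubectl --namespace <namespace> get secret google-application-credentials
kubectl --namespace <namespace> get secret google-application-credentials \
  -o jsonpath='{.data.credentials\.json}' | base64 --decode | head -c 80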
apiVersion: v1
kind: Pod
metadata:
  name: buildkite-agent
spec:
  containers:
    - name: buildkite-agent
      image: buildkite/agent
      env:
        - name: BUILDKITE_HOOKS_PATH
          value: {{ $.Values.buildkite_hooks_path }}
      volumeMounts:
        - name: credentials-json
          mountPath: "/etc/credentials.json"
          readOnly: true
  volumes:
    - name: credentials-json
      secret:
        secretName: google-application-credentials

** also may be able to use the official Buildkite Helm Chart's
$ cat /hooks/environment
#!/bin/bash
set -euo pipefail

if [[ "$BUILDKITE_STEP_KEY" =~ ^(upload|download) ]]; then
  export BUILDKITE_GS_APPLICATION_CREDENTIALS_JSON="$(cat /etc/credentials.json)"
  export BUILDKITE_ARTIFACT_UPLOAD_DESTINATION="gs://<ci_cd_bucket>/${BUILDKITE_JOB_ID}"
fi
[reference] buildkite agent hooks: https://buildkite.com/docs/agent/v3/hooks#available-hooks
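With the hook above exporting the credentials and upload destination, artifact steps can use the stock agent CLI unchanged; a rough sketch (the glob patterns are illustrative, not an agreed-upon layout):

# Inside a job whose step key starts with "upload": push build outputs to the GCS destination
buildkite-agent artifact upload "_build/**/*"

# Inside a later job whose step key starts with "download": pull the same artifacts back down
buildkite-agent artifact download "_build/**/*" .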
(step 3/iii from above) Terraform -- Buildkite Helm chart config and provisioning of Kubernetes Pods:
Note: Based on the official Buildkite Docker image, we can either mount hooks into the container at runtime (which raises the question of where to source the Buildkite hooks file/directory from) or copy them into the /buildkite/hooks directory (and ensure they are executable) during the image build. I generally favor maintaining a custom O(1) image to:
# Re: Mounting - pretty simple for running and testing locally, though more unwieldy and complicated via Kubernetes
docker run -it \
  -v "$HOME/buildkite-hooks:/buildkite/hooks:ro" \
  buildkite/agent:3
# Alternatively, if we create our own image based off `buildkite/agent`,
# we can copy our CI/CD hooks into the correct location by default and also enable
# developers/operators to mount additional hooks for developmental purposes:
$ cat Dockerfile-buildkite-agent
# Example O(1) labs custom Buildkite Agent Docker image
FROM buildkite/agent:3
COPY hooks /buildkite/hooks/
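For completeness, a rough sketch of building and running such an image locally (the image tag is an assumption, and the agent token is expected to already be in the environment):

# Build the custom agent image with the O(1) hooks baked in
docker build -f Dockerfile-buildkite-agent -t o1labs/buildkite-agent:local .

# Run it; the agent needs a token to register with Buildkite
docker run -it \
  -e BUILDKITE_AGENT_TOKEN="$BUILDKITE_AGENT_TOKEN" \
  o1labs/buildkite-agent:local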
@0x0i what is the benefit of using the official buildkite agent docker image vs. just including the buildkite-agent package in whatever environment we have? I foresee this being somewhat difficult to compose with our existing toolchain image -- it would be much easier to
@0x0i also this approach (terraform -> helm -> etc.) seems good to me (may be worth quickly chatting with @yourbuddyconner and @nholland94 to see how it could or should fit in with the other dhall stuff) -- in the short term (i.e. today), do you know of a quick way I can get a local agent to talk to gcloud so it can unblock some pipelines I want to write?
@bkase: we'll likely use the official Buildkite Helm chart for deploying the Buildkite agents across the Kubernetes cluster, though that's still under discussion (#4803) because of how it fits in with our current infrastructure stack; the Helm chart itself seems solidly built. If we do use it, it seems to make sense to leverage an image with proper packaging (either our own, the official image, a hybrid of both, etc.) rather than creating a base image and adding/maintaining the installation of necessary packages ourselves.
You should be able to properly set the environment variables mentioned earlier (e.g. $BUILDKITE_GS_APPLICATION_CREDENTIALS_JSON) to what you use locally and make use of the
# macOS installation
brew install minikube helm
minikube start  # should automatically choose *docker* virtualization, but if not, run: minikube start --driver=docker
helm install <cluster/release-name> buildkite/agent \
  --set privateSshKey="$(cat </path/to/your/github/private-key>)" \
  -f <helm-buildkite-values.yaml>

Note: most of the configuration defaults should be fine except:
extraEnv:
  - name: BUILDKITE_GS_APPLICATION_CREDENTIALS_JSON
    value: '<contents-of-gs-app-creds.json>'
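Once the chart is installed, a quick sanity check that the credentials variable actually reached an agent pod (the pod name below is illustrative):

# List the pods created by the Helm release
kubectl get pods

# Confirm the GCS credentials variable is present inside one of them
kubectl exec <buildkite-agent-pod-name> -- env | grep BUILDKITE_GS_APPLICATION_CREDENTIALS_JSON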
@bkase @yourbuddyconner I'm thinking we shouldn't have to worry about developing and maintaining our own Docker image for the buildkite-agents, at least for the purpose of setting buildkite-agent hooks, since:
@0x0i good call-out, I concur that we probably won't need to extend beyond the built-in functionality of the buildkite agent in the initial implementation.
See https://buildkite.com/docs/agent/v3/gcloud#uploading-artifacts-to-google-cloud-storage
The JSON credentials approach is nice because we would also be able to support artifact upload/download on local developers' machines trivially.
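A minimal sketch of that local flow (assumptions: a service-account key saved at ~/gcs-credentials.json and a placeholder bucket name):

# Export the same variables the environment hook sets on CI agents
export BUILDKITE_GS_APPLICATION_CREDENTIALS_JSON="$(cat ~/gcs-credentials.json)"
export BUILDKITE_ARTIFACT_UPLOAD_DESTINATION="gs://<ci_cd_bucket>/local-testing"

# Start a local agent (token assumed to be in the environment); artifact
# upload/download in the jobs it runs will now go to/from the bucket above
buildkite-agent start --token "$BUILDKITE_AGENT_TOKEN"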
Depends on #4802