Kettle collects test results scattered across a variety of GCS buckets, stores them in a local SQLite database, and outputs newline-delimited JSON files for import into BigQuery. See the overview for more details.

Results are stored in the `k8s-gubernator:build` BigQuery dataset, which is publicly accessible.
Kettle runs as a pod in the `k8s-gubernator/g8r` cluster. To drop into its context, run:

```
$ make -C kettle get-cluster-credentials
```
If you change:

- `buckets.yaml`: do nothing, it's automatically fetched from GitHub
- `deployment.yaml`: deploy with `make push deploy`
- any code: from the repo root, deploy with `make -C kettle push update`, and revert with `make -C kettle rollback` if it fails
  - `push` builds the container image and pushes it to the image registry (Google Container Registry)
  - `update` sets the image of the existing kettle Pod, which triggers a restart cycle
  - See the Makefile for details
  - If you make local changes in the branch prior to `make push`/`update`, the image will be uploaded with `-dirty` in the tag. Keep this in mind when setting the image. If you see a Pod in an `ImagePullBackOff` loop, there is likely an issue with how `kubectl set image` was run, and the image does not exist in the specified location.
You can verify that Kettle is working, e.g. by looking at the logs:

```
make get-cluster-credentials
kubectl logs -l app=kettle
# ...
==== 2018-07-06 08:19:05 PDT ========================================
PULLED 174
ACK irrelevant 172
EXTEND-ACK 2
gs://kubernetes-jenkins/pr-logs/pull/kubeflow_kubeflow/1136/kubeflow-presubmit/2385 True True 2018-07-06 07:51:49 PDT FAILED
gs://kubernetes-jenkins/logs/ci-cri-containerd-e2e-ubuntu-gce/5742 True True 2018-07-06 07:44:17 PDT FAILURE
ACK "finished.json" 2
Downloading JUnit artifacts.
```
Alternatively, navigate to the Gubernator BigQuery page (click on Details) to see a table showing the last date/time the metrics were collected.
To restart Kettle:

```
kubectl delete pod -l app=kettle
kubectl rollout status deployment/kettle  # monitor pod restart status
kubectl get pod -l app=kettle             # should show a new pod name
```
You can watch the pod start up and collect data from various GCS buckets by looking at its logs via:

```
kubectl logs -f $(kubectl get pod -l app=kettle -oname)
```

or access the log history with the query `resource.labels.container_name="kettle"`.
Kettle might take a couple of hours to become fully functional and start updating BigQuery. You can always go back to the Gubernator BigQuery page and check whether data collection has resumed. Backfill should happen automatically.
Kettle Staging uses a similar deployment to Kettle, with the following differences:

- 100G SSD vs 1001G in production
- a limit on the number of builds to pull from each job bucket (default 1000 each), set via the `BUILD_LIMIT` env in `deployment-staging.yaml`
- writes to the `build.staging` table only; production writes to three tables: `build.all`, `build.day`, and `build.week`

It can be deployed with `make -C kettle deploy-staging`. If already deployed, you may just run `make -C kettle update-staging`.
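The `BUILD_LIMIT` env mentioned above lives in the staging Deployment's container spec. A minimal sketch of what that entry looks like — the surrounding field values here are illustrative assumptions, not copied from the real `deployment-staging.yaml`:

```yaml
# Illustrative fragment only; everything except the BUILD_LIMIT env name is assumed.
spec:
  template:
    spec:
      containers:
        - name: kettle
          env:
            - name: BUILD_LIMIT   # max builds pulled per job bucket
              value: "1000"
```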
To add fields to the BQ table, visit the `k8s-gubernator:build` BigQuery dataset, select the table (e.g. Build > All), then Schema -> Edit Schema -> Add field. Also update `schema.json` to match.
To add a new GCS bucket to Kettle, simply update `buckets.yaml` in `master`; it will be auto-pulled by Kettle on the next cycle.
```yaml
gs://<bucket path>:      # bucket url
  contact: "username"    # GitHub username
  prefix: "abc:"         # the identifier prefixed to jobs from this bucket (ends in :)
  sequential: (bool)     # an optional boolean that indicates whether test runs in
                         # this bucket are numbered sequentially
  exclude_jobs:          # list of jobs to explicitly exclude from kettle data collection
    - job_name1
    - job_name2
```
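As a rough illustration of how these fields behave — a sketch of the semantics described above, not Kettle's actual code — the `prefix` is prepended to job names from the bucket, and anything listed in `exclude_jobs` is skipped:

```python
# Sketch of the assumed bucket-config semantics; the helper name is ours,
# not taken from Kettle's source.

def job_key(bucket_config: dict, job_name: str):
    """Return the prefixed job identifier, or None if the job is excluded."""
    if job_name in bucket_config.get("exclude_jobs", []):
        return None  # explicitly excluded from data collection
    prefix = bucket_config.get("prefix", "")  # e.g. "abc:" (ends in :)
    return prefix + job_name

config = {"contact": "username", "prefix": "abc:", "exclude_jobs": ["job_name2"]}
print(job_key(config, "job_name1"))  # -> abc:job_name1
print(job_key(config, "job_name2"))  # -> None (excluded)
```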
A postsubmit job pushes Kettle automatically on changes.
Kettle's `stream.py` leverages Google Cloud Pub/Sub to alert on GCS changes within the `kubernetes-jenkins` bucket. These events are tied to the `gcs-changes` topic in the `kubernetes-jenkins` project, where Prow job artifacts are collated. Each time an artifact is finalized, a Pub/Sub event is triggered, and Kettle collects job information when it sees an uploaded resource called `finished.json` (indicating the build completed).
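The filtering step above can be sketched as follows. This is a hedged illustration of the described logic, not code from `stream.py`: the attribute/payload keys follow GCS JSON-format notifications, and the helper name is hypothetical.

```python
# Illustrative sketch: decide whether a GCS Pub/Sub notification signals a
# completed build. Helper name and exact checks are assumptions, not Kettle code.

def is_finished_build(attributes: dict, payload: dict) -> bool:
    """True when a newly finalized object is a build's finished.json."""
    if attributes.get("eventType") != "OBJECT_FINALIZE":
        return False  # ignore deletes, metadata updates, etc.
    return payload.get("name", "").endswith("finished.json")

attrs = {"eventType": "OBJECT_FINALIZE"}
obj = {"bucket": "kubernetes-jenkins", "name": "logs/some-job/123/finished.json"}
print(is_finished_build(attrs, obj))  # -> True
```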
Topic creation can be performed by running:

```
gcloud config set project kubernetes-jenkins
gsutil notification create -t gcs-changes -f json gs://kubernetes-jenkins
```
Subscriptions are in Kubernetes Jenkins Build - PubSub:

- `kettle`
- `kettle-staging`

They are split so that the staging instance does not consume events aimed at production.
These can be created via:

```
gcloud pubsub subscriptions create <subscription name> --topic=gcs-changes --topic-project="kubernetes-jenkins" --message-filter='attributes.eventType = "OBJECT_FINALIZE"'
```
For Kettle to have permission, its user needs access. When updating or changing a subscription, make sure to add `kettle@k8s-gubernator.iam.gserviceaccount.com` as a Pub/Sub Editor:

```
gcloud pubsub subscriptions add-iam-policy-binding \
    projects/kubernetes-jenkins/subscriptions/kettle-staging \
    --member=serviceAccount:kettle@k8s-gubernator.iam.gserviceaccount.com \
    --role=roles/pubsub.editor
```
- Occasionally data from Kettle stops updating; we suspect this is due to a transient hang when contacting GCS (#8800). If this happens, restart Kettle.