This guide explains how to install Tekton Pipelines. It covers the following topics:
- Before you begin
- Installing Tekton Pipelines on Kubernetes
- Configuring PipelineResource storage
- Configuring CloudEvents notifications
- Configuring remote Task and Pipeline resolution
- Configuring self-signed cert for private registry
- Customizing basic execution parameters
- Enabling larger results using sidecar logs
- Configuring High Availability
- Configuring tekton pipeline controller performance
- Creating a custom release of Tekton Pipelines
- Verify Tekton Pipelines release
- Verify Tekton Resources
- Next steps
- You must have a Kubernetes cluster running version 1.22 or later.
  If you don't already have a cluster, you can create one for testing with kind.
  Install `kind` and create a cluster by running `kind create cluster`. This creates a
  cluster that runs locally, with RBAC enabled and your user granted the `cluster-admin` role.
- If you want to support high-availability use cases, install a Metrics Server on your
  cluster (an example install command follows this list).
- Choose the version of Tekton Pipelines you want to install. You have the following options:
  - Official - install this unless you have a specific reason to go for a different release.
  - Nightly - may contain bugs, install at your own risk. Nightlies live at
    `gcr.io/tekton-nightly`.
  - HEAD - this is the bleeding edge. It contains unreleased code that may result in
    unpredictable behavior. To get started, see the development guide instead of this page.
- Grant `cluster-admin` permissions to the current user.
  See Role-based access control for more information.
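For the Metrics Server item above, one common way to install it is to apply the upstream
metrics-server release manifest (shown below as a sketch; verify that this manifest suits
your cluster and distribution, as some distributions ship their own Metrics Server):

```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```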
To install Tekton Pipelines on a Kubernetes cluster:
1. Run the following command to install Tekton Pipelines and its dependencies:

   ```bash
   kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
   ```

   Or, for the nightly release, use:

   ```bash
   kubectl apply --filename https://storage.googleapis.com/tekton-releases-nightly/pipeline/latest/release.yaml
   ```

   You can install a specific release using `previous/$VERSION_NUMBER`. For example:

   ```bash
   kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.2.0/release.yaml
   ```

   If your container runtime does not support `image-reference:tag@digest` (for example,
   `cri-o` used in OpenShift 4.x), use `release.notags.yaml` instead:

   ```bash
   kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.notags.yaml
   ```

   Note: Some cloud providers (such as GKE) may also require you to allow port 8443 in your firewall rules so that the Tekton Pipelines webhook is reachable.

2. Monitor the installation using the following command until all components show a `Running` status:

   ```bash
   kubectl get pods --namespace tekton-pipelines --watch
   ```

   Note: Hit CTRL+C to stop monitoring.
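While watching, you should eventually see output similar to the following (the pod name
suffixes and ages shown are illustrative):

```
NAME                                           READY   STATUS    RESTARTS   AGE
tekton-pipelines-controller-6d989cc968-j57cs   1/1     Running   0          2m
tekton-pipelines-webhook-69744499d9-t58s5      1/1     Running   0          2m
```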
Congratulations! You have successfully installed Tekton Pipelines on your Kubernetes cluster. Next, see the following topics:
- Configuring PipelineResource storage to set up artifact storage for Tekton Pipelines.
- Customizing basic execution parameters if you need to customize your service account, timeout, or Pod template values.
To install Tekton Pipelines on OpenShift, you must first apply the `anyuid` security
context constraint to the `tekton-pipelines-controller` service account. This is required
to run the webhook Pod. See Security Context Constraints for more information.
1. Log on as a user with `cluster-admin` privileges. The following example uses the default `system:admin` user:

   ```bash
   oc login -u system:admin
   ```

2. Set up the namespace (project) and configure the service account:

   ```bash
   oc new-project tekton-pipelines
   oc adm policy add-scc-to-user anyuid -z tekton-pipelines-controller
   oc adm policy add-scc-to-user anyuid -z tekton-pipelines-webhook
   ```

3. Install Tekton Pipelines:

   ```bash
   oc apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.notags.yaml
   ```

   See the OpenShift CLI documentation for more information on the `oc` command.

4. Monitor the installation using the following command until all components show a `Running` status:

   ```bash
   oc get pods --namespace tekton-pipelines --watch
   ```

   Note: Hit CTRL+C to stop monitoring.
Congratulations! You have successfully installed Tekton Pipelines on your OpenShift environment. Next, see the following topics:
- Configuring PipelineResource storage to set up artifact storage for Tekton Pipelines.
- Customizing basic execution parameters if you need to customize your service account, timeout, or Pod template values.
If you want to run OpenShift 4.x on your laptop (or desktop), you should take a look at Red Hat CodeReady Containers.
⚠️ PipelineResources are deprecated. For storage, consider using Workspaces with
`VolumeClaimTemplates` to automatically provision and manage Persistent Volume Claims
(PVCs). Read more in TEP-0074.
PipelineResources are one of the ways that Tekton passes data between Tasks. If you intend to use PipelineResources in your Pipelines then you'll need to configure a storage location for that data to be put so that it can be shared between Tasks in the Pipeline.
The storage options available for sharing PipelineResources between Tasks in a Pipeline are a persistent volume or a cloud storage bucket (S3 or GCS).
Either option provides the same functionality to Tekton Pipelines. Choose the option that best suits your business needs. For example:
- In some environments, creating a persistent volume could be slower than transferring files to/from a cloud storage bucket.
- If the cluster is running in multiple zones, accessing a persistent volume could be unreliable.
Note: To customize the names of the ConfigMaps for artifact persistence (e.g. to avoid
collisions with other services), rename the ConfigMap and update the env value defined in
controller.yaml.
To configure a persistent volume, use a ConfigMap with the name `config-artifact-pvc` and
the following attributes:

- `size`: the size of the volume. Default is 5GiB.
- `storageClassName`: the storage class of the volume. The possible values depend on the
  cluster configuration and the underlying infrastructure provider. Default is the default
  storage class.
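For example, a minimal `config-artifact-pvc` might look like the following sketch (the
size and storage class values are illustrative; substitute values appropriate for your
cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-pvc
  namespace: tekton-pipelines
data:
  size: 10Gi
  storageClassName: manual
```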
To configure either an S3 bucket or a GCS bucket, use a ConfigMap with the name
`config-artifact-bucket` and the following attributes:

- `location` - the address of the bucket, for example `gs://mybucket` or `s3://mybucket`.
- `bucket.service.account.secret.name` - the name of the secret containing the credentials
  for the service account with access to the bucket.
- `bucket.service.account.secret.key` - the key in the secret with the required service
  account JSON file.
- `bucket.service.account.field.name` - the name of the environment variable to use when
  specifying the secret path. Defaults to `GOOGLE_APPLICATION_CREDENTIALS`. Set to
  `BOTO_CONFIG` if using S3 instead of GCS.
Important: Configure your bucket's retention policy to delete all files after your Tasks finish running.

Note: You can only use an S3 bucket located in the us-east-1 region. This is a limitation of `gsutil` running a `boto` configuration behind the scenes to access the S3 bucket.
Below is an example configuration that uses an S3 bucket:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tekton-storage
  namespace: tekton-pipelines
type: kubernetes.io/opaque
stringData:
  boto-config: |
    [Credentials]
    aws_access_key_id = AWS_ACCESS_KEY_ID
    aws_secret_access_key = AWS_SECRET_ACCESS_KEY
    [s3]
    host = s3.us-east-1.amazonaws.com
    [Boto]
    https_validate_certificates = True
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
  namespace: tekton-pipelines
data:
  location: s3://mybucket
  bucket.service.account.secret.name: tekton-storage
  bucket.service.account.secret.key: boto-config
  bucket.service.account.field.name: BOTO_CONFIG
```
Below is an example configuration that uses a GCS bucket:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tekton-storage
  namespace: tekton-pipelines
type: kubernetes.io/opaque
stringData:
  gcs-config: |
    {
      "type": "service_account",
      "project_id": "gproject",
      "private_key_id": "some-key-id",
      "private_key": "-----BEGIN PRIVATE KEY-----\nME[...]dF=\n-----END PRIVATE KEY-----\n",
      "client_email": "tekton-storage@gproject.iam.gserviceaccount.com",
      "client_id": "1234567890",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/tekton-storage%40gproject.iam.gserviceaccount.com"
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
  namespace: tekton-pipelines
data:
  location: gs://mybucket
  bucket.service.account.secret.name: tekton-storage
  bucket.service.account.secret.key: gcs-config
  bucket.service.account.field.name: GOOGLE_APPLICATION_CREDENTIALS
```
Four remote resolvers are currently provided as part of the Tekton Pipelines installation.
By default, these remote resolvers are enabled. Each resolver can be disabled by setting
the appropriate feature flag in the `resolvers-feature-flags` ConfigMap in the
`tekton-pipelines-resolvers` namespace:

- The `bundles` resolver, disabled by setting the `enable-bundles-resolver` feature flag to `false`.
- The `git` resolver, disabled by setting the `enable-git-resolver` feature flag to `false`.
- The `hub` resolver, disabled by setting the `enable-hub-resolver` feature flag to `false`.
- The `cluster` resolver, disabled by setting the `enable-cluster-resolver` feature flag to `false`.
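For example, to keep only the `git` resolver enabled, you could apply a ConfigMap along
these lines (the particular combination of values is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: resolvers-feature-flags
  namespace: tekton-pipelines-resolvers
data:
  enable-bundles-resolver: "false"
  enable-git-resolver: "true"
  enable-hub-resolver: "false"
  enable-cluster-resolver: "false"
```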
When configured to do so, Tekton can generate CloudEvents for `TaskRun`, `PipelineRun` and
`Run` lifecycle events. The main configuration parameter is the URL of the sink. When not
set, no notification is generated.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
data:
  default-cloud-events-sink: https://my-sink-url
```
Additionally, CloudEvents for `Runs` require an extra configuration to be enabled. This
setting exists to avoid collisions with CloudEvents that might be sent by custom task
controllers:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
data:
  send-cloudevents-for-runs: "true"
```
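If you prefer not to edit the ConfigMaps by hand, the same settings can be applied with
`kubectl patch`, in the style used elsewhere in this guide (the sink URL is a placeholder):

```bash
kubectl patch cm config-defaults -n tekton-pipelines -p '{"data":{"default-cloud-events-sink":"https://my-sink-url"}}'
kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"send-cloudevents-for-runs":"true"}}'
```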
The `SSL_CERT_DIR` environment variable is set to `/etc/ssl/certs` as the default cert
directory. If you are using a self-signed cert for a private registry and the cert file is
not under the default cert directory, configure your registry cert in the
`config-registry-cert` ConfigMap with the key `cert`.
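A sketch of such a ConfigMap, with a placeholder certificate body, might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-registry-cert
  namespace: tekton-pipelines
data:
  cert: |
    -----BEGIN CERTIFICATE-----
    <PEM-encoded certificate for your private registry>
    -----END CERTIFICATE-----
```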
Environment variables can be configured in the following ways, listed in order of precedence from lowest to highest:

1. Implicit environment variables
2. `Step`/`StepTemplate` environment variables
3. Environment variables specified via a default `PodTemplate`
4. Environment variables specified via a `PodTemplate`

The environment variables specified by a `PodTemplate` supersede all other ways of
specifying environment variables. However, there is one exception: the
`default-forbidden-env` configuration lists environment variables that cannot be updated
via a `PodTemplate`.
For example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  default-timeout-minutes: "50"
  default-service-account: "tekton"
  default-forbidden-env: "TEST_TEKTON"
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mytask
  namespace: default
spec:
  steps:
    - name: echo-env
      image: ubuntu
      command: ["bash", "-c"]
      args: ["echo $TEST_TEKTON "]
      env:
        - name: "TEST_TEKTON"
          value: "true"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: mytaskrun
  namespace: default
spec:
  taskRef:
    name: mytask
  podTemplate:
    env:
      - name: "TEST_TEKTON"
        value: "false"
```
In the above example, the environment variable `TEST_TEKTON` will not be overridden by the
value specified in the `podTemplate`, because the `config-defaults` option
`default-forbidden-env` is configured with the value `TEST_TEKTON`.
You can specify your own values that replace the default service account (`ServiceAccount`),
timeout (`Timeout`), and Pod template (`PodTemplate`) values used by Tekton Pipelines in
`TaskRun` and `PipelineRun` definitions. To do so, modify the ConfigMap `config-defaults`
with your desired values.
The example below customizes the following:
- the default service account from `default` to `tekton`.
- the default timeout from 60 minutes to 20 minutes.
- the default `app.kubernetes.io/managed-by` label applied to all Pods created to execute
  `TaskRuns`.
- the default Pod template to include a node selector to select the node where the Pod
  will be scheduled by default. A list of supported fields is available here. For more
  information, see `PodTemplate` in `TaskRuns` or `PodTemplate` in `PipelineRuns`.
- the default `Workspace` configuration, which can be set for any `Workspaces` that a
  Task declares but that a TaskRun does not explicitly provide.
- the default maximum number of combinations of `Parameters` in a `Matrix` that can be
  used to fan out a `PipelineTask`. For more information, see `Matrix`.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
data:
  default-service-account: "tekton"
  default-timeout-minutes: "20"
  default-pod-template: |
    nodeSelector:
      kops.k8s.io/instancegroup: build-instance-group
  default-managed-by-label-value: "my-tekton-installation"
  default-task-run-workspace-binding: |
    emptyDir: {}
  default-max-matrix-combinations-count: "1024"
```
Note: The `_example` key in the provided config-defaults.yaml file lists the keys you can
customize along with their default values.
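To review the `_example` key and any values you have already set, you can read the
ConfigMap back directly:

```bash
kubectl get configmap config-defaults -n tekton-pipelines -o yaml
```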
To customize the behavior of the Pipelines Controller, modify the ConfigMap `feature-flags` via:

```bash
kubectl edit configmap feature-flags -n tekton-pipelines
```
Note: Changing feature flags may result in undefined behavior for TaskRuns and PipelineRuns that are running while the change occurs.
The flags in this ConfigMap are as follows:
- `disable-affinity-assistant` - set this flag to `true` to disable the Affinity Assistant
  that is used to provide Node Affinity for `TaskRun` pods that share a workspace volume.
  The Affinity Assistant is incompatible with other affinity rules configured for `TaskRun`
  pods.

  Note: The Affinity Assistant uses inter-pod affinity and anti-affinity, which require a
  substantial amount of processing and can significantly slow down scheduling in large
  clusters. We do not recommend using them in clusters larger than several hundred nodes.

  Note: Pod anti-affinity requires nodes to be consistently labelled; in other words,
  every node in the cluster must have an appropriate label matching `topologyKey`. If some
  or all nodes are missing the specified `topologyKey` label, it can lead to unintended
  behavior.

- `await-sidecar-readiness`: set this flag to `"false"` to allow the Tekton controller to
  start a TaskRun's first step immediately without waiting for sidecar containers to be
  running first. Using this option should decrease the time it takes for a TaskRun to
  start running, and will allow TaskRun pods to be scheduled in environments that don't
  support Downward API volumes (e.g. some virtual kubelet implementations). However, this
  may lead to unexpected behaviour with Tasks that use sidecars, or in clusters that use
  injected sidecars (e.g. Istio). Setting this flag to `"false"` will mean the
  `running-in-environment-with-injected-sidecars` flag has no effect.

- `running-in-environment-with-injected-sidecars`: set this flag to `"false"` to allow the
  Tekton controller to start a TaskRun's first step immediately if it has no Sidecars
  specified. Using this option should decrease the time it takes for a TaskRun to start
  running. However, for clusters that use injected sidecars (e.g. Istio) this can lead to
  unexpected behavior.

- `require-git-ssh-secret-known-hosts`: set this flag to `"true"` to require that Git SSH
  Secrets include a `known_hosts` field. This ensures that a git remote server's key is
  validated before data is accepted from it when authenticating over SSH. Secrets that
  don't include a `known_hosts` field will result in the TaskRun failing validation and
  not running. (A sketch of such a Secret appears after the example ConfigMap below.)

- `enable-tekton-oci-bundles`: set this flag to `"true"` to enable Tekton OCI bundle usage
  (see the tekton bundle contract). Enabling this option allows the use of the `bundle`
  field in `taskRef` and `pipelineRef` for `Pipeline`, `PipelineRun` and `TaskRun`. By
  default, this option is disabled (`"false"`), which means the `bundle` field is
  disallowed.

- `disable-creds-init` - set this flag to `"true"` to disable Tekton's built-in credential
  initialization and use Workspaces to mount credentials from Secrets instead. The default
  is `false`. For more information, see the associated issue.

- `enable-api-fields`: set this flag to "stable" to allow only the most stable features to
  be used. Set it to "alpha" to allow alpha features to be used.

- `embedded-status`: set this flag to "full" to enable full embedding of `TaskRun` and
  `Run` statuses in the `PipelineRun` status. Set it to "minimal" to populate the
  `ChildReferences` field in the `PipelineRun` status with name, kind, and API version
  information for each `TaskRun` and `Run` in the `PipelineRun` instead. Set it to "both"
  to do both. For more information, see Configuring usage of `TaskRun` and `Run` embedded
  statuses.

- `resource-verification-mode`: setting this flag to "enforce" will enforce verification
  of Tasks/Pipelines; failing to verify will fail the TaskRun/PipelineRun. "warn" will
  only log the error message, and "skip" will skip verification entirely.

- `results-from`: set this flag to "termination-message" to fetch results from the
  container's termination message. This is the default method of extracting results. Set
  it to "sidecar-logs" to extract results from a results sidecar's logs instead of the
  termination message.

- `enable-provenance-in-status`: set this flag to "true" to enable recording the
  `provenance` field in `TaskRun` and `PipelineRun` status. The `provenance` field
  contains metadata about resources used in the TaskRun/PipelineRun, such as the source
  from which a remote Task/Pipeline definition was fetched.

- `custom-task-version`: set this flag to "v1beta1" to have `PipelineRuns` create
  `CustomRuns` from Custom Tasks. Set it to "v1alpha1" to have `PipelineRuns` create the
  legacy alpha `Runs`. This may be needed if you are using legacy Custom Tasks that listen
  for `*v1alpha1.Run` instead of `*v1beta1.CustomRun`. For more information, see Runs and
  CustomRuns. The flag defaults to "v1beta1".
For example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
data:
  enable-api-fields: "alpha" # Allow alpha fields to be used in Tasks and Pipelines.
```
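As referenced in the `require-git-ssh-secret-known-hosts` description above, a Git SSH
Secret that passes that validation might look like the following sketch (the key material
and host entry are placeholders; the `tekton.dev/git-0` annotation associates the Secret
with a git server):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-ssh-credentials
  annotations:
    tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: <PEM-encoded private key>
  known_hosts: <known_hosts entry for the git server>
```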
Alpha features in the following table are still in development and their syntax is subject to change.

- To enable the features without an individual flag: set the `enable-api-fields` feature
  flag to `"alpha"` in the `feature-flags` ConfigMap alongside your Tekton Pipelines
  deployment via
  `kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"enable-api-fields":"alpha"}}'`.
- To enable the features with an individual flag: set the individual flag accordingly in
  the `feature-flags` ConfigMap alongside your Tekton Pipelines deployment. Example:
  `kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"<FLAG-NAME>":"<FLAG-VALUE>"}}'`.
Features currently in "alpha" are:
| Feature | Proposal | Release | Individual Flag |
|---|---|---|---|
| Bundles | TEP-0005 | v0.18.0 | `enable-tekton-oci-bundles` |
| Isolated Step & Sidecar Workspaces | TEP-0029 | v0.24.0 | |
| Hermetic Execution Mode | TEP-0025 | v0.25.0 | |
| Propagated Workspaces | TEP-0111 | v0.40.0 | |
| Windows Scripts | TEP-0057 | v0.28.0 | |
| Debug | TEP-0042 | v0.26.0 | |
| Step and Sidecar Overrides | TEP-0094 | v0.34.0 | |
| Matrix | TEP-0090 | v0.38.0 | |
| Embedded Statuses | TEP-0100 | v0.35.0 | `embedded-status` |
| Task-level Resource Requirements | TEP-0104 | v0.39.0 | |
| Object Params and Results | TEP-0075 | v0.38.0 | |
| Array Results | TEP-0076 | v0.38.0 | |
| Trusted Resources | TEP-0091 | N/A | `resource-verification-mode` |
| Provenance field in Status | issue#5550 | N/A | `enable-provenance-in-status` |
| Larger Results via Sidecar Logs | TEP-0127 | v0.43.0 | `results-from` |
Beta features are fields of stable CRDs that follow our "beta" compatibility policy. To
enable these features, set the `enable-api-fields` feature flag to `"beta"` in the
`feature-flags` ConfigMap alongside your Tekton Pipelines deployment via:

```bash
kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"enable-api-fields":"beta"}}'
```
For beta versions of Tekton CRDs, setting `enable-api-fields` to "beta" is the same as setting it to "stable".
Note: The maximum size of a Task's results is limited by the container termination message
feature of Kubernetes, as results are passed back to the controller via this mechanism. At
present, the limit per Task is 4096 bytes. All results produced by the Task share this
upper limit.

To exceed this limit of 4096 bytes, you can enable larger results using sidecar logs. By
enabling this feature, you will have a configurable limit (with a default of 4096 bytes)
per result with no restriction on the number of results. The results are still stored in
the TaskRun CRD, so they should not exceed the 1.5MB CRD size limit.
Note: To enable this feature, you need to grant `get` access to all `pods/log` to the
Tekton pipelines controller. This means that the controller has the ability to access the
pod logs.

1. Create a cluster role and rolebinding by applying the following spec to provide log
   access to `tekton-pipelines-controller`:

   ```bash
   kubectl apply -f optional_config/enable-log-access-to-controller/
   ```

2. Set the `results-from` feature flag to use sidecar logs by setting
   `results-from: sidecar-logs` in the ConfigMap:

   ```bash
   kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"results-from":"sidecar-logs"}}'
   ```

3. If you want the size per result to be something other than 4096 bytes, set the
   `max-result-size` feature flag to the desired number of bytes (for example, 8192).
   Note: The value you set here cannot exceed the CRD size limit of 1.5 MB.

   ```bash
   kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"max-result-size":"<VALUE-IN-BYTES>"}}'
   ```
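As a quick sanity check, you can read the flags back to confirm they took effect (the
jsonpath bracket syntax is used here because the key names contain dashes):

```bash
kubectl get cm feature-flags -n tekton-pipelines -o jsonpath="{.data['results-from']}"
kubectl get cm feature-flags -n tekton-pipelines -o jsonpath="{.data['max-result-size']}"
```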
If you want to run Tekton Pipelines in a way that webhooks are resilient against failures
and support high-concurrency scenarios, you need to run a Metrics Server in your
Kubernetes cluster. This is required by the Horizontal Pod Autoscalers to compute the
replica count.

See HA Support for Tekton Pipeline Controllers for instructions on configuring High
Availability in the Tekton Pipelines Controller.

The default configuration is defined in webhook-hpa.yaml, which can be customized to
better fit specific use cases.

Out of the box, the Tekton Pipelines Controller is configured for relatively small-scale
deployments, but several options for configuring Pipelines' performance are available. See
the Performance Configuration document, which describes how to change the default
ThreadsPerController, QPS and Burst settings to meet your requirements.
The Tekton project provides support for running on x86 Linux Kubernetes nodes.
The project produces images capable of running on other architectures and operating systems, but may not be able to help debug issues specific to those platforms as readily as those that affect Linux on x86.
The controller and webhook components are currently built for:
- linux/amd64
- linux/arm64
- linux/arm (Arm v7)
- linux/ppc64le (PowerPC)
- linux/s390x (IBM Z)
The entrypoint component is also built for Windows, which enables TaskRun workloads to execute on Windows nodes. See Windows documentation for more information.
Additional components to support PipelineResources may be available for other architectures as well.
You can create a custom release of Tekton Pipelines by following and customizing the steps in Creating an official release. For example, you might want to customize the container images built and used by Tekton Pipelines.
We will refine this process over time to be more streamlined. For now, please follow the steps listed in this section to verify a Tekton Pipelines release.
Tekton Pipelines' images have been signed by Tekton Chains since release 0.27.1. You can verify the images with `cosign` using Tekton's public key.
With Go 1.16+, you can install `cosign` by running:

```bash
go install github.com/sigstore/cosign/cmd/cosign@latest
```
You can verify Tekton Pipelines official images using the Tekton public key:

```bash
cosign verify -key https://raw.githubusercontent.com/tektoncd/chains/main/tekton.pub gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1
```
which results in:
```
Verification for gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
  - Any certificates were verified against the Fulcio roots.

{
  "Critical": {
    "Identity": {
      "docker-reference": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller"
    },
    "Image": {
      "Docker-manifest-digest": "sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8"
    },
    "Type": "Tekton container signature"
  },
  "Optional": {}
}
```
The verification shows a list of checks performed and returns the digest in
`Critical.Image.Docker-manifest-digest`, which can be used to retrieve the provenance from
the transparency logs for that image using `rekor-cli`.
Install `rekor-cli` by running:

```bash
go install -v github.com/sigstore/rekor/cmd/rekor-cli@latest
```
Now, use the digest collected in the previous section from
`Critical.Image.Docker-manifest-digest`, for example,
`sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8`.

Search the transparency log with the digest just collected:

```bash
rekor-cli search --sha sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8
```
which results in:
```
Found matching entries (listed by UUID):
68a53d0e75463d805dc9437dda5815171502475dd704459a5ce3078edba96226
```
Tekton Chains generates provenance based on the custom format, in which the `subject`
holds the list of artifacts that were built as part of the release. For the Pipelines
release, `subject` includes a list of images including the pipeline controller, pipeline
webhook, etc. Use the UUID to get the provenance:

```bash
rekor-cli get --uuid 68a53d0e75463d805dc9437dda5815171502475dd704459a5ce3078edba96226 --format json | jq -r .Attestation | base64 --decode | jq
```
which results in:
```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "predicateType": "https://tekton.dev/chains/provenance",
  "subject": [
    {
      "name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller",
      "digest": {
        "sha256": "0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8"
      }
    },
    {
      "name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint",
      "digest": {
        "sha256": "2fa7f7c3408f52ff21b2d8c4271374dac4f5b113b1c4dbc7d5189131e71ce721"
      }
    },
    {
      "name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init",
      "digest": {
        "sha256": "83d5ec6addece4aac79898c9631ee669f5fee5a710a2ed1f98a6d40c19fb88f7"
      }
    },
    {
      "name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter",
      "digest": {
        "sha256": "e4d77b5b8902270f37812f85feb70d57d6d0e1fed2f3b46f86baf534f19cd9c0"
      }
    },
    {
      "name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop",
      "digest": {
        "sha256": "59b5304bcfdd9834150a2701720cf66e3ebe6d6e4d361ae1612d9430089591f8"
      }
    },
    {
      "name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/pullrequest-init",
      "digest": {
        "sha256": "4992491b2714a73c0a84553030e6056e6495b3d9d5cc6b20cf7bc8c51be779bb"
      }
    },
    {
      "name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook",
      "digest": {
        "sha256": "bf0ef565b301a1981cb2e0d11eb6961c694f6d2401928dccebe7d1e9d8c914de"
      }
    }
  ],
  ...
```
Now, verify the digest in the release.yaml by matching it with the provenance; for
example, the digest for the release `v0.28.1`:

```bash
curl -s https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.28.1/release.yaml | grep github.com/tektoncd/pipeline/cmd/controller:v0.28.1 | awk -F"github.com/tektoncd/pipeline/cmd/controller:v0.28.1@" '{print $2}'
```

which results in:

```
sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8
```
Now, you can verify the deployment specifications in the release.yaml to match each of
these images and their digests. The `tekton-pipelines-controller` deployment specification
has a container named `tekton-pipelines-controller` and a list of image references with
their digests as part of the `args`:

```yaml
containers:
  - name: tekton-pipelines-controller
    image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1@sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8
    args: [
      # These images are built on-demand by `ko resolve` and are replaced
      # by image references by digest.
      "-git-image",
      "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.28.1@sha256:83d5ec6addece4aac79898c9631ee669f5fee5a710a2ed1f98a6d40c19fb88f7",
      "-entrypoint-image",
      "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:v0.28.1@sha256:2fa7f7c3408f52ff21b2d8c4271374dac4f5b113b1c4dbc7d5189131e71ce721",
      "-nop-image",
      "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop:v0.28.1@sha256:59b5304bcfdd9834150a2701720cf66e3ebe6d6e4d361ae1612d9430089591f8",
      "-imagedigest-exporter-image",
      "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.28.1@sha256:e4d77b5b8902270f37812f85feb70d57d6d0e1fed2f3b46f86baf534f19cd9c0",
      "-pr-image",
      "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/pullrequest-init:v0.28.1@sha256:4992491b2714a73c0a84553030e6056e6495b3d9d5cc6b20cf7bc8c51be779bb",
```
Similarly, you can verify the rest of the images which were published as part of the Tekton Pipelines release:

- gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init
- gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint
- gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop
- gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter
- gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/pullrequest-init
- gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook
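For example, a small shell loop (a sketch; adjust the release version as needed) extracts
each digest-pinned image reference from the release manifest so you can compare them
against the provenance `subject` entries:

```bash
RELEASE=v0.28.1
for img in git-init entrypoint nop imagedigestexporter pullrequest-init webhook; do
  # Print every digest-pinned reference to this image found in the release manifest.
  curl -s "https://storage.googleapis.com/tekton-releases/pipeline/previous/${RELEASE}/release.yaml" \
    | grep -o "github.com/tektoncd/pipeline/cmd/${img}:${RELEASE}@sha256:[a-f0-9]*" \
    | sort -u
done
```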
Trusted Resources is a feature to verify Tekton Tasks and Pipelines. The current version
of the feature supports v1beta1 `Task` and `Pipeline`. For more details, please take a
look at Trusted Resources.
To get started with Tekton Pipelines, see the Tekton Pipelines Tutorial and take a look at our examples.