[TEP-0089] - Phase 2 Signed TaskRun
Signed-off-by: pxp928 <parth.psu@gmail.com>
pxp928 committed Jul 29, 2022
1 parent b8b9a06 commit bd80422
Showing 18 changed files with 344 additions and 256 deletions.
129 changes: 112 additions & 17 deletions docs/spire.md
@@ -63,22 +63,22 @@ This feature relies on a SPIRE installation. This is how it integrates into the
┌─────────────┐ Register TaskRun Workload Identity ┌──────────┐
│ ├──────────────────────────────────────────────►│ │
│ Tekton │ │ SPIRE │
Controller │◄───────────┐ │ Server │
│ │ Listen on TaskRun │ │
└────────────┬┘ │ └──────────┘
▲ │ ┌───────┴───────────────────────────────┐ ▲
│ │ │ Tekton TaskRun │ │
│ │ └───────────────────────────────────────┘ │
│ Configure│ │ Attest
│ Pod & │ │ +
│ check │ │ Request
│ ready │ ┌───────────┐ │ SVIDs
│ └────►│ TaskRun ├────────────────────────┘
│ │ Pod │
│ └───────────┘ TaskRun Entrypointer
│ ▲ Sign Result and update
│ Get │ Get SVID TaskRun status with
│ SPIRE │ signature + cert
Pipelines │◄───────────┐ │ Server │
Controller │ │ Listen on TaskRun │ │
└────────────┬┘◄┐ │ └──────────┘
▲ │ ┌───────┴───────────────────────────────┐ ▲
│ │ │ Tekton TaskRun │ │
│ │ └───────────────────────────────────────┘ │
│ Configure│ │ Attest
│ Pod & │ └─────────────────┐ TaskRun Entrypointer │ +
│ check │ │ Sign Result and update │ Request
│ ready │ ┌───────────┐ │ the status with the │ SVIDs
│ └────►│ TaskRun ├──┘ signature + cert
│ │ Pod │ which will be used by
│ └───────────┘ tekton-pipelines-controller
│ ▲ to update TaskRun.
│ Get │ Get SVID
│ SPIRE │
│ server │ │
│ Credentials │ ▼
┌┴───────────────────┴─────────────────────────────────────────────────────┐
@@ -280,6 +280,101 @@ The signatures are being verified by the Tekton controller, the process of verif
- For each of the items in the results, verify its content against its associated `.sig` field
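
To make that check concrete, here is a minimal Go sketch of such a per-result verification, assuming the result value, its `.sig` companion field, and the PEM-encoded SVID certificate have already been extracted from the TaskRun; the helper name `verifyResultSignature` and the SHA-256/ECDSA choice are illustrative assumptions, not the controller's exact implementation.

```go
// Sketch only: verify one result value against its ".sig" companion field
// using the public key from the PEM-encoded SVID certificate. Names such as
// verifyResultSignature are hypothetical; SHA-256 + ECDSA is an assumption.
package resultverify

import (
	"crypto/ecdsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/base64"
	"encoding/pem"
	"fmt"
)

// verifyResultSignature returns nil only if sigB64 is a valid ECDSA signature
// over the SHA-256 digest of resultValue, made by the key carried in svidPEM.
func verifyResultSignature(resultValue, sigB64, svidPEM string) error {
	block, _ := pem.Decode([]byte(svidPEM))
	if block == nil {
		return fmt.Errorf("SVID is not valid PEM")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return fmt.Errorf("parsing SVID certificate: %w", err)
	}
	pub, ok := cert.PublicKey.(*ecdsa.PublicKey)
	if !ok {
		return fmt.Errorf("SVID does not carry an ECDSA public key")
	}
	sig, err := base64.StdEncoding.DecodeString(sigB64)
	if err != nil {
		return fmt.Errorf("decoding signature: %w", err)
	}
	digest := sha256.Sum256([]byte(resultValue))
	if !ecdsa.VerifyASN1(pub, digest[:], sig) {
		return fmt.Errorf("signature does not match result content")
	}
	return nil
}
```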


# TaskRun Status attestations

Each TaskRun status written by the tekton-pipelines-controller is signed to ensure that the TaskRun status cannot be
tampered with externally. On each retrieval of the TaskRun, the tekton-pipelines-controller checks whether the status is initialized,
and whether the signature validates the current status.
The signature and SVID are stored as annotations on the TaskRun status field and can be verified by a client.

Verification is performed on every consumption of the TaskRun except when the TaskRun is uninitialized. While uninitialized, the
tekton-pipelines-controller is not influenced by fields in the status and therefore will not sign an incorrect reflection of the TaskRun.

The spec and the TaskRun annotations/labels are not signed, since there are valid interactions with them from other controllers or users (e.g., cancelling a TaskRun).
Editing the object's annotations/labels or spec will therefore not make the status field unverifiable.

As the TaskRun progresses, the Pipelines Controller reconciles the TaskRun object and continually verifies the current hash against the `tekton.dev/status-hash-sig` annotation before updating the hash to match the new status and creating a new signature.
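
The following is a minimal sketch of that signing step, assuming the hash is a SHA-256 digest of the serialized status, signed with an ECDSA key derived from the controller SVID; the helper name `signStatusAnnotations` and the JSON serialization are assumptions for illustration only, not the controller's actual code.

```go
// Sketch only: hash the serialized status, sign the hash with the controller
// key, and record both as the tekton.dev/status-hash and
// tekton.dev/status-hash-sig annotations. Serialization details are assumed.
package statussign

import (
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"encoding/json"
)

const (
	statusHashAnnotation    = "tekton.dev/status-hash"
	statusHashSigAnnotation = "tekton.dev/status-hash-sig"
)

// signStatusAnnotations computes the status hash and its signature and stores
// them in the given annotations map.
func signStatusAnnotations(status interface{}, annotations map[string]string, key *ecdsa.PrivateKey) error {
	serialized, err := json.Marshal(status)
	if err != nil {
		return err
	}
	digest := sha256.Sum256(serialized)
	sig, err := ecdsa.SignASN1(rand.Reader, key, digest[:])
	if err != nil {
		return err
	}
	annotations[statusHashAnnotation] = hex.EncodeToString(digest[:])
	annotations[statusHashSigAnnotation] = base64.StdEncoding.EncodeToString(sig)
	return nil
}
```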

An example of the resulting TaskRun annotations:

```console
$ tkn tr describe non-falsifiable-provenance -oyaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  annotations:
    pipeline.tekton.dev/release: 3ee99ec
  creationTimestamp: "2022-03-04T19:10:46Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: tekton-pipelines
  name: non-falsifiable-provenance
  namespace: default
  resourceVersion: "23088242"
  uid: 548ebe99-d40b-4580-a9bc-afe80915e22e
spec:
  serviceAccountName: default
  taskSpec:
    results:
    - description: ""
      name: foo
    - description: ""
      name: bar
    steps:
    - image: ubuntu
      name: non-falsifiable
      resources: {}
      script: |
        #!/usr/bin/env bash
        sleep 30
        printf "%s" "hello" > "$(results.foo.path)"
        printf "%s" "world" > "$(results.bar.path)"
  timeout: 1m0s
status:
  annotations:
    tekton.dev/controller-svid: |
      -----BEGIN CERTIFICATE-----
      MIIB7jCCAZSgAwIBAgIRAI8/08uXSn9tyv7cRN87uvgwCgYIKoZIzj0EAwIwHjEL
      MAkGA1UEBhMCVVMxDzANBgNVBAoTBlNQSUZGRTAeFw0yMjAzMDQxODU0NTlaFw0y
      MjAzMDQxOTU1MDlaMB0xCzAJBgNVBAYTAlVTMQ4wDAYDVQQKEwVTUElSRTBZMBMG
      ByqGSM49AgEGCCqGSM49AwEHA0IABL+e9OjkMv+7XgMWYtrzq0ESzJi+znA/Pm8D
      nvApAHg3/rEcNS8c5LgFFRzDfcs9fxGSSkL1JrELzoYul1Q13XejgbMwgbAwDgYD
      VR0PAQH/BAQDAgOoMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNV
      HRMBAf8EAjAAMB0GA1UdDgQWBBR+ma+yZfo092FKIM4F3yhEY8jgDDAfBgNVHSME
      GDAWgBRKiCg5+YdTaQ+5gJmvt2QcDkQ6KjAxBgNVHREEKjAohiZzcGlmZmU6Ly9l
      eGFtcGxlLm9yZy90ZWt0b24vY29udHJvbGxlcjAKBggqhkjOPQQDAgNIADBFAiEA
      8xVWrQr8+i6yMLDm9IUjtvTbz9ofjSsWL6c/+rxmmRYCIBTiJ/HW7di3inSfxwqK
      5DKyPrKoR8sq8Ne7flkhgbkg
      -----END CERTIFICATE-----
    tekton.dev/status-hash: 76692c9dcd362f8a6e4bda8ccb4c0937ad16b0d23149ae256049433192892511
    tekton.dev/status-hash-sig: MEQCIFv2bW0k4g0Azx+qaeZjUulPD8Ma3uCUn0tXQuuR1FaEAiBHQwN4XobOXmC2nddYm04AZ74YubUyNl49/vnbnR/HcQ==
  completionTime: "2022-03-04T19:11:22Z"
  conditions:
  - lastTransitionTime: "2022-03-04T19:11:22Z"
    message: All Steps have completed executing
    reason: Succeeded
    status: "True"
    type: Succeeded
  - lastTransitionTime: "2022-03-04T19:11:22Z"
    message: Spire verified
    reason: TaskRunResultsVerified
    status: "True"
    type: SignedResultsVerified
  podName: non-falsifiable-provenance-pod
  startTime: "2022-03-04T19:10:46Z"
  steps:
  ...
<TRUNCATED>
```

## How is the status being verified

The signature is verified by the Tekton controller; the process of verification is as follows:

- Verify the status-hash fields
  - Verify the `tekton.dev/status-hash` content against its associated `tekton.dev/status-hash-sig` field. If the status
    hash does not match, verification fails and the `tekton.dev/not-verified = yes` annotation is added (a sketch of this
    check follows below).
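
Below is a hedged Go sketch of that check, consistent with the signing sketch earlier in this document: the hex-encoded hash is decoded and verified against the base64-encoded signature with the controller's public key, and any failure adds the `tekton.dev/not-verified = yes` annotation. The helper name `checkStatusHashSignature` is hypothetical, not the controller's exact function.

```go
// Sketch only: verify the recorded status hash against its signature and flag
// the TaskRun status when verification fails. Assumes the signature covers the
// raw SHA-256 digest stored (hex-encoded) in tekton.dev/status-hash.
package statusverify

import (
	"crypto/ecdsa"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

const (
	statusHashAnnotation    = "tekton.dev/status-hash"
	statusHashSigAnnotation = "tekton.dev/status-hash-sig"
	notVerifiedAnnotation   = "tekton.dev/not-verified"
)

// checkStatusHashSignature verifies tekton.dev/status-hash against
// tekton.dev/status-hash-sig; on any failure it marks the status not verified.
func checkStatusHashSignature(annotations map[string]string, pub *ecdsa.PublicKey) error {
	digest, err := hex.DecodeString(annotations[statusHashAnnotation])
	if err != nil {
		annotations[notVerifiedAnnotation] = "yes"
		return fmt.Errorf("decoding %s: %w", statusHashAnnotation, err)
	}
	sig, err := base64.StdEncoding.DecodeString(annotations[statusHashSigAnnotation])
	if err != nil {
		annotations[notVerifiedAnnotation] = "yes"
		return fmt.Errorf("decoding %s: %w", statusHashSigAnnotation, err)
	}
	if !ecdsa.VerifyASN1(pub, digest, sig) {
		annotations[notVerifiedAnnotation] = "yes"
		return fmt.Errorf("%s does not validate against %s", statusHashAnnotation, statusHashSigAnnotation)
	}
	return nil
}
```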

## Further Details

To learn more about SPIRE TaskRun attestations, check out the [TEP](https://github.com/tektoncd/community/blob/main/teps/0089-nonfalsifiable-provenance-support.md).
To learn more about SPIRE attestations, check out the [TEP](https://github.com/tektoncd/community/blob/main/teps/0089-nonfalsifiable-provenance-support.md).
3 changes: 0 additions & 3 deletions pkg/apis/config/feature_flags.go
@@ -142,9 +142,6 @@ func NewFeatureFlagsFromMap(cfgMap map[string]string) (*FeatureFlags, error) {
if err := setEmbeddedStatus(cfgMap, DefaultEmbeddedStatus, &tc.EmbeddedStatus); err != nil {
return nil, err
}
if err := setFeature(enableSpire, DefaultEnableSpire, &tc.EnableSpire); err != nil {
return nil, err
}

// Given that they are alpha features, Tekton Bundles and Custom Tasks should be switched on if
// enable-api-fields is "alpha". If enable-api-fields is not "alpha" then fall back to the value of
1 change: 0 additions & 1 deletion pkg/pod/pod.go
@@ -193,7 +193,6 @@ func (b *Builder) Build(ctx context.Context, taskRun *v1beta1.TaskRun, taskSpec
}

readyImmediately := isPodReadyImmediately(*featureFlags, taskSpec.Sidecars)

// append credEntrypointArgs with entrypoint arg that contains if spire is enabled by configmap
commonExtraEntrypointArgs = append(commonExtraEntrypointArgs, credEntrypointArgs...)

169 changes: 0 additions & 169 deletions pkg/pod/pod_test.go
@@ -2219,175 +2219,6 @@ func TestPodBuildwithSpireEnabled(t *testing.T) {
}
}

func TestPodBuildwithSpireEnabled(t *testing.T) {
placeToolsInit := corev1.Container{
Name: "place-tools",
Image: images.EntrypointImage,
WorkingDir: "/",
Command: []string{"/ko-app/entrypoint", "cp", "/ko-app/entrypoint", "/tekton/bin/entrypoint"},
VolumeMounts: []corev1.VolumeMount{binMount},
}

initContainers := []corev1.Container{placeToolsInit, tektonDirInit(images.EntrypointImage, []v1beta1.Step{{Name: "name"}})}
for i := range initContainers {
c := &initContainers[i]
c.VolumeMounts = append(c.VolumeMounts, corev1.VolumeMount{
Name: "spiffe-workload-api",
MountPath: "/spiffe-workload-api",
})
}

for _, c := range []struct {
desc string
trs v1beta1.TaskRunSpec
trAnnotation map[string]string
ts v1beta1.TaskSpec
want *corev1.PodSpec
wantAnnotations map[string]string
}{{
desc: "simple with debug breakpoint onFailure",
trs: v1beta1.TaskRunSpec{
Debug: &v1beta1.TaskRunDebug{
Breakpoint: []string{breakpointOnFailure},
},
},
ts: v1beta1.TaskSpec{
Steps: []v1beta1.Step{{
Name: "name",
Image: "image",
Command: []string{"cmd"}, // avoid entrypoint lookup.
}},
},
want: &corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
InitContainers: initContainers,
Containers: []corev1.Container{{
Name: "step-name",
Image: "image",
Command: []string{"/tekton/bin/entrypoint"},
Args: []string{
"-wait_file",
"/tekton/downward/ready",
"-wait_file_content",
"-post_file",
"/tekton/run/0/out",
"-termination_path",
"/tekton/termination",
"-step_metadata_dir",
"/tekton/run/0/status",
"-enable_spire",
"-entrypoint",
"cmd",
"--",
},
VolumeMounts: append([]corev1.VolumeMount{binROMount, runMount(0, false), downwardMount, {
Name: "tekton-creds-init-home-0",
MountPath: "/tekton/creds",
}, {
Name: "spiffe-workload-api",
MountPath: "/spiffe-workload-api",
}}, implicitVolumeMounts...),
TerminationMessagePath: "/tekton/termination",
}},
Volumes: append(implicitVolumes, binVolume, runVolume(0), downwardVolume, corev1.Volume{
Name: "tekton-creds-init-home-0",
VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}},
}, corev1.Volume{
Name: "spiffe-workload-api",
VolumeSource: corev1.VolumeSource{
CSI: &corev1.CSIVolumeSource{
Driver: "csi.spiffe.io",
},
},
}),
ActiveDeadlineSeconds: &defaultActiveDeadlineSeconds,
},
}} {
t.Run(c.desc, func(t *testing.T) {
featureFlags := map[string]string{
"enable-spire": "true",
}
names.TestingSeed()
store := config.NewStore(logtesting.TestLogger(t))
store.OnConfigChanged(
&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{Name: config.GetFeatureFlagsConfigName(), Namespace: system.Namespace()},
Data: featureFlags,
},
)
kubeclient := fakek8s.NewSimpleClientset(
&corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: "default"}},
&corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Name: "service-account", Namespace: "default"},
Secrets: []corev1.ObjectReference{{
Name: "multi-creds",
}},
},
&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "multi-creds",
Namespace: "default",
Annotations: map[string]string{
"tekton.dev/docker-0": "https://us.gcr.io",
"tekton.dev/docker-1": "https://docker.io",
"tekton.dev/git-0": "github.com",
"tekton.dev/git-1": "gitlab.com",
}},
Type: "kubernetes.io/basic-auth",
Data: map[string][]byte{
"username": []byte("foo"),
"password": []byte("BestEver"),
},
},
)
var trAnnotations map[string]string
if c.trAnnotation == nil {
trAnnotations = map[string]string{
ReleaseAnnotation: fakeVersion,
}
} else {
trAnnotations = c.trAnnotation
trAnnotations[ReleaseAnnotation] = fakeVersion
}
tr := &v1beta1.TaskRun{
ObjectMeta: metav1.ObjectMeta{
Name: "taskrun-name",
Namespace: "default",
Annotations: trAnnotations,
},
Spec: c.trs,
}

// No entrypoints should be looked up.
entrypointCache := fakeCache{}
builder := Builder{
Images: images,
KubeClient: kubeclient,
EntrypointCache: entrypointCache,
}

got, err := builder.Build(store.ToContext(context.Background()), tr, c.ts)
if err != nil {
t.Fatalf("builder.Build: %v", err)
}

expectedName := kmeta.ChildName(tr.Name, "-pod")
if d := cmp.Diff(expectedName, got.Name); d != "" {
t.Errorf("Pod name does not match: %q", d)
}

if d := cmp.Diff(c.want, &got.Spec, resourceQuantityCmp, volumeSort, volumeMountSort); d != "" {
t.Errorf("Diff %s", diff.PrintWantGot(d))
}

if c.wantAnnotations != nil {
if d := cmp.Diff(c.wantAnnotations, got.ObjectMeta.Annotations, cmpopts.IgnoreMapEntries(ignoreReleaseAnnotation)); d != "" {
t.Errorf("Annotation Diff(-want, +got):\n%s", d)
}
}
})
}
}

func TestMakeLabels(t *testing.T) {
taskRunName := "task-run-name"
want := map[string]string{
28 changes: 28 additions & 0 deletions pkg/reconciler/taskrun/taskrun.go
@@ -117,6 +117,20 @@ func (c *Reconciler) ReconcileKind(ctx context.Context, tr *v1beta1.TaskRun) pkg
// on the event to perform user facing initialisations, such has reset a CI check status
afterCondition := tr.Status.GetCondition(apis.ConditionSucceeded)
events.Emit(ctx, nil, afterCondition, tr)
} else if config.FromContextOrDefaults(ctx).FeatureFlags.EnableSpire {
var verified = false
if c.SpireClient != nil {
if err := c.SpireClient.VerifyStatusInternalAnnotation(ctx, tr, logger); err == nil {
verified = true
}
if !verified {
if tr.Status.Annotations == nil {
tr.Status.Annotations = map[string]string{}
}
tr.Status.Annotations[spire.NotVerifiedAnnotation] = "yes"
}
logger.Infof("taskrun verification status: %t with hash %v \n", verified, tr.Status.Annotations[spire.TaskRunStatusHashAnnotation])
}
}

// If the TaskRun is complete, run some post run fixtures when applicable
@@ -289,6 +303,20 @@ func (c *Reconciler) finishReconcileUpdateEmitEvents(ctx context.Context, tr *v1
events.Emit(ctx, beforeCondition, afterCondition, tr)

var err error
// Add status internal annotations hash only if it was verified
if config.FromContextOrDefaults(ctx).FeatureFlags.EnableSpire &&
c.SpireClient != nil && c.SpireClient.CheckSpireVerifiedFlag(tr) {
if err := spire.CheckStatusInternalAnnotation(tr); err != nil {
err = c.SpireClient.AppendStatusInternalAnnotation(ctx, tr)
if err != nil {
logger.Warn("Failed to sign TaskRun internal status hash", zap.Error(err))
events.EmitError(controller.GetEventRecorder(ctx), err, tr)
} else {
logger.Infof("Successfully signed TaskRun internal status with hash: %v",
tr.Status.Annotations[spire.TaskRunStatusHashAnnotation])
}
}
}

merr := multierror.Append(previousError, err).ErrorOrNil()

0 comments on commit bd80422