This repository hosts a temporary GitHub-owned fork of the Sigstore Policy Controller repository. Once the functionality only present in this fork is merged upstream to sigstore/policy-controller, this fork will be archived.
The `policy-controller` admission controller can be used to enforce policy on a Kubernetes cluster based on verifiable supply-chain metadata from cosign and artifact attestations produced by the attest-build-provenance GitHub Action.
For more information about the `policy-controller`, have a look at the Sigstore documentation.
See the official documentation on using artifact attestations to establish build provenance and the blog post introducing Artifact Attestations.
Please see the `examples/` directory for example policies, etc.
This repo includes a `policy-tester` tool which enables checking a policy against various images.
In the root of this repo, run the following to build:

```shell
make policy-tester
```
Then run it, pointing to a YAML file containing a ClusterImagePolicy and an image to evaluate the policy against:

```shell
(set -o pipefail && \
    ./policy-tester \
        --policy=test/testdata/policy-controller/tester/cip-public-keyless.yaml \
        --image=ghcr.io/sigstore/cosign/cosign:v1.9.0 | jq)
```
To allow the webhook to make requests to ACR, you must use one of the following methods to authenticate:
- Managed identities (used with AKS clusters)
- Service principals (used with AKS clusters)
- Pod imagePullSecrets (used with non-AKS clusters)
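For the imagePullSecrets route on a non-AKS cluster, one possible sketch is to create a Docker registry secret for the ACR; the registry name, namespace, and credential variables below are placeholders, not values from this repository:

```shell
# Hypothetical example: create an image pull secret for an ACR.
# Registry, namespace, and the SP_CLIENT_ID/SP_CLIENT_SECRET variables are placeholders.
kubectl create secret docker-registry acr-pull-secret \
  --namespace cosign-system \
  --docker-server=myregistry.azurecr.io \
  --docker-username="${SP_CLIENT_ID}" \
  --docker-password="${SP_CLIENT_SECRET}"
```

The secret would then be referenced from the pod spec's `imagePullSecrets` (or the relevant service account).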
See the official documentation for more details.
- You must enable managed identities for the cluster, using the `--enable-managed-identity` flag with either the `az aks create` or `az aks update` command.
- You must attach the ACR to the AKS cluster, using the `--attach-acr` flag with either the `az aks create` or `az aks update` command. See here for more details.
- You must set the `AZURE_CLIENT_ID` environment variable to the managed identity's client ID.
- You must set the `AZURE_TENANT_ID` environment variable to the Azure tenant the managed identity resides in.
These will be detected by the Azure credential manager.
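The steps above might look like the following with the Azure CLI; the resource group, cluster, and registry names are placeholders:

```shell
# Hypothetical names throughout -- substitute your own resource group, cluster, and ACR.
# Enable a managed identity when creating (or updating) the cluster.
az aks create --resource-group my-rg --name my-aks --enable-managed-identity
# Attach the ACR so the cluster's kubelet identity can pull from it.
az aks update --resource-group my-rg --name my-aks --attach-acr myregistry
# Expose the identity to the webhook via environment variables.
export AZURE_CLIENT_ID="<managed identity client ID>"
export AZURE_TENANT_ID="<tenant ID>"
```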
When you create a cluster that has managed identities enabled, a user-assigned managed identity called `<AKS cluster name>-agentpool` is created. Use this identity's client ID when setting `AZURE_CLIENT_ID`. Make sure the ACR is attached to your cluster.
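One way to look up that identity's client ID is via the cluster's kubelet identity profile; the resource group and cluster names below are placeholders:

```shell
# Query the kubelet (agentpool) identity's client ID -- names are placeholders.
az aks show --resource-group my-rg --name my-aks \
  --query identityProfile.kubeletidentity.clientId -o tsv
```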
If you are deploying policy-controller directly from this repository with `make ko-apply`, you will need to add `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` to the list of environment variables in the webhook deployment configuration.
You can provide the managed identity's client ID as a custom environment variable when installing the Helm chart:

```shell
helm install policy-controller oci://ghcr.io/artifact-attestations-helm-charts/policy-controller \
  --version 0.9.0 \
  --set webhook.env.AZURE_CLIENT_ID=my-managed-id-client-id,webhook.env.AZURE_TENANT_ID=tenant-id
```
You should be able to provide the service principal client ID and tenant ID as workload identity annotations:

```shell
helm install policy-controller oci://ghcr.io/artifact-attestations-helm-charts/policy-controller \
  --version 0.9.0 \
  --set-json webhook.serviceAccount.annotations="{\"azure.workload.identity/client-id\": \"${SERVICE_PRINCIPAL_CLIENT_ID}\", \"azure.workload.identity/tenant-id\": \"${TENANT_ID}\"}"
```
This project is licensed under the terms of the Apache 2.0 open source license. Please refer to Apache 2.0 for the full terms.
See CODEOWNERS for a list of maintainers.
If you have any questions or issues following the examples outlined in this repository, please file an issue and we will assist you.
Versions of the policy-controller are able to run on the following versions of Kubernetes:
|                 | policy-controller > 0.2.x | policy-controller > 0.10.x |
|-----------------|---------------------------|----------------------------|
| Kubernetes 1.23 | ✓ |      |
| Kubernetes 1.24 | ✓ |      |
| Kubernetes 1.25 | ✓ |      |
| Kubernetes 1.27 |   | ✓    |
| Kubernetes 1.28 |   | ✓    |
| Kubernetes 1.29 |   | ✓ \* |

\* note: not fully tested yet, but can be installed
Should you discover any security issues, please refer to Sigstore's security policy.
The branch `release` on the private fork is used for customer-facing released code. In order to push a new release, follow these steps:

1. Merge any changes into the `release` branch.
2. Tag it as `v0.9.0+githubX` (incrementing the `X` as needed).
3. Push the tag to the private fork.
4. The Release GitHub Action workflow will be triggered automatically when the tag is pushed.
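The steps above can be sketched as follows; the source branch name and the `+github1` increment are placeholders for whatever is current:

```shell
# Sketch of the release flow -- branch name and tag increment are placeholders.
git checkout release
git merge my-feature-branch    # step 1: land the changes on the release branch
git tag v0.9.0+github1         # step 2: bump the +githubX suffix as needed
git push origin v0.9.0+github1 # step 3: pushing the tag triggers the Release workflow
```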