
Helm: enable configuring controller manager & audit ports #1438

Merged: 14 commits, Aug 23, 2021
33 changes: 33 additions & 0 deletions cmd/build/helmify/delete-ports.yaml
@@ -0,0 +1,33 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: gatekeeper-controller-manager
namespace: gatekeeper-system
spec:
template:
spec:
containers:
- name: manager
ports:
- containerPort: 8888
$patch: delete
- containerPort: 8443
$patch: delete
- containerPort: 9090
$patch: delete
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: gatekeeper-audit
namespace: gatekeeper-system
spec:
template:
spec:
containers:
- name: manager
ports:
- containerPort: 8888
$patch: delete
- containerPort: 9090
$patch: delete
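For context, `$patch: delete` is a Kubernetes strategic-merge-patch directive: a list entry carrying it is removed from the base manifest's matching item (matched here via the `containerPort` merge key), so the Helm templates can re-add the ports with substitutable values. A minimal sketch of the effect (port numbers illustrative):

```yaml
# Base container spec:
ports:
- containerPort: 8443
- containerPort: 9090
# Patch:
ports:
- containerPort: 8443
  $patch: delete
# Result after kustomize applies the strategic merge patch:
ports:
- containerPort: 9090
```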
1 change: 1 addition & 0 deletions cmd/build/helmify/kustomization.yaml
@@ -8,6 +8,7 @@ bases:
- "../../../config/overlays/mutation_webhook" # calls ../../default
patchesStrategicMerge:
- kustomize-for-helm.yaml
- delete-ports.yaml
patchesJson6902:
- target:
group: apiextensions.k8s.io
39 changes: 38 additions & 1 deletion cmd/build/helmify/kustomize-for-helm.yaml
@@ -63,7 +63,9 @@ spec:
containers:
- name: manager
args:
- --port=8443
- --port=HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PORT
- --health-addr=:HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_HEALTH_PORT
- --prometheus-port=HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PROMETHEUS_PORT
Member:
Thoughts about calling this metrics-port? Since the exporter can target more than Prometheus.

Contributor Author:
Sounds reasonable, I will update the PR when I get to work tomorrow or later tonight.

Contributor Author:
I guess the main reason why I used prometheusPort is that the flag is --prometheus-port, but if you change this flag in the future to make it more generic we won't have to change the Helm values at least :)
I will verify the change locally and then update the PR.

- --logtostderr
- --log-denies={{ .Values.logDenies }}
- --emit-admission-events={{ .Values.emitAdmissionEvents }}
@@ -75,6 +77,24 @@ spec:
- HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_EXEMPT_NAMESPACES
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
image: "{{ .Values.image.repository }}:{{ .Values.image.release }}"
ports:
- containerPort: HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PORT
name: webhook-server
protocol: TCP
- containerPort: HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PROMETHEUS_PORT
name: metrics
protocol: TCP
- containerPort: HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_HEALTH_PORT
name: healthz
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_HEALTH_PORT
livenessProbe:
httpGet:
path: /healthz
port: HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_HEALTH_PORT
resources:
HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_CONTAINER_RESOURCES: ""
nodeSelector:
@@ -111,8 +131,25 @@ spec:
- --operation=audit
- --operation=status
- --logtostderr
- --health-addr=:HELMSUBST_DEPLOYMENT_AUDIT_HEALTH_PORT
- --prometheus-port=HELMSUBST_DEPLOYMENT_AUDIT_PROMETHEUS_PORT
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
image: "{{ .Values.image.repository }}:{{ .Values.image.release }}"
ports:
- containerPort: HELMSUBST_DEPLOYMENT_AUDIT_PROMETHEUS_PORT
name: metrics
protocol: TCP
- containerPort: HELMSUBST_DEPLOYMENT_AUDIT_HEALTH_PORT
name: healthz
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: HELMSUBST_DEPLOYMENT_AUDIT_HEALTH_PORT
livenessProbe:
httpGet:
path: /healthz
port: HELMSUBST_DEPLOYMENT_AUDIT_HEALTH_PORT
resources:
HELMSUBST_DEPLOYMENT_AUDIT_CONTAINER_RESOURCES: ""
nodeSelector:
10 changes: 10 additions & 0 deletions cmd/build/helmify/replacements.go
@@ -7,8 +7,18 @@ var replacements = map[string]string{

"HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_HOST_NETWORK": `{{ .Values.controllerManager.hostNetwork }}`,

"HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PORT": `{{ .Values.controllerManager.port }}`,

"HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_HEALTH_PORT": `{{ .Values.controllerManager.healthPort }}`,

"HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PROMETHEUS_PORT": `{{ .Values.controllerManager.prometheusPort }}`,

"HELMSUBST_DEPLOYMENT_AUDIT_HOST_NETWORK": `{{ .Values.audit.hostNetwork }}`,

"HELMSUBST_DEPLOYMENT_AUDIT_HEALTH_PORT": `{{ .Values.audit.healthPort }}`,

"HELMSUBST_DEPLOYMENT_AUDIT_PROMETHEUS_PORT": `{{ .Values.audit.prometheusPort }}`,

`HELMSUBST_DEPLOYMENT_AUDIT_NODE_SELECTOR: ""`: `{{- toYaml .Values.audit.nodeSelector | nindent 8 }}`,

`HELMSUBST_DEPLOYMENT_AUDIT_AFFINITY: ""`: `{{- toYaml .Values.audit.affinity | nindent 8 }}`,
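The HELMSUBST tokens above are plain-text placeholders: kustomize emits them literally, and the helmify build then swaps each token for the Helm template expression in this map. A sketch of the pipeline for one flag (illustrative, not verbatim build output):

```yaml
# kustomize output:
args:
- --port=HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PORT
# after helmify replacement (what lands in the chart template):
args:
- --port={{ .Values.controllerManager.port }}
# after `helm template` with the chart's default values:
args:
- --port=8443
```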
23 changes: 14 additions & 9 deletions cmd/build/helmify/static/README.md
@@ -29,16 +29,16 @@ _See [helm install](https://helm.sh/docs/helm/helm_install/) for command documen
**Upgrading from < v3.4.0**
Chart 3.4.0 deprecates support for Helm 2 and also removes the creation of the `gatekeeper-system` Namespace from within the chart. This follows Helm 3 Best Practices.

Option 1:
A simple way to upgrade is to uninstall first and re-install with 3.4.0 or greater.

```console
$ helm uninstall gatekeeper
$ helm install -n gatekeeper-system [RELEASE_NAME] gatekeeper/gatekeeper --create-namespace

```

Option 2:
Run the `helm_migrate.sh` script before installing the 3.4.0 or greater chart. This will remove the Helm secret for the original release, while keeping all of the resources. It then updates the annotations of the resources so that the new chart can import and manage them.

```console
@@ -63,7 +63,7 @@ _See [Exempting Namespaces](https://open-policy-agent.github.io/gatekeeper/websi
## Parameters

| Parameter | Description | Default |
| :------------------------------------------- | :------------------------------------------------------------------------------------- | :------------------------------------------------------------------------ |
| postInstall.labelNamespace.enabled | Add labels to the namespace during post install hooks | `true` |
| postInstall.labelNamespace.image.repository | Image with kubectl to label the namespace | `line/kubectl-kustomize` |
| postInstall.labelNamespace.image.tag | Image tag | `1.20.4-4.0.5` |
@@ -73,11 +73,11 @@ _See [Exempting Namespaces](https://open-policy-agent.github.io/gatekeeper/websi
| constraintViolationsLimit | The maximum # of audit violations reported on a constraint | `20` |
| auditFromCache | Take the roster of resources to audit from the OPA cache | `false` |
| auditChunkSize | Chunk size for listing cluster resources for audit (alpha feature) | `0` |
| auditMatchKindOnly | Only check resources of the kinds specified in all constraints defined in the cluster. | `false` |
| disableValidatingWebhook | Disable the validating webhook | `false` |
| validatingWebhookTimeoutSeconds | The timeout for the validating webhook in seconds | `3` |
| validatingWebhookFailurePolicy | The failurePolicy for the validating webhook | `Ignore` |
| validatingWebhookCheckIgnoreFailurePolicy | The failurePolicy for the check-ignore-label validating webhook | `Fail` |
| enableDeleteOperations | Enable validating webhook for delete operations | `false` |
| experimentalEnableMutation | Enable mutation (alpha feature) | `false` |
| emitAdmissionEvents | Emit K8s events in gatekeeper namespace for admission violations (alpha feature) | `false` |
@@ -86,17 +86,22 @@ _See [Exempting Namespaces](https://open-policy-agent.github.io/gatekeeper/websi
| logLevel | Minimum log level | `INFO` |
| image.pullPolicy | The image pull policy | `IfNotPresent` |
| image.repository | Image repository | `openpolicyagent/gatekeeper` |
| image.release | The image release tag to use | Current release version: `v3.6.0-beta.3` |
| image.pullSecrets | Specify an array of imagePullSecrets | `[]` |
| resources | The resource request/limits for the container image | limits: 1 CPU, 512Mi, requests: 100mCPU, 256Mi |
| nodeSelector | The node selector to use for pod scheduling | `kubernetes.io/os: linux` |
| affinity | The node affinity to use for pod scheduling | `{}` |
| tolerations | The tolerations to use for pod scheduling | `[]` |
| controllerManager.healthPort | Health port for controller manager | `9090` |
| controllerManager.port | Webhook-server port for controller manager | `8443` |
| controllerManager.prometheusPort | Metrics port for controller manager | `8888` |
| controllerManager.priorityClassName | Priority class name for controller manager | `system-cluster-critical` |
| controllerManager.exemptNamespaces | The namespaces to exempt | `[]` |
| controllerManager.hostNetwork | Enables controllerManager to be deployed on hostNetwork | `false` |
| audit.priorityClassName | Priority class name for audit controller | `system-cluster-critical` |
| audit.hostNetwork | Enables audit to be deployed on hostNetwork | `false` |
| audit.healthPort | Health port for audit | `9090` |
| audit.prometheusPort | Metrics port for audit | `8888` |
| replicas | The number of Gatekeeper replicas to deploy for the webhook | `3` |
| podAnnotations | The annotations to add to the Gatekeeper pods | `container.seccomp.security.alpha.kubernetes.io/manager: runtime/default` |
| podLabels | The labels to add to the Gatekeeper pods | `{}` |
5 changes: 5 additions & 0 deletions cmd/build/helmify/static/values.yaml
@@ -36,6 +36,9 @@ secretAnnotations: {}
controllerManager:
exemptNamespaces: []
hostNetwork: false
port: 8443
prometheusPort: 8888
healthPort: 9090
priorityClassName: system-cluster-critical
affinity:
podAntiAffinity:
@@ -60,6 +63,8 @@ controllerManager:
memory: 256Mi
audit:
hostNetwork: false
prometheusPort: 8888
healthPort: 9090
priorityClassName: system-cluster-critical
affinity: {}
tolerations: []
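With these defaults in place, the new ports can be overridden at install time with a custom values file. A sketch (the file name and port numbers here are illustrative, e.g. `helm install gatekeeper gatekeeper/gatekeeper -f my-values.yaml`):

```yaml
# my-values.yaml (hypothetical override file)
controllerManager:
  port: 9443           # webhook-server containerPort and --port flag
  prometheusPort: 9102 # metrics containerPort and --prometheus-port flag
  healthPort: 9191     # healthz containerPort and --health-addr flag
audit:
  prometheusPort: 9102
  healthPort: 9191
```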
3 changes: 1 addition & 2 deletions config/webhook/service.yaml
@@ -1,4 +1,3 @@

apiVersion: v1
kind: Service
metadata:
@@ -7,7 +6,7 @@ metadata:
spec:
ports:
- port: 443
targetPort: 8443
targetPort: webhook-server
Comment on lines -10 to +9

Contributor Author:
Realized I needed to change the gatekeeper-webhook-service port. Instead of changing it with kustomize, I changed it in the config file and referenced the port by name.
This also simplifies GKE usage: since we use the port name instead of the port number, users won't have to update the service. I have updated the docs accordingly.

Contributor:
Perfect!

selector:
control-plane: controller-manager
gatekeeper.sh/operation: webhook
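With a named `targetPort`, kubelet resolves the port by looking up the name among the pod's container ports, so the Service stays correct even when the numeric webhook port is reconfigured through the chart. A minimal sketch of the pairing (the container port number shown is the chart default):

```yaml
# Container side (the name is what the Service references):
ports:
- containerPort: 8443
  name: webhook-server
  protocol: TCP
---
# Service side:
spec:
  ports:
  - port: 443
    targetPort: webhook-server
```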