Misc fixes for nginx-ingress chart for better keel and prom-oper #6104

Merged Sep 1, 2020 · 3 commits
2 changes: 1 addition & 1 deletion charts/ingress-nginx/Chart.yaml
@@ -1,6 +1,6 @@
 apiVersion: v1
 name: ingress-nginx
-version: 2.13.0
+version: 2.14.0
 appVersion: 0.35.0
 home: https://github.com/kubernetes/ingress-nginx
 description: Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer
3 changes: 3 additions & 0 deletions charts/ingress-nginx/templates/controller-daemonset.yaml
@@ -6,6 +6,9 @@ metadata:
   labels:
     {{- include "ingress-nginx.labels" . | nindent 4 }}
     app.kubernetes.io/component: controller
+    {{- with .Values.controller.labels }}
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
   name: {{ include "ingress-nginx.controller.fullname" . }}
   {{- if .Values.controller.annotations }}
   annotations: {{ toYaml .Values.controller.annotations | nindent 4 }}
3 changes: 3 additions & 0 deletions charts/ingress-nginx/templates/controller-deployment.yaml
@@ -6,6 +6,9 @@ metadata:
   labels:
     {{- include "ingress-nginx.labels" . | nindent 4 }}
     app.kubernetes.io/component: controller
+    {{- with .Values.controller.labels }}
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
   name: {{ include "ingress-nginx.controller.fullname" . }}
   {{- if .Values.controller.annotations }}
   annotations: {{ toYaml .Values.controller.annotations | nindent 4 }}
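Note: the same three lines are added to both the DaemonSet and the Deployment templates. As a rough sketch of what they do, assuming the chart's standard label helper (ingress-nginx.labels) contributes the usual app.kubernetes.io/* pairs, a user-supplied controller.labels map is appended to the workload metadata:

# Hypothetical override file, e.g. my-values.yaml
controller:
  labels:
    keel.sh/policy: patch
    keel.sh/trigger: poll

# Sketch of the rendered metadata (helper labels abbreviated):
# metadata:
#   labels:
#     app.kubernetes.io/name: ingress-nginx
#     app.kubernetes.io/component: controller
#     keel.sh/policy: patch
#     keel.sh/trigger: poll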
34 changes: 29 additions & 5 deletions charts/ingress-nginx/values.yaml
@@ -135,6 +135,14 @@ controller:
   ## Annotations to be added to the controller Deployment or DaemonSet
   ##
   annotations: {}
+  # keel.sh/pollSchedule: "@every 60m"
+
+  ## Labels to be added to the controller Deployment or DaemonSet
+  ##
+  labels: {}
+  # keel.sh/policy: patch
+  # keel.sh/trigger: poll
+
 
   # The update strategy to apply to the Deployment or DaemonSet
   ##
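The commented values above are Keel hints: the annotation controls how often Keel polls the image registry, and the labels select the update policy and trigger. A minimal sketch of an override enabling them, assuming Keel is installed and watching this release:

controller:
  annotations:
    keel.sh/pollSchedule: "@every 60m"  # poll the registry hourly
  labels:
    keel.sh/policy: patch               # apply patch-level image updates only
    keel.sh/trigger: poll               # poll-based trigger instead of webhooks

With the template change above, these labels now actually reach the Deployment/DaemonSet metadata, which is what Keel inspects; previously only controller annotations were propagated.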
@@ -451,19 +459,35 @@ controller:
       # namespace: ""
       rules: []
       # # These are just examples rules, please adapt them to your needs
-      # - alert: TooMany500s
+      # - alert: NGINXConfigFailed
+      #   expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0
+      #   for: 1s
+      #   labels:
+      #     severity: critical
+      #   annotations:
+      #     description: bad ingress config - nginx config test failed
+      #     summary: uninstall the latest ingress changes to allow config reloads to resume
+      # - alert: NGINXCertificateExpiry
+      #   expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800
+      #   for: 1s
+      #   labels:
+      #     severity: critical
+      #   annotations:
+      #     description: ssl certificate(s) will expire in less then a week
+      #     summary: renew expiring certificates to avoid downtime
+      # - alert: NGINXTooMany500s
       #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
       #   for: 1m
       #   labels:
-      #     severity: critical
+      #     severity: warning
       #   annotations:
      #     description: Too many 5XXs
       #     summary: More than 5% of the all requests did return 5XX, this require your attention
-      # - alert: TooMany400s
+      # - alert: NGINXTooMany400s
       #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
       #   for: 1m
       #   labels:
-      #     severity: critical
+      #     severity: warning
       #   annotations:
       #     description: Too many 4XXs
       #     summary: More than 5% of the all requests did return 4XX, this require your attention
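These example rules only render when the PrometheusRule resource is enabled. A hedged sketch of an override that turns one of them on, assuming the surrounding controller.metrics.prometheusRule structure from this chart version (the 604800 in the expiry rule is 7 × 24 × 3600 seconds, i.e. one week):

controller:
  metrics:
    enabled: true                 # expose the controller metrics endpoint
    prometheusRule:
      enabled: true               # render a PrometheusRule for prometheus-operator
      rules:
        - alert: NGINXCertificateExpiry
          # fires when any host's certificate expires within a week (604800 s)
          expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800
          for: 1s
          labels:
            severity: critical
          annotations:
            description: ssl certificate(s) will expire in less than a week
            summary: renew expiring certificates to avoid downtime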
@@ -472,7 +496,7 @@ controller:
   ## With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds
   ## to 300, allowing the draining of connections up to five minutes.
   ## If the active connections end before that, the pod will terminate gracefully at that time.
-  ## To efectively take advantage of this feature, the Configmap feature
+  ## To effectively take advantage of this feature, the Configmap feature
   ## worker-shutdown-timeout new value is 240s instead of 10s.
   ##
   lifecycle:
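For context on the comment block above, a sketch of how the pieces fit together in values form; the /wait-shutdown preStop command and the exact defaults are assumptions based on this chart version's documented behavior, not part of this diff:

controller:
  # raised from 30 to 300 so connections may drain for up to five minutes
  terminationGracePeriodSeconds: 300
  config:
    # nginx workers get 240s (instead of the old 10s) to finish in-flight requests
    worker-shutdown-timeout: "240s"
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown   # assumption: shutdown helper shipped in the controller image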