{{ template "chart.header" . }}
{{ template "chart.description" . }}
{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}
[deployments]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
This chart provides a default deployment for a simple application that operates
in a [Deployment][deployments]. The chart automatically configures various
defaults for you, such as the Kubernetes [Horizontal Pod Autoscaler][hpa].
## Upgrade Notes
### 1.11.x -> 1.12.x
**NEW: Allow access from cross-cluster, in-mesh services**
Beginning with this version, if your app is on the mesh, we'll create
[AuthorizationPolicies](https://istio.io/latest/docs/reference/config/security/authorization-policy/) analogous to the already existing NetworkPolicies,
as they act as drop-in replacements in a multi-clustered, multi-primary setup.
`network.allowAll`, if set, will update your NetworkPolicies to allow
access from anywhere, including other services running in a different
cluster in a multi-cluster, multi-primary Istio environment.
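A minimal sketch, assuming `allowAll` is a simple boolean under the `network` key:

```yaml
# values.yaml -- sketch: open the generated policies to all sources
network:
  allowAll: true
```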
### 1.10.x -> 1.11.x
**NEW: Maintenance Mode and Custom HTTP Fault Injections**
`virtualService.fault` allows you to set custom [HTTP fault injections](https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection)
on the client side (such as delays or aborts) before proxying to the service.
`virtualService.maintenanceMode.enabled` will set a very specific fault that
aborts all requests with a 5xx (or whatever status is set at `httpStatus`) and
disables the scale-up/scale-down behavior of the HPA.
If `maintenanceMode` is enabled, `fault` must be `{}`. If there is a `fault`
configuration, then `maintenanceMode` must be disabled. Otherwise the chart
won't render.
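A minimal sketch of the two mutually exclusive configurations (the `httpStatus`
value is illustrative, and the `fault` body follows Istio's
`HTTPFaultInjection` schema):

```yaml
# values.yaml -- maintenance mode on; fault must stay empty
virtualService:
  maintenanceMode:
    enabled: true
    httpStatus: 503  # illustrative 5xx status
  fault: {}
```

Or, with a custom fault injection instead:

```yaml
# values.yaml -- custom fault; maintenance mode must stay disabled
virtualService:
  maintenanceMode:
    enabled: false
  fault:
    delay:
      percentage:
        value: 50    # delay half of all requests
      fixedDelay: 5s
```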
### 1.9.x -> 1.10.x
**NEW: Templated Termination Grace Period**
`terminationGracePeriodSeconds` now supports template variables. This allows
one to compute the termination grace period based on additional criteria.
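A minimal sketch (the `gracePeriodBase` value is hypothetical; any template
expression that renders to an integer should work):

```yaml
# values.yaml -- hypothetical: pad a base grace period by 30 seconds
gracePeriodBase: 60
terminationGracePeriodSeconds: '{{ add (int .Values.gracePeriodBase) 30 }}'
```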
### 1.8.x -> 1.9.x
`readinessProbe` is now only required when `virtualService.enabled` is `true`. This
provides the flexibility to use this chart for non-request-serving services.
### 1.6.x -> 1.7.x
**BREAKING: Istio Alerts have changed**
The Istio Alerts chart was updated to 4.0.0 which updates the alert on the 5XX
rate to only aggregate per service, rather than including the client source workload.
Additionally, it added an alert which will attempt to detect whether your
selector criteria are valid. This requires kube-state-metrics to be installed,
and it can be disabled via your values file if you do not wish to install
kube-state-metrics.
### 1.1.2 -> 1.2.x
**BREAKING: Istio Alerts have changed**
Review https://github.com/Nextdoor/k8s-charts/pull/231 carefully - the `5xx`
and `HighLatency` alarms have changed in makeup, and you may need to adjust the
thresholds for your application now.
### 1.1.1 -> 1.1.2
The `livenessProbe` and `readinessProbe` changes made in
https://github.com/Nextdoor/k8s-charts/pull/212 were invalid. The `1.1.2`
release fixes these checks. Going forward, `livenessProbe` is optional, but
`readinessProbe` is a required field.
### 1.0.x -> 1.1.x
**BREAKING: `.Values.virtualService.gateways` syntax changed**
Istio `Gateways` can live in any namespace - and it is [recommended by
Istio](https://istio.io/latest/docs/setup/additional-setup/gateway/#deploying-a-gateway)
to run the Gateways in a separate namespace from the Istio Control Plane. The
`.Values.virtualService.gateways` format now must include the namespace of the
[`Gateway`](https://istio.io/latest/docs/reference/config/networking/gateway/)
object. For example:
_Before_
```yaml
# values.yaml
virtualService:
  namespace: istio-system
  gateways:
    - internal
```
_After_
```yaml
# values.yaml
virtualService:
  gateways:
    - istio-system/internal
```
### 0.27.x -> 1.0.x
**BREAKING: You need to explicitly set the `livenessProbe` and `readinessProbe`**
In order to make the chart more flexible, we have removed the default values
for the `livenessProbe` and `readinessProbe` parameters. You must now
explicitly set them in your `values.yaml` file. If you do not, the chart will
fail to install.
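For example, using standard Kubernetes probe definitions (the endpoint paths
and port name are assumptions about your application):

```yaml
# values.yaml -- native Kubernetes probe specs
livenessProbe:
  httpGet:
    path: /healthz  # assumed liveness endpoint
    port: http
readinessProbe:
  httpGet:
    path: /ready    # assumed readiness endpoint
    port: http
```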
### 0.26.x -> 0.27.x
**NEW: Optional sidecar and init containers**
We have added the ability to define init and sidecar containers for your pod.
This can be helpful if your application requires bootstrapping or additional
applications to function. They can be added via the `initContainers` and
`extraContainers` parameters, respectively. It is important to note that these
containers are defined using native Kubernetes container definitions rather
than the templated scheme this chart provides.
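A sketch (container names, images, and commands are illustrative):

```yaml
# values.yaml -- raw Kubernetes container specs, not templated by the chart
initContainers:
  - name: wait-for-db              # illustrative bootstrap step
    image: busybox:1.36
    command: ["sh", "-c", "until nc -z db 5432; do sleep 2; done"]
extraContainers:
  - name: log-shipper              # illustrative sidecar
    image: fluent/fluent-bit:2.1.8
```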
### 0.25.x -> 0.26.x
**NEW: Optional Deployments and HPAs for Each Availability Zone!**
In certain cases it makes sense for an application to scale up independently in
each availability zone. This requirement often comes up when using "zone aware
routing" topologies where there is no guarantee that your service "clients" are
equally distributed across availability zones, and they may overrun the pods in
one of your zones.
In that case, you can now pass in an explicit list of availability zone strings
(e.g. `us-west-2a`) to the `.Values.deploymentZones` key. For each AZ supplied, a
dedicated `Deployment` and `HorizontalPodAutoscaler` will be created. In this
model, settings like `.Values.replicaCount` are applied to EACH of the zones.
_Warning: If you are transitioning to this model (or out of it), you will want to set
`.Values.deploymentZonesTransition: true` temporarily to ensure that both the
"zone-aware" and "zone-independent" Deployment resources are created. This
ensures there is no sudden scale-down of pods serving live traffic during the
transition period. You can come back later and flip this setting back to
`false` when you are done with the transition._
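For example (the zone names are illustrative and should match your cluster's
actual AZs):

```yaml
# values.yaml -- one Deployment + HPA per zone; replicaCount applies per zone
replicaCount: 2
deploymentZones:
  - us-west-2a
  - us-west-2b
deploymentZonesTransition: true  # temporary, while migrating in or out
```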
### 0.24.x -> 0.25.x
**NEW: Always create a `Service` Resource**
In order to make sure that the Istio Service Mesh can always determine
"locality" for client and server workloads, we _always_ create a `Service`
object now that is used by Istio to track the endpoints and determine their
locality. This `Service` may not expose any real ports to the rest of the
network, but is still critical for Istio.
**Switched `PodMonitor` to `ServiceMonitor`**
Because we are always creating a `Service` resource now, we've followed the
Prometheus Operator recommendations and switched to using a `ServiceMonitor`
object. The metrics stay the same, but `ServiceMonitor` is the resource type
the Prometheus Operator recommends when a `Service` exists.
### 0.23.x -> 0.24.x
**BREAKING: Rolled back to Values.prometheusRules**
The use of nested charts within nested charts is problematic, and we have
rolled it back. Please use `Values.prometheusRules` to configure alarms. We
will deprecate the `prometheus-alerts` chart.
### 0.22.x -> 0.23.x
**BREAKING: The HorizontalPodAutoscaler has been upgraded to `v2beta2`**
The `HorizontalPodAutoscaler` now uses the new `v2beta2` API, and a custom set
of `behaviors` has been implemented. See `.Values.autoscaling.behavior` for the
details.
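As a sketch, the overrides follow the native `v2beta2` `behavior` schema (the
values below are illustrative, not the chart defaults):

```yaml
# values.yaml -- illustrative v2beta2-style behavior override
autoscaling:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # wait 5m before scaling down
      policies:
        - type: Pods
          value: 1
          periodSeconds: 60            # drop at most one pod per minute
```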
**NEW: PrometheusRules are enabled by default!!**
Going forward, the
[`prometheus-alerts`](https://github.com/Nextdoor/k8s-charts/tree/main/charts/prometheus-alerts)
chart will be installed _by default_ for you and configured to monitor your
basic resources. If you want to disable it or reconfigure the alerts, the
configuration lives in the `.Values.alerts` key.
### 0.21.x -> 0.22.x
**BREAKING: If you do not set .Values.ports, then no VirtualService will be created**
In the past, the `.Values.virtualService.enabled` flag was the only control
used to determine whether or not to create the `VirtualService` resource. This
meant that you could accidentally create a `VirtualService` pointing to a
non-existent `Service` if your application exposes no ports (like a
"taskworker" type application).
Going forward, the chart will not create a `VirtualService` unless the
`Values.ports` array is populated as well. This links the logic for `Service`
and `VirtualService` creation together.
### 0.20.x -> 0.21.x
**BREAKING: Default behavior is to turn on the Istio Annotations/Labels**
We now default `.Values.istio.enabled` to `true` in the `values.yaml` file.
This was done because the vast majority of our applications operate within the
mesh, and this default behavior is simpler for most users. If your service is
_not_ running within the mesh, then you must set the value to `false`.
**BREAKING: ServiceMonitor has been replaced with PodMonitor**
We have replaced the behavior of creating a `ServiceMonitor` resource with a
`PodMonitor` resource. This is done because not all applications will use a
`Service` (in fact, the creation of the `Service` resource is optional), and
that can cause the monitoring to fail. `PodMonitor` resources will configure
Prometheus to monitor the pods regardless of whether or not they are part of a
Service.
**BREAKING: All .Values.serviceMonitor parameters moved to .Values.monitor**
We have condensed the Values a bit, so the entire `.Values.serviceMonitor` key
has been removed, and all of the parameters have been moved into
`.Values.monitor`. Make sure to update your values appropriately!
**BREAKING: Istio Injection is now explicitly controlled**
In previous versions of the chart, setting `.Values.istio.enabled=true/false`
only impacted whether or not certain labels and annotations were created... it
did not impact whether or not your pod actually got injected with the Istio
sidecar.
_As of this new release, setting `.Values.istio.enabled=true` will explicitly
add the `sidecar.istio.io/inject="true"` label to your pods, which will inject
them regardless of the namespace config. Alternatively, setting
`.Values.istio.enabled=false` will explicitly set
`sidecar.istio.io/inject="false"` which will prevent injection, regardless of
the namespace configuration!_
### 0.19.x -> 0.20.x
**Default Replica Count is now 2!**
In order to make sure that even our staging/development deployments have some
guarantees of uptime, the defaults for the chart have changed. We now set
`replicaCount: 2` and create a `podDisruptionBudget` by default. This ensures
that a developer needs to _intentionally_ disable these settings in order to
create a single-pod deployment.
**No longer setting `DD_ENV` by default**
The `DD_ENV` variable in a container will override the underlying host Datadog
Agent `env` tag. This should not be set by default, so we no longer do this. If
you explicitly set this, it will work ... but by default you should let the
underlying host define the environment in which your application is running.
### 0.18.x -> 0.19.x
**Automatic NodeSelectors**
By default the chart now sets the `kubernetes.io/os` and `kubernetes.io/arch`
values in the `nodeSelector` field for your pods! The default values are
targeted towards our most common deployment environments - `linux` on `amd64`
hosts. Pay close attention to the `targetOperatingSystem` and
`targetArchitecture` values to customize this behavior.
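A sketch using the value names above (shown targeting `arm64` instead of the
default):

```yaml
# values.yaml -- schedule pods onto linux/arm64 nodes
targetOperatingSystem: linux
targetArchitecture: arm64
```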
### 0.17.x -> 0.18.x
**New Feature: Secrets Management**
You can now manage `Secret` and `KMSSecret` Resources through `Values.secrets`.
See the [Secrets](#secrets) section below for details on how secrets work.
### 0.16.x -> 0.17.x
**New Feature: Customize User-Facing Ports**
You can now expose a custom port for your users (e.g. `80`) while your service
continues to listen on a private `containerPort` (e.g. `5000`). In the maps in
`.Values.ports`, simply add a `port: <int>` key and the `Service` resource
will be reconfigured to route that port to the backend container port.
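For example (the `name` and `protocol` keys are assumptions based on common
conventions in this values format):

```yaml
# values.yaml -- users connect on 80; the container listens on 5000
ports:
  - name: http
    port: 80
    containerPort: 5000
    protocol: TCP
```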
**Bug Fix: ServiceMonitor resources were broken**
Previously the `ServiceMonitor` resources were pointing to the `Service` but
the `Service` did not expose a `metrics` endpoint, which caused the resource to
be invalid. This has been fixed.
## Monitoring
This chart makes the assumption that you _do_ have a Prometheus-style
monitoring endpoint configured. See the `Values.monitor.portName`,
`Values.monitor.portNumber` and `Values.monitor.path` settings for informing
the chart of where your metrics are exposed.
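For example (the port name, number, and path are illustrative):

```yaml
# values.yaml -- where the chart should look for Prometheus metrics
monitor:
  portName: metrics
  portNumber: 9090
  path: /metrics
```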
If you are operating in an Istio Service Mesh, see the
[Istio](#istio-networking-support) section below for details on how monitoring
works. Otherwise, see the `Values.monitor` settings to configure a
Prometheus `ServiceMonitor` resource to monitor your application.
## Datadog Agent Support
This chart supports operating in environments where the Datadog Agent is
running. If you set the `Values.datadog.enabled` flag, then a series of
additional Pod Annotations, Labels, and Environment Variables will be
automatically added to your deployment. See the `Values.datadog` parameters
for further information.
## Istio Networking Support
### Monitoring through the Sidecar Proxy
[metrics_merging]: https://istio.io/latest/docs/ops/integrations/prometheus/#option-1-metrics-merging
When running your Pod within an Istio Mesh, access to the `metrics` endpoint
for your Pod can be obscured by the mesh itself, which sits in front of the
metrics port and may require that all clients come in through the mesh
natively. The simplest way around this is to use [Istio Metrics
Merging][metrics_merging] - where the Sidecar container is responsible for
scraping your application's `metrics` port, merging the metrics with its own,
and then Prometheus is configured to pull all of the metrics from the Sidecar.
There are several advantages to this model.
* It's much simpler - developers do not need to create `ServiceMonitor` or
`PodMonitor` resources because the Prometheus system is already configured to
discover all `istio-proxy` sidecar containers and collect their metrics.
* Your application is not exposed outside of the service mesh to anybody - the
`istio-proxy` sidecar handles that for you.
* There are fewer individual configurations for Prometheus, letting its
  configuration be simpler and lighter weight. It runs fewer "scrape" jobs,
  improving its overall performance.
This feature is turned on by default if you set `Values.istio.enabled=true` and
`Values.monitor.enabled=true`.
## Secrets
A `Secret`, `KMSSecret`, or `SealedSecret` resource will be created and mounted into the container
based upon the `Values.secrets` and `Values.secretsEngine` values being populated.
The `Secret` resource is generally used for local dev and/or CI tests.
`Secret` resources can be created by setting the following:
```yaml
secrets:
  FOO_BAR: my plaintext secret
secretsEngine: plaintext
```
Alternatively, `KMSSecret` can be generated using the following example:
```yaml
secrets:
  FOO_BAR: AQIA...
secretsEngine: kms
kmsSecretsRegion: us-west-2  # AWS region where the KMS key is located
```
Or, alternatively, `SealedSecret` can be generated using the following example:
```yaml
secrets:
  FOO_BAR: AQIA...
secretsEngine: sealed
```
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}
{{ template "helm-docs.versionFooter" . }}