diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 795b340ba33c3..785257142deae 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -14,9 +14,11 @@
Use the default base branch, “master”, if you're documenting existing
features in the English localization.
- If you're working on a different localization (not English), or you
- are documenting a feature that will be part of a future release, see
+ If you're working on a different localization (not English), see
https://kubernetes.io/docs/contribute/new-content/overview/#choose-which-git-branch-to-use
for advice.
+ If you're documenting a feature that will be part of a future release, see
+ https://kubernetes.io/docs/contribute/new-content/new-features/ for advice.
+
-->
diff --git a/Makefile b/Makefile
index 523775cd173fb..e8576459a4ff8 100644
--- a/Makefile
+++ b/Makefile
@@ -65,10 +65,10 @@ container-image:
--build-arg HUGO_VERSION=$(HUGO_VERSION)
container-build: module-check
- $(CONTAINER_RUN) $(CONTAINER_IMAGE) hugo --minify
+ $(CONTAINER_RUN) --read-only $(CONTAINER_IMAGE) hugo --minify
container-serve: module-check
- $(CONTAINER_RUN) --mount type=tmpfs,destination=/src/resources,tmpfs-mode=0777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0
+ $(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
test-examples:
scripts/test_examples.sh install
diff --git a/config.toml b/config.toml
index 943363c9f153b..bf24f39d28f8a 100644
--- a/config.toml
+++ b/config.toml
@@ -33,6 +33,23 @@ enableGitInfo = true
# Hindi is disabled because it's currently in development.
disableLanguages = ["hi", "no"]
+[caches]
+ [caches.assets]
+ dir = ":cacheDir/_gen"
+ maxAge = -1
+ [caches.getcsv]
+ dir = ":cacheDir/:project"
+ maxAge = "60s"
+ [caches.getjson]
+ dir = ":cacheDir/:project"
+ maxAge = "60s"
+ [caches.images]
+ dir = ":cacheDir/_images"
+ maxAge = -1
+ [caches.modules]
+ dir = ":cacheDir/modules"
+ maxAge = -1
+
[markup]
[markup.goldmark]
[markup.goldmark.extensions]
@@ -66,6 +83,10 @@ date = ["date", ":filename", "publishDate", "lastmod"]
[permalinks]
blog = "/:section/:year/:month/:day/:slug/"
+[sitemap]
+ filename = "sitemap.xml"
+ priority = 0.75
+
# Be explicit about the output formats. We (currently) only want an RSS feed for the home page.
[outputs]
home = [ "HTML", "RSS", "HEADERS" ]
diff --git a/content/de/docs/tasks/tools/install-kubectl.md b/content/de/docs/tasks/tools/install-kubectl.md
index d7fb7aa759a49..2354fad25f5a7 100644
--- a/content/de/docs/tasks/tools/install-kubectl.md
+++ b/content/de/docs/tasks/tools/install-kubectl.md
@@ -334,7 +334,7 @@ Sie müssen nun sicherstellen, dass das kubectl-Abschlussskript in allen Ihren S
```
{{< note >}}
-bash-completion bezieht alle Verfollständigungsskripte aus `/etc/bash_completion.d`.
+bash-completion bezieht alle Vervollständigungsskripte aus `/etc/bash_completion.d`.
{{< /note >}}
Beide Ansätze sind gleichwertig. Nach dem erneuten Laden der Shell sollte kubectl autocompletion funktionieren.
diff --git a/content/en/_index.html b/content/en/_index.html
index 452c7ea325f28..c2ec627065fde 100644
--- a/content/en/_index.html
+++ b/content/en/_index.html
@@ -2,6 +2,8 @@
title: "Production-Grade Container Orchestration"
abstract: "Automated container deployment, scaling, and management"
cid: home
+sitemap:
+ priority: 1.0
---
{{< blocks/section id="oceanNodes" >}}
diff --git a/content/en/blog/_posts/2020-10-12-steering-committee-results.md b/content/en/blog/_posts/2020-10-12-steering-committee-results.md
new file mode 100644
index 0000000000000..2acbb6d6e4a7f
--- /dev/null
+++ b/content/en/blog/_posts/2020-10-12-steering-committee-results.md
@@ -0,0 +1,43 @@
+---
+layout: blog
+title: "Announcing the 2020 Steering Committee Election Results"
+date: 2020-10-12
+slug: steering-committee-results-2020
+---
+
+**Author**: Kaslin Fields
+
+The [2020 Steering Committee Election](https://github.com/kubernetes/community/tree/master/events/elections/2020) is now complete. In 2019, the committee arrived at its final allocation of 7 seats, 3 of which were up for election in 2020. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.
+
+This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their [charter](https://github.com/kubernetes/steering/blob/master/charter.md).
+
+## Results
+
+Congratulations to the elected committee members whose two year terms begin immediately (listed in alphabetical order by GitHub handle):
+
+* **Davanum Srinivas ([@dims](https://github.com/dims)), VMware**
+* **Jordan Liggitt ([@liggitt](https://github.com/liggitt)), Google**
+* **Bob Killen ([@mrbobbytables](https://github.com/mrbobbytables)), Google**
+
+They join continuing members Christoph Blecker ([@cblecker](https://github.com/cblecker)), Red Hat; Derek Carr ([@derekwaynecarr](https://github.com/derekwaynecarr)), Red Hat; Nikhita Raghunath ([@nikhita](https://github.com/nikhita)), VMware; and Paris Pittman ([@parispittman](https://github.com/parispittman)), Apple. Davanum Srinivas is returning for his second term on the committee.
+
+## Big Thanks!
+
+* Thank you and congratulations on a successful election to this round’s election officers:
+ * Jaice Singer DuMars ([@jdumars](https://github.com/jdumars)), Apple
+ * Ihor Dvoretskyi ([@idvoretskyi](https://github.com/idvoretskyi)), CNCF
+ * Josh Berkus ([@jberkus](https://github.com/jberkus)), Red Hat
+* Thanks to the Emeritus Steering Committee Members. Your prior service is appreciated by the community:
+ * Aaron Crickenberger ([@spiffxp](https://github.com/spiffxp)), Google
+ * and Lachlan Evenson ([@lachie8e](https://github.com/lachie8e)), Microsoft
+* And thank you to all the candidates who came forward to run for election. As [Jorge Castro put it](https://twitter.com/castrojo/status/1315718627639820288?s=20): we are spoiled with capable, kind, and selfless volunteers who put the needs of the project first.
+
+## Get Involved with the Steering Committee
+
+This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee [backlog items](https://github.com/kubernetes/steering/projects/1) and weigh in by filing an issue or creating a PR against their [repo](https://github.com/kubernetes/steering). They have an open meeting on [the first Monday of the month at 6pm UTC](https://github.com/kubernetes/steering) and regularly attend Meet Our Contributors. They can also be contacted at their public mailing list steering@kubernetes.io.
+
+You can see what the Steering Committee meetings are all about by watching past meetings on the [YouTube Playlist](https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM).
+
+----
+
+_This post was written by the [Upstream Marketing Working Group](https://github.com/kubernetes/community/tree/master/communication/marketing-team#contributor-marketing). If you want to write stories about the Kubernetes community, learn more about us._
diff --git a/content/en/docs/_index.md b/content/en/docs/_index.md
index e06ebf76a5fea..dc42c2d1581b3 100644
--- a/content/en/docs/_index.md
+++ b/content/en/docs/_index.md
@@ -1,4 +1,6 @@
---
linktitle: Kubernetes Documentation
title: Documentation
+sitemap:
+ priority: 1.0
---
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index b1e1f0dc24827..180b4ca9c46e7 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -261,7 +261,7 @@ a Lease object.
#### Reliability
- In most cases, node controller limits the eviction rate to
+ In most cases, the node controller limits the eviction rate to
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
from more than 1 node per 10 seconds.
diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md
index efbef476bca48..d6abd186b9206 100644
--- a/content/en/docs/concepts/configuration/configmap.md
+++ b/content/en/docs/concepts/configuration/configmap.md
@@ -115,7 +115,8 @@ metadata:
spec:
containers:
- name: demo
- image: game.example/demo-game
+ image: alpine
+ command: ["sleep", "3600"]
env:
# Define the environment variable
- name: PLAYER_INITIAL_LIVES # Notice that the case is different here
diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md
index 9eb31cb91575e..d267b83dd2ccf 100644
--- a/content/en/docs/concepts/configuration/manage-resources-containers.md
+++ b/content/en/docs/concepts/configuration/manage-resources-containers.md
@@ -47,6 +47,13 @@ Limits can be implemented either reactively (the system intervenes once it sees
or by enforcement (the system prevents the container from ever exceeding the limit). Different
runtimes can have different ways to implement the same restrictions.
+{{< note >}}
+If a Container specifies its own memory limit, but does not specify a memory request, Kubernetes
+automatically assigns a memory request that matches the limit. Similarly, if a Container specifies its own
+CPU limit, but does not specify a CPU request, Kubernetes automatically assigns a CPU request that matches
+the limit.
+{{< /note >}}
+
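+For example, here is a sketch of a Pod that sets only limits (the Pod name and
+image are placeholders; any image would do). Because no requests are specified,
+Kubernetes assigns requests equal to the limits:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: limits-only-demo      # placeholder name
+spec:
+  containers:
+  - name: app
+    image: nginx              # placeholder image
+    resources:
+      limits:
+        memory: "200Mi"
+        cpu: "500m"
+      # No requests block: the effective requests become
+      # memory: 200Mi and cpu: 500m, matching the limits above.
+```
+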
## Resource types
*CPU* and *memory* are each a *resource type*. A resource type has a base unit.
diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md
index 09d5530a29244..fa74d94221cef 100644
--- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md
+++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md
@@ -38,7 +38,7 @@ No parameters are passed to the handler.
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
It is blocking, meaning it is synchronous,
-so it must complete before the call to delete the container can be sent.
+so it must complete before the signal to stop the container can be sent.
No parameters are passed to the handler.
A more detailed description of the termination behavior can be found in
@@ -56,7 +56,8 @@ Resources consumed by the command are counted against the Container.
### Hook handler execution
When a Container lifecycle management hook is called,
-the Kubernetes management system executes the handler in the Container registered for that hook.
+the Kubernetes management system executes the handler according to the hook action:
+`exec` and `tcpSocket` handlers are executed in the container, and `httpGet` is executed by the kubelet process.
Hook handler calls are synchronous within the context of the Pod containing the Container.
This means that for a `PostStart` hook,
@@ -64,10 +65,21 @@ the Container ENTRYPOINT and hook fire asynchronously.
However, if the hook takes too long to run or hangs,
the Container cannot reach a `running` state.
-The behavior is similar for a `PreStop` hook.
-If the hook hangs during execution,
-the Pod phase stays in a `Terminating` state and is killed after `terminationGracePeriodSeconds` of pod ends.
-If a `PostStart` or `PreStop` hook fails,
+`PreStop` hooks are not executed asynchronously from the signal
+to stop the Container; the hook must complete its execution before
+the signal can be sent.
+If a `PreStop` hook hangs during execution,
+the Pod's phase will be `Terminating` and remain there until the Pod is
+killed after its `terminationGracePeriodSeconds` expires.
+This grace period applies to the total time it takes for both
+the `PreStop` hook to execute and for the Container to stop normally.
+If, for example, `terminationGracePeriodSeconds` is 60, and the hook
+takes 55 seconds to complete, and the Container takes 10 seconds to stop
+normally after receiving the signal, then the Container will be killed
+before it can stop normally, since `terminationGracePeriodSeconds` is
+less than the total time (55+10) it takes for these two things to happen.
+
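+Here is a sketch of a Pod that combines a `PreStop` hook with a grace period;
+the name, image, and hook command are placeholders used only to illustrate the
+timing described above:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: prestop-demo          # placeholder name
+spec:
+  terminationGracePeriodSeconds: 60
+  containers:
+  - name: app
+    image: nginx              # placeholder image
+    lifecycle:
+      preStop:
+        exec:
+          # If this hook plus the normal shutdown takes longer than 60 seconds
+          # in total, the container is killed before it can stop normally.
+          command: ["/bin/sh", "-c", "sleep 55"]
+```
+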
+If either a `PostStart` or `PreStop` hook fails,
it kills the Container.
Users should make their hook handlers as lightweight as possible.
@@ -121,4 +133,3 @@ Events:
* Get hands-on experience
[attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
-
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
index b32bce83dd1f4..0384754e35521 100644
--- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
+++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
@@ -11,21 +11,17 @@ weight: 10
-{{< feature-state state="alpha" >}}
-{{< caution >}}Alpha features can change rapidly. {{< /caution >}}
-
Network plugins in Kubernetes come in a few flavors:
-* CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
+* CNI plugins: adhere to the [Container Network Interface](https://github.com/containernetworking/cni) (CNI) specification, designed for interoperability.
+ * Kubernetes follows the [v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) release of the CNI specification.
* Kubenet plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins
-
-
## Installation
-The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as rkt manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
+The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
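+
+For illustration only (the plugin directory shown is the conventional default,
+and every other flag a real kubelet needs is omitted), selecting a CNI plugin
+looks like:
+
+```shell
+kubelet --network-plugin=cni --cni-bin-dir=/opt/cni/bin
+```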
@@ -166,9 +162,4 @@ This option is provided to the network-plugin; currently **only kubenet supports
* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
* `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin.
-
-
## {{% heading "whatsnext" %}}
-
-
-
diff --git a/content/en/docs/concepts/overview/_index.md b/content/en/docs/concepts/overview/_index.md
index a52c47044685f..fb6351ec67756 100755
--- a/content/en/docs/concepts/overview/_index.md
+++ b/content/en/docs/concepts/overview/_index.md
@@ -2,4 +2,6 @@
title: "Overview"
weight: 20
description: Get a high-level outline of Kubernetes and the components it is built from.
+sitemap:
+ priority: 0.9
---
diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md
index 4338c932d5e47..7580287fb3428 100644
--- a/content/en/docs/concepts/overview/kubernetes-api.md
+++ b/content/en/docs/concepts/overview/kubernetes-api.md
@@ -41,6 +41,7 @@ The Kubernetes API server serves an OpenAPI spec via the `/openapi/v2` endpoint.
You can request the response format using request headers as follows:
+
Valid request header values for OpenAPI v2 queries
Header
@@ -68,7 +69,6 @@ You can request the response format using request headers as follows:
serves application/json
-
Valid request header values for OpenAPI v2 queries
Kubernetes implements an alternative Protobuf based serialization format that
@@ -102,13 +102,22 @@ to ensure that the API presents a clear, consistent view of system resources
and behavior, and to enable controlling access to end-of-life and/or
experimental APIs.
-Refer to [API versions reference](/docs/reference/using-api/api-overview/#api-versioning)
-for more details on the API version level definitions.
-
To make it easier to evolve and to extend its API, Kubernetes implements
[API groups](/docs/reference/using-api/api-overview/#api-groups) that can be
[enabled or disabled](/docs/reference/using-api/api-overview/#enabling-or-disabling).
+API resources are distinguished by their API group, resource type, namespace
+(for namespaced resources), and name. The API server may serve the same
+underlying data through multiple API versions and handle the conversion between
+API versions transparently. All these different versions are actually
+representations of the same resource. For example, suppose there are two
+versions `v1` and `v1beta1` for the same resource. An object created by the
+`v1beta1` version can then be read, updated, and deleted by either the
+`v1beta1` or the `v1` versions.
+
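+For example (a sketch only; the Ingress name is a placeholder, and this assumes
+a cluster that serves both versions of the resource), the same object can be
+read through either API version:
+
+```shell
+kubectl get ingresses.v1beta1.networking.k8s.io example-ingress -o yaml
+kubectl get ingresses.v1.networking.k8s.io example-ingress -o yaml
+```
+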
+Refer to [API versions reference](/docs/reference/using-api/api-overview/#api-versioning)
+for more details on the API version level definitions.
+
## API Extension
The Kubernetes API can be extended in one of two ways:
diff --git a/content/en/docs/concepts/overview/what-is-kubernetes.md b/content/en/docs/concepts/overview/what-is-kubernetes.md
index 418ee3d64481b..6df252ede2b9b 100644
--- a/content/en/docs/concepts/overview/what-is-kubernetes.md
+++ b/content/en/docs/concepts/overview/what-is-kubernetes.md
@@ -10,6 +10,8 @@ weight: 10
card:
name: concepts
weight: 10
+sitemap:
+ priority: 0.9
---
diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md
index 004c18ad2cd2f..f078cb86360d8 100644
--- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md
+++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md
@@ -28,9 +28,6 @@ resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)).
-In future versions of Kubernetes, objects in the same namespace will have the same
-access control policies by default.
-
It is not necessary to use multiple namespaces just to separate slightly different
resources, such as different versions of the same software: use
[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish
diff --git a/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md
index 9d6ab31889cdd..66ec279ad8495 100644
--- a/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md
+++ b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md
@@ -20,7 +20,7 @@ The kube-scheduler can be configured to enable bin packing of resources along wi
## Enabling Bin Packing using RequestedToCapacityRatioResourceAllocation
-Before Kubernetes 1.15, Kube-scheduler used to allow scoring nodes based on the request to capacity ratio of primary resources like CPU and Memory. Kubernetes 1.16 added a new parameter to the priority function that allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters improves the utilization of scarce resources in large clusters. The behavior of the `RequestedToCapacityRatioResourceAllocation` priority function can be controlled by a configuration option called `requestedToCapacityRatioArguments`. This argument consists of two parameters `shape` and `resources`. Shape allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. Resources
+Before Kubernetes 1.15, Kube-scheduler used to allow scoring nodes based on the request to capacity ratio of primary resources like CPU and Memory. Kubernetes 1.16 added a new parameter to the priority function that allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters and improves the utilization of scarce resources in large clusters. The behavior of the `RequestedToCapacityRatioResourceAllocation` priority function can be controlled by a configuration option called `requestedToCapacityRatioArguments`. This argument consists of two parameters `shape` and `resources`. Shape allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. Resources
consists of `name` which specifies the resource to be considered during scoring and `weight` specify the weight of each resource.
Below is an example configuration that sets `requestedToCapacityRatioArguments` to bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md
index a4b821104334c..1a55acf3d81bf 100644
--- a/content/en/docs/concepts/services-networking/dns-pod-service.md
+++ b/content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -181,7 +181,7 @@ When you set `setHostnameAsFQDN: true` in the Pod spec, the kubelet writes the P
{{< note >}}
In Linux, the hostname field of the kernel (the `nodename` field of `struct utsname`) is limited to 64 characters.
-If a Pod enables this feature and its FQDN is longer than 64 character, it will fail to start. The Pod will remain in `Pending` status (`ContainerCreating` as seen by `kubectl`) generating error events, such as Failed to construct FQDN from pod hostname and cluster domain, FQDN `long-FDQN` is too long (64 characters is the max, 70 characters requested). One way of improving user experience for this scenario is to create an [admission webhook controller](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) to control FQDN size when users create top level objects, for example, Deployment.
+If a Pod enables this feature and its FQDN is longer than 64 characters, it will fail to start. The Pod will remain in `Pending` status (`ContainerCreating` as seen by `kubectl`) generating error events, such as Failed to construct FQDN from pod hostname and cluster domain, FQDN `long-FQDN` is too long (64 characters is the max, 70 characters requested). One way of improving user experience for this scenario is to create an [admission webhook controller](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) to control FQDN size when users create top level objects, for example, Deployment.
{{< /note >}}
### Pod's DNS Policy
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index ded1451d801f9..83b7850364d98 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -881,6 +881,10 @@ There are other annotations to manage Classic Elastic Load Balancers that are de
# health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
# value. Defaults to 5, must be between 2 and 60
+ service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
+ # A list of existing security groups to be added to the created ELB. Unlike the annotation
+ # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other security groups previously assigned to the ELB.
+
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
# A list of additional security groups to be added to the ELB
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index b73f78712ecbd..3eedc64d57ff8 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -13,7 +13,7 @@ weight: 10
-A _Deployment_ provides declarative updates for {{< glossary_tooltip text="Pods" term_id="pod" >}}
+A _Deployment_ provides declarative updates for {{< glossary_tooltip text="Pods" term_id="pod" >}} and
{{< glossary_tooltip term_id="replica-set" text="ReplicaSets" >}}.
You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_tooltip term_id="controller" >}} changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
@@ -102,7 +102,7 @@ Follow the steps given below to create the above Deployment:
The output is similar to:
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
- deployment.apps/nginx-deployment successfully rolled out
+ deployment "nginx-deployment" successfully rolled out
```
4. Run the `kubectl get deployments` again a few seconds later.
@@ -205,7 +205,7 @@ Follow the steps given below to update your Deployment:
```
or
```
- deployment.apps/nginx-deployment successfully rolled out
+ deployment "nginx-deployment" successfully rolled out
```
Get more details on your updated Deployment:
@@ -857,7 +857,7 @@ kubectl rollout status deployment.v1.apps/nginx-deployment
The output is similar to this:
```
Waiting for rollout to finish: 2 of 3 updated replicas are available...
-deployment.apps/nginx-deployment successfully rolled out
+deployment "nginx-deployment" successfully rolled out
```
and the exit status from `kubectl rollout` is 0 (success):
```shell
diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
index 123eae830ea50..f0ac6e654be7d 100644
--- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
+++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -13,7 +13,8 @@ of its primary containers starts OK, and then through either the `Succeeded` or
Whilst a Pod is running, the kubelet is able to restart containers to handle some
kind of faults. Within a Pod, Kubernetes tracks different container
-[states](#container-states) and handles
+[states](#container-states) and determines what action to take to make the Pod
+healthy again.
In the Kubernetes API, Pods have both a specification and an actual status. The
status for a Pod object consists of a set of [Pod conditions](#pod-conditions).
@@ -32,7 +33,7 @@ Like individual application containers, Pods are considered to be relatively
ephemeral (rather than durable) entities. Pods are created, assigned a unique
ID ([UID](/docs/concepts/overview/working-with-objects/names/#uids)), and scheduled
to nodes where they remain until termination (according to restart policy) or
-deletion.
+deletion.
If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that node
are [scheduled for deletion](#pod-garbage-collection) after a timeout period.
@@ -140,9 +141,8 @@ and Never. The default value is Always.
The `restartPolicy` applies to all containers in the Pod. `restartPolicy` only
refers to restarts of the containers by the kubelet on the same node. After containers
in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s,
-40s, …), that is capped at five minutes. Once a container has executed with no problems
-for 10 minutes without any problems, the kubelet resets the restart backoff timer for
-that container.
+40s, …), that is capped at five minutes. Once a container has executed for 10 minutes
+without any problems, the kubelet resets the restart backoff timer for that container.
## Pod conditions
diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md
index 8616f77afb5e6..fb068856b67de 100644
--- a/content/en/docs/contribute/_index.md
+++ b/content/en/docs/contribute/_index.md
@@ -13,6 +13,13 @@ card:
+*Kubernetes welcomes improvements from all contributors, new and experienced!*
+
+{{< note >}}
+To learn more about contributing to Kubernetes in general, see the
+[contributor documentation](https://www.kubernetes.dev/docs/).
+{{< /note >}}
+
This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs).
Kubernetes documentation contributors:
@@ -22,8 +29,6 @@ Kubernetes documentation contributors:
- Translate the documentation
- Manage and publish the documentation parts of the Kubernetes release cycle
-Kubernetes documentation welcomes improvements from all contributors, new and experienced!
-
## Getting started
diff --git a/content/en/docs/contribute/generate-ref-docs/kubectl.md b/content/en/docs/contribute/generate-ref-docs/kubectl.md
index 80552144dd24c..b216a0a5b7bb0 100644
--- a/content/en/docs/contribute/generate-ref-docs/kubectl.md
+++ b/content/en/docs/contribute/generate-ref-docs/kubectl.md
@@ -230,11 +230,9 @@ Build the Kubernetes documentation in your local ``.
```shell
cd
-make docker-serve
+git submodule update --init --recursive --depth 1 # if not already done
+make container-serve
```
-{{< note >}}
-The use of `make docker-serve` is deprecated. Please use `make container-serve` instead.
-{{< /note >}}
View the [local preview](https://localhost:1313/docs/reference/generated/kubectl/kubectl-commands/).
diff --git a/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md b/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md
index f2ec01d8e84c2..251dfe2efed4f 100644
--- a/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md
+++ b/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md
@@ -182,13 +182,10 @@ Verify the [local preview](http://localhost:1313/docs/reference/generated/kubern
```shell
cd
-make docker-serve
+git submodule update --init --recursive --depth 1 # if not already done
+make container-serve
```
-{{< note >}}
-The use of `make docker-serve` is deprecated. Please use `make container-serve` instead.
-{{< /note >}}
-
## Commit the changes
In `` run `git add` and `git commit` to commit the change.
diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md
index 74c4f8e091468..5e91d86ee8932 100644
--- a/content/en/docs/contribute/localization.md
+++ b/content/en/docs/contribute/localization.md
@@ -73,7 +73,9 @@ For an example of adding a label, see the PR for adding the [Italian language la
### Find community
-Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/). Other localization teams are happy to help you get started and answer any questions you have.
+Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/sig-docs) and the [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations). Other localization teams are happy to help you get started and answer any questions you have.
+
+Please also consider participating in the [SIG Docs Localization Subgroup meeting](https://github.com/kubernetes/community/tree/master/sig-docs). The mission of the SIG Docs localization subgroup is to work across the SIG Docs localization teams to collaborate on defining and documenting the processes for creating localized contribution guides. In addition, the SIG Docs localization subgroup will look for opportunities for the creation and sharing of common tools across localization teams and also serve to identify new requirements to the SIG Docs Leadership team. If you have questions about this meeting, please inquire on the [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations).
You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding a channel for Persian](https://github.com/kubernetes/community/pull/4980).
diff --git a/content/en/docs/contribute/new-content/new-features.md b/content/en/docs/contribute/new-content/new-features.md
index 54db84da8f6c8..98823185ff0b4 100644
--- a/content/en/docs/contribute/new-content/new-features.md
+++ b/content/en/docs/contribute/new-content/new-features.md
@@ -98,7 +98,8 @@ deadlines.
1. Open a pull request against the
`dev-{{< skew nextMinorVersion >}}` branch in the `kubernetes/website` repository, with a small
commit that you will amend later.
-2. Use the Prow command `/milestone {{< skew nextMinorVersion >}}` to
+2. Edit the pull request description to include links to `k/k` PR(s) and `k/enhancement` issue(s).
+3. Use the Prow command `/milestone {{< skew nextMinorVersion >}}` to
assign the PR to the relevant milestone. This alerts the docs person managing
this release that the feature docs are coming.
diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md
index efdc9026aa8ea..a97dca823f17e 100644
--- a/content/en/docs/reference/access-authn-authz/authentication.md
+++ b/content/en/docs/reference/access-authn-authz/authentication.md
@@ -414,6 +414,8 @@ Webhook authentication is a hook for verifying bearer tokens.
* `--authentication-token-webhook-config-file` a configuration file describing how to access the remote webhook service.
* `--authentication-token-webhook-cache-ttl` how long to cache authentication decisions. Defaults to two minutes.
+* `--authentication-token-webhook-version` determines whether to use `authentication.k8s.io/v1beta1` or `authentication.k8s.io/v1`
+ `TokenReview` objects to send/receive information from the webhook. Defaults to `v1beta1`.
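+
+Putting these flags together (a sketch only: the kubeconfig path is a placeholder
+and the many other flags a real kube-apiserver needs are omitted), an API server
+that opts into `v1` token reviews might be started with:
+
+```shell
+kube-apiserver \
+  --authentication-token-webhook-config-file=/etc/kubernetes/webhook-authn.kubeconfig \
+  --authentication-token-webhook-cache-ttl=2m \
+  --authentication-token-webhook-version=v1
+```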
The configuration file uses the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
file format. Within the file, `clusters` refers to the remote service and
@@ -447,72 +449,167 @@ contexts:
name: webhook
```
-When a client attempts to authenticate with the API server using a bearer token
-as discussed [above](#putting-a-bearer-token-in-a-request),
-the authentication webhook POSTs a JSON-serialized `authentication.k8s.io/v1beta1` `TokenReview` object containing the token
-to the remote service. Kubernetes will not challenge a request that lacks such a header.
+When a client attempts to authenticate with the API server using a bearer token as discussed [above](#putting-a-bearer-token-in-a-request),
+the authentication webhook POSTs a JSON-serialized `TokenReview` object containing the token to the remote service.
-Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/)
-as other Kubernetes API objects. Implementers should be aware of looser
-compatibility promises for beta objects and check the "apiVersion" field of the
-request to ensure correct deserialization. Additionally, the API server must
-enable the `authentication.k8s.io/v1beta1` API extensions group (`--runtime-config=authentication.k8s.io/v1beta1=true`).
+Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/) as other Kubernetes API objects.
+Implementers should check the `apiVersion` field of the request to ensure correct deserialization,
+and **must** respond with a `TokenReview` object of the same version as the request.
-The POST body will be of the following format:
+{{< tabs name="TokenReview_request" >}}
+{{% tab name="authentication.k8s.io/v1" %}}
+{{< note >}}
+The Kubernetes API server defaults to sending `authentication.k8s.io/v1beta1` token reviews for backwards compatibility.
+To opt into receiving `authentication.k8s.io/v1` token reviews, the API server must be started with `--authentication-token-webhook-version=v1`.
+{{< /note >}}
-```json
+```yaml
+{
+ "apiVersion": "authentication.k8s.io/v1",
+ "kind": "TokenReview",
+ "spec": {
+ # Opaque bearer token sent to the API server
+ "token": "014fbff9a07c...",
+
+ # Optional list of the audience identifiers for the server the token was presented to.
+ # Audience-aware token authenticators (for example, OIDC token authenticators)
+ # should verify the token was intended for at least one of the audiences in this list,
+ # and return the intersection of this list and the valid audiences for the token in the response status.
+ # This ensures the token is valid to authenticate to the server it was presented to.
+ # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.
+ "audiences": ["https://myserver.example.com", "https://myserver.internal.example.com"]
+ }
+}
+```
+{{% /tab %}}
+{{% tab name="authentication.k8s.io/v1beta1" %}}
+```yaml
{
"apiVersion": "authentication.k8s.io/v1beta1",
"kind": "TokenReview",
"spec": {
- "token": "(BEARERTOKEN)"
+ # Opaque bearer token sent to the API server
+ "token": "014fbff9a07c...",
+
+ # Optional list of the audience identifiers for the server the token was presented to.
+ # Audience-aware token authenticators (for example, OIDC token authenticators)
+ # should verify the token was intended for at least one of the audiences in this list,
+ # and return the intersection of this list and the valid audiences for the token in the response status.
+ # This ensures the token is valid to authenticate to the server it was presented to.
+ # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.
+ "audiences": ["https://myserver.example.com", "https://myserver.internal.example.com"]
}
}
```
+{{% /tab %}}
+{{< /tabs >}}
-The remote service is expected to fill the `status` field of
-the request to indicate the success of the login. The response body's `spec`
-field is ignored and may be omitted. A successful validation of the bearer
-token would return:
+The remote service is expected to fill the `status` field of the request to indicate the success of the login.
+The response body's `spec` field is ignored and may be omitted.
+The remote service must return a response using the same `TokenReview` API version that it received.
+A successful validation of the bearer token would return:
-```json
+{{< tabs name="TokenReview_response_success" >}}
+{{% tab name="authentication.k8s.io/v1" %}}
+```yaml
+{
+ "apiVersion": "authentication.k8s.io/v1",
+ "kind": "TokenReview",
+ "status": {
+ "authenticated": true,
+ "user": {
+ # Required
+ "username": "janedoe@example.com",
+ # Optional
+ "uid": "42",
+ # Optional group memberships
+ "groups": ["developers", "qa"],
+ # Optional additional information provided by the authenticator.
+ # This should not contain confidential data, as it can be recorded in logs
+ # or API objects, and is made available to admission webhooks.
+ "extra": {
+ "extrafield1": [
+ "extravalue1",
+ "extravalue2"
+ ]
+ }
+ },
+ # Optional list audience-aware token authenticators can return,
+ # containing the audiences from the `spec.audiences` list for which the provided token was valid.
+ # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.
+ "audiences": ["https://myserver.example.com"]
+ }
+}
+```
+{{% /tab %}}
+{{% tab name="authentication.k8s.io/v1beta1" %}}
+```yaml
{
"apiVersion": "authentication.k8s.io/v1beta1",
"kind": "TokenReview",
"status": {
"authenticated": true,
"user": {
+ # Required
"username": "janedoe@example.com",
+ # Optional
"uid": "42",
- "groups": [
- "developers",
- "qa"
- ],
+ # Optional group memberships
+ "groups": ["developers", "qa"],
+ # Optional additional information provided by the authenticator.
+ # This should not contain confidential data, as it can be recorded in logs
+ # or API objects, and is made available to admission webhooks.
"extra": {
"extrafield1": [
"extravalue1",
"extravalue2"
]
}
- }
+ },
+ # Optional list audience-aware token authenticators can return,
+ # containing the audiences from the `spec.audiences` list for which the provided token was valid.
+ # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.
+ "audiences": ["https://myserver.example.com"]
}
}
```
+{{% /tab %}}
+{{< /tabs >}}
An unsuccessful request would return:
-```json
+{{< tabs name="TokenReview_response_error" >}}
+{{% tab name="authentication.k8s.io/v1" %}}
+```yaml
+{
+ "apiVersion": "authentication.k8s.io/v1",
+ "kind": "TokenReview",
+ "status": {
+ "authenticated": false,
+ # Optionally include details about why authentication failed.
+ # If no error is provided, the API will return a generic Unauthorized message.
+ # The error field is ignored when authenticated=true.
+ "error": "Credentials are expired"
+ }
+}
+```
+{{% /tab %}}
+{{% tab name="authentication.k8s.io/v1beta1" %}}
+```yaml
{
"apiVersion": "authentication.k8s.io/v1beta1",
"kind": "TokenReview",
"status": {
- "authenticated": false
+ "authenticated": false,
+ # Optionally include details about why authentication failed.
+ # If no error is provided, the API will return a generic Unauthorized message.
+ # The error field is ignored when authenticated=true.
+ "error": "Credentials are expired"
}
}
```
-
-HTTP status codes can be used to supply additional error context.
-
+{{% /tab %}}
+{{< /tabs >}}
### Authenticating Proxy
diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md
index db668f818acda..7a251726fbc1b 100644
--- a/content/en/docs/reference/access-authn-authz/authorization.md
+++ b/content/en/docs/reference/access-authn-authz/authorization.md
@@ -52,7 +52,7 @@ Kubernetes reviews only the following API request attributes:
* **Resource** - The ID or name of the resource that is being accessed (for resource requests only) -- For resource requests using `get`, `update`, `patch`, and `delete` verbs, you must provide the resource name.
* **Subresource** - The subresource that is being accessed (for resource requests only).
* **Namespace** - The namespace of the object that is being accessed (for namespaced resource requests only).
- * **API group** - The {{< glossary_tooltip text="API Group" term_id="api-group" >}} being accessed (for resource requests only). An empty string designates the [core API group](/docs/concepts/overview/kubernetes-api/).
+ * **API group** - The {{< glossary_tooltip text="API Group" term_id="api-group" >}} being accessed (for resource requests only). An empty string designates the [core API group](/docs/reference/using-api/api-overview/#api-groups).
## Determine the Request Verb
diff --git a/content/en/docs/reference/glossary/rkt.md b/content/en/docs/reference/glossary/rkt.md
deleted file mode 100644
index 165bce5406130..0000000000000
--- a/content/en/docs/reference/glossary/rkt.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: rkt
-id: rkt
-date: 2019-01-24
-full_link: https://coreos.com/rkt/
-short_description: >
- A security-minded, standards-based container engine.
-
-aka:
-tags:
-- security
-- tool
----
- A security-minded, standards-based container engine.
-
-
-
-rkt is an application {{< glossary_tooltip text="container" term_id="container" >}} engine featuring a {{< glossary_tooltip text="Pod" term_id="pod" >}}-native approach, a pluggable execution environment, and a well-defined surface area. rkt allows users to apply different configurations at both the Pod and application level. Each Pod executes directly in the classic Unix process model, in a self-contained, isolated environment.
diff --git a/content/en/docs/reference/scheduling/config.md b/content/en/docs/reference/scheduling/config.md
index 530e881cbd84f..0dca862fb9f87 100644
--- a/content/en/docs/reference/scheduling/config.md
+++ b/content/en/docs/reference/scheduling/config.md
@@ -20,10 +20,7 @@ by implementing one or more of these extension points.
You can specify scheduling profiles by running `kube-scheduler --config `,
using the component config APIs
-([`v1alpha1`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1alpha1?tab=doc#KubeSchedulerConfiguration)
-or [`v1alpha2`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1alpha2?tab=doc#KubeSchedulerConfiguration)).
-The `v1alpha2` API allows you to configure kube-scheduler to run
-[multiple profiles](#multiple-profiles).
+([`v1beta1`](https://pkg.go.dev/k8s.io/kube-scheduler@v0.19.0/config/v1beta1?tab=doc#KubeSchedulerConfiguration)).
A minimal configuration looks as follows:
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md
index 21a6e628a8716..f84c62c01d20f 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md
@@ -12,6 +12,14 @@ weight: 90
from the community. Please try it out and give us feedback!
{{< /caution >}}
+## kubeadm alpha certs {#cmd-certs}
+
+A collection of operations for managing Kubernetes certificates.
+
+{{< tabs name="tab-certs" >}}
+{{< tab name="overview" include="generated/kubeadm_alpha_certs.md" />}}
+{{< /tabs >}}
+
## kubeadm alpha certs renew {#cmd-certs-renew}
You can renew all Kubernetes certificates using the `all` subcommand or renew them selectively.
@@ -42,6 +50,15 @@ to enable the automatic copy of certificates when joining additional control-pla
{{< tab name="certificate-key" include="generated/kubeadm_alpha_certs_certificate-key.md" />}}
{{< /tabs >}}
+## kubeadm alpha certs generate-csr {#cmd-certs-generate-csr}
+
+This command can be used to generate certificate signing requests (CSRs) which
+can be submitted to a certificate authority (CA) for signing.
+
+{{< tabs name="tab-certs-generate-csr" >}}
+{{< tab name="certificate-generate-csr" include="generated/kubeadm_alpha_certs_generate-csr.md" />}}
+{{< /tabs >}}
+
## kubeadm alpha certs check-expiration {#cmd-certs-check-expiration}
This command checks expiration for the certificates in the local PKI managed by kubeadm.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md
index 655f9ec875f14..23dff658e9eee 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-config.md
@@ -16,6 +16,10 @@ You can use `kubeadm config print` to print the default configuration and `kubea
convert your old configuration files to a newer version. `kubeadm config images list` and
`kubeadm config images pull` can be used to list and pull the images that kubeadm requires.
+For more information navigate to
+[Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file)
+or [Using kubeadm join with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-join/#config-file).
+
In Kubernetes v1.13.0 and later to list/pull kube-dns images instead of the CoreDNS image
the `--config` method described [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon)
has to be used.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md
index 289767e1e13cf..21ab7a863d73f 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md
@@ -119,6 +119,17 @@ Use the following phase to configure bootstrap tokens.
{{< tab name="bootstrap-token" include="generated/kubeadm_init_phase_bootstrap-token.md" />}}
{{< /tabs >}}
+## kubeadm init phase kubelet-finalize {#cmd-phase-kubelet-finalize-all}
+
+Use the following phase to update settings relevant to the kubelet after TLS
+bootstrap. You can use the `all` subcommand to run all `kubelet-finalize`
+phases.
+
+{{< tabs name="tab-kubelet-finalize" >}}
+{{< tab name="kublet-finalize" include="generated/kubeadm_init_phase_kubelet-finalize.md" />}}
+{{< tab name="kublet-finalize-all" include="generated/kubeadm_init_phase_kubelet-finalize_all.md" />}}
+{{< tab name="kublet-finalize-cert-rotation" include="generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md" />}}
+{{< /tabs >}}
## kubeadm init phase addon {#cmd-phase-addon}
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md
index 997240399e435..7a210ba5de257 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md
@@ -114,16 +114,18 @@ The config file is still considered beta and may change in future versions.
It's possible to configure `kubeadm init` with a configuration file instead of command
line flags, and some more advanced features may only be available as
-configuration file options. This file is passed with the `--config` option.
+configuration file options. This file is passed using the `--config` flag and it must
+contain a `ClusterConfiguration` structure and optionally more structures separated by `---\n`.
+Mixing `--config` with other flags may not be allowed in some cases.
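+
+For illustration, here is a sketch of such a file with two structures separated
+by `---` (this assumes the `v1beta2` kubeadm API; the Kubernetes version and the
+kubelet settings are placeholders):
+
+```yaml
+apiVersion: kubeadm.k8s.io/v1beta2
+kind: ClusterConfiguration
+kubernetesVersion: v1.19.0          # placeholder version
+---
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+# kubelet settings would go here
+```
+
+You would then pass it with, for example, `kubeadm init --config kubeadm-config.yaml`
+(the file name is arbitrary).
+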
The default configuration can be printed out using the
[kubeadm config print](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.
-It is **recommended** that you migrate your old `v1beta1` configuration to `v1beta2` using
+If your configuration is not using the latest version, it is **recommended** that you migrate using
the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.
-For more details on each field in the `v1beta2` configuration you can navigate to our
-[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2).
+For more information on the fields and usage of the configuration you can navigate to our API reference
+page and pick a version from [the list](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#pkg-subdirectories).
### Adding kube-proxy parameters {#kube-proxy}
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md
index 28d489cfb6ae9..0a39f709273f4 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md
@@ -273,15 +273,17 @@ The config file is still considered beta and may change in future versions.
It's possible to configure `kubeadm join` with a configuration file instead of command
line flags, and some more advanced features may only be available as
configuration file options. This file is passed using the `--config` flag and it must
-contain a `JoinConfiguration` structure.
+contain a `JoinConfiguration` structure. Mixing `--config` with other flags may not be
+allowed in some cases.
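+
+As a sketch (assuming the `v1beta2` kubeadm API; the endpoint, token, and file
+name are placeholders), a minimal file passed with `--config` could look like:
+
+```yaml
+apiVersion: kubeadm.k8s.io/v1beta2
+kind: JoinConfiguration
+discovery:
+  bootstrapToken:
+    apiServerEndpoint: "192.168.0.10:6443"    # placeholder control-plane endpoint
+    token: "abcdef.0123456789abcdef"          # placeholder bootstrap token
+    unsafeSkipCAVerification: true            # illustration only; prefer caCertHashes
+```
+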
-To print the default values of `JoinConfiguration` run the following command:
+The default configuration can be printed out using the
+[kubeadm config print](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.
-```shell
-kubeadm config print join-defaults
-```
+If your configuration is not using the latest version, it is **recommended** that you migrate using
+the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.
-For details on individual fields in `JoinConfiguration` see [the godoc](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#JoinConfiguration).
+For more information on the fields and usage of the configuration you can navigate to our API reference
+page and pick a version from [the list](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#pkg-subdirectories).
## {{% heading "whatsnext" %}}
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase.md
index a7f4b6d1a6468..1f712f912cbc3 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-upgrade-phase.md
@@ -15,6 +15,7 @@ be called on a primary control-plane node.
{{< tabs name="tab-phase" >}}
{{< tab name="phase" include="generated/kubeadm_upgrade_node_phase.md" />}}
+{{< tab name="preflight" include="generated/kubeadm_upgrade_node_phase_preflight.md" />}}
{{< tab name="control-plane" include="generated/kubeadm_upgrade_node_phase_control-plane.md" />}}
{{< tab name="kubelet-config" include="generated/kubeadm_upgrade_node_phase_kubelet-config.md" />}}
{{< /tabs >}}
diff --git a/content/en/docs/setup/best-practices/multiple-zones.md b/content/en/docs/setup/best-practices/multiple-zones.md
index 7c2622641b865..501e9546428bf 100644
--- a/content/en/docs/setup/best-practices/multiple-zones.md
+++ b/content/en/docs/setup/best-practices/multiple-zones.md
@@ -4,401 +4,141 @@ reviewers:
- justinsb
- quinton-hoole
title: Running in multiple zones
-weight: 10
+weight: 20
content_type: concept
---
-This page describes how to run a cluster in multiple zones.
-
-
+This page describes running Kubernetes across multiple zones.
-## Introduction
-
-Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
-(GCE calls them simply "zones", AWS calls them "availability zones", here we'll refer to them as "zones").
-This is a lightweight version of a broader Cluster Federation feature (previously referred to by the affectionate
-nickname ["Ubernetes"](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/multicluster/federation.md)).
-Full Cluster Federation allows combining separate
-Kubernetes clusters running in different regions or cloud providers
-(or on-premises data centers). However, many
-users simply want to run a more available Kubernetes cluster in multiple zones
-of their single cloud provider, and this is what the multizone support in 1.2 allows
-(this previously went by the nickname "Ubernetes Lite").
-
-Multizone support is deliberately limited: a single Kubernetes cluster can run
-in multiple zones, but only within the same region (and cloud provider). Only
-GCE and AWS are currently supported automatically (though it is easy to
-add similar support for other clouds or even bare metal, by simply arranging
-for the appropriate labels to be added to nodes and volumes).
-
-
-## Functionality
-
-When nodes are started, the kubelet automatically adds labels to them with
-zone information.
-
-Kubernetes will automatically spread the pods in a replication controller
-or service across nodes in a single-zone cluster (to reduce the impact of
-failures.) With multiple-zone clusters, this spreading behavior is
-extended across zones (to reduce the impact of zone failures.) (This is
-achieved via `SelectorSpreadPriority`). This is a best-effort
-placement, and so if the zones in your cluster are heterogeneous
-(e.g. different numbers of nodes, different types of nodes, or
-different pod resource requirements), this might prevent perfectly
-even spreading of your pods across zones. If desired, you can use
-homogeneous zones (same number and types of nodes) to reduce the
-probability of unequal spreading.
-
-When persistent volumes are created, the `PersistentVolumeLabel`
-admission controller automatically adds zone labels to them. The scheduler (via the
-`VolumeZonePredicate` predicate) will then ensure that pods that claim a
-given volume are only placed into the same zone as that volume, as volumes
-cannot be attached across zones.
-
-## Limitations
-
-There are some important limitations of the multizone support:
-
-* We assume that the different zones are located close to each other in the
-network, so we don't perform any zone-aware routing. In particular, traffic
-that goes via services might cross zones (even if some pods backing that service
-exist in the same zone as the client), and this may incur additional latency and cost.
-
-* Volume zone-affinity will only work with a `PersistentVolume`, and will not
-work if you directly specify an EBS volume in the pod spec (for example).
-
-* Clusters cannot span clouds or regions (this functionality will require full
-federation support).
-
-* Although your nodes are in multiple zones, kube-up currently builds
-a single master node by default. While services are highly
-available and can tolerate the loss of a zone, the control plane is
-located in a single zone. Users that want a highly available control
-plane should follow the [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) instructions.
-
-### Volume limitations
-The following limitations are addressed with [topology-aware volume binding](/docs/concepts/storage/storage-classes/#volume-binding-mode).
-
-* StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with
- pod affinity or anti-affinity policies.
-
-* If the name of the StatefulSet contains dashes ("-"), volume zone spreading
- may not provide a uniform distribution of storage across zones.
-
-* When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass
- needs to be configured for a specific single zone, or the PVs need to be
- statically provisioned in a specific zone. Another workaround is to use a
- StatefulSet, which will ensure that all the volumes for a replica are
- provisioned in the same zone.
-
-## Walkthrough
-
-We're now going to walk through setting up and using a multi-zone
-cluster on both GCE & AWS. To do so, you bring up a full cluster
-(specifying `MULTIZONE=true`), and then you add nodes in additional zones
-by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`).
-
-### Bringing up your cluster
-
-Create the cluster as normal, but pass MULTIZONE to tell the cluster to manage multiple zones; creating nodes in us-central1-a.
-
-GCE:
-
-```shell
-curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
-```
-
-AWS:
-
-```shell
-curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
-```
-
-This step brings up a cluster as normal, still running in a single zone
-(but `MULTIZONE=true` has enabled multi-zone capabilities).
-
-### Nodes are labeled
-
-View the nodes; you can see that they are labeled with zone information.
-They are all in `us-central1-a` (GCE) or `us-west-2a` (AWS) so far. The
-labels are `failure-domain.beta.kubernetes.io/region` for the region,
-and `failure-domain.beta.kubernetes.io/zone` for the zone:
-
-```shell
-kubectl get nodes --show-labels
-```
-
-The output is similar to this:
-
-```shell
-NAME STATUS ROLES AGE VERSION LABELS
-kubernetes-master Ready,SchedulingDisabled 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
-kubernetes-minion-87j9 Ready 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
-kubernetes-minion-9vlv Ready 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-a12q Ready 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
-```
-
-### Add more nodes in a second zone
-
-Let's add another set of nodes to the existing cluster, reusing the
-existing master, running in a different zone (us-central1-b or us-west-2b).
-We run kube-up again, but by specifying `KUBE_USE_EXISTING_MASTER=true`
-kube-up will not create a new master, but will reuse one that was previously
-created instead.
-
-GCE:
-
-```shell
-KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
-```
-
-On AWS we also need to specify the network CIDR for the additional
-subnet, along with the master internal IP address:
-
-```shell
-KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
-```
-
-
-View the nodes again; 3 more nodes should have launched and be tagged
-in us-central1-b:
+## Background
-```shell
-kubectl get nodes --show-labels
-```
+Kubernetes is designed so that a single Kubernetes cluster can run
+across multiple failure zones, typically where these zones fit within
+a logical grouping called a _region_. Major cloud providers define a region
+as a set of failure zones (also called _availability zones_) that provide
+a consistent set of features: within a region, each zone offers the same
+APIs and services.
-The output is similar to this:
+Typical cloud architectures aim to minimize the chance that a failure in
+one zone also impairs services in another zone.
-```shell
-NAME STATUS ROLES AGE VERSION LABELS
-kubernetes-master Ready,SchedulingDisabled 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
-kubernetes-minion-281d Ready 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
-kubernetes-minion-87j9 Ready 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
-kubernetes-minion-9vlv Ready 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-a12q Ready 17m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
-kubernetes-minion-pp2f Ready 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
-kubernetes-minion-wf8i Ready 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
-```
+## Control plane behavior
-### Volume affinity
+All [control plane components](/docs/concepts/overview/components/#control-plane-components)
+support running as a pool of interchangeable resources, replicated per
+component.
-Create a volume using the dynamic volume creation (only PersistentVolumes are supported for zone affinity):
-
-```bash
-kubectl apply -f - <<EOF
 {{< note >}}
-For version 1.3+ Kubernetes will distribute dynamic PV claims across
-the configured zones. For version 1.2, dynamic persistent volumes were
-always created in the zone of the cluster master
-(here us-central1-a / us-west-2a); that issue
-([#23330](https://github.com/kubernetes/kubernetes/issues/23330))
-was addressed in 1.3+.
+Kubernetes does not provide cross-zone resilience for the API server
+endpoints. You can use various techniques to improve availability for
+the cluster API server, including DNS round-robin, SRV records, or
+a third-party load balancing solution with health checking.
{{< /note >}}
-Now let's validate that Kubernetes automatically labeled the zone & region the PV was created in.
-
-```shell
-kubectl get pv --show-labels
-```
-
-The output is similar to this:
-
-```shell
-NAME CAPACITY ACCESSMODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
-pv-gce-mj4gm 5Gi RWO Retain Bound default/claim1 manual 46s failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a
-```
-
-So now we will create a pod that uses the persistent volume claim.
-Because GCE PDs / AWS EBS volumes cannot be attached across zones,
-this means that this pod can only be created in the same zone as the volume:
-
-```yaml
-kubectl apply -f - <<EOF
+## Node behavior
+
+Kubernetes automatically spreads the Pods for
+workload resources (such as {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
+or {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}})
+across different nodes in a cluster. This spreading helps
+reduce the impact of failures.
-The pods should be spread across all 3 zones:
+When nodes start up, the kubelet on each node automatically adds
+{{< glossary_tooltip text="labels" term_id="label" >}} to the Node object
+that represents that specific kubelet in the Kubernetes API.
+These labels can include
+[zone information](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone).
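+
+For illustration only, here is a sketch of how those labels might appear on a
+Node object in a cloud zone (the node name and the region/zone values are
+placeholders, not output from a real cluster):
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  name: worker-a-1                        # hypothetical node name
+  labels:
+    # set automatically by the kubelet / cloud provider integration
+    topology.kubernetes.io/region: us-east-1
+    topology.kubernetes.io/zone: us-east-1a
+```
+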
-```shell
-kubectl describe pod -l app=guestbook | grep Node
-```
+If your cluster spans multiple zones or regions, you can use node labels
+in conjunction with
+[Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
+to control how Pods are spread across your cluster among fault domains:
+regions, zones, and even specific nodes.
+These hints enable the
+{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} to place
+Pods for better expected availability, reducing the risk that a correlated
+failure affects your whole workload.
-```shell
-Node: kubernetes-minion-9vlv/10.240.0.5
-Node: kubernetes-minion-281d/10.240.0.8
-Node: kubernetes-minion-olsh/10.240.0.11
-```
+For example, you can set a constraint to make sure that the
+3 replicas of a StatefulSet are all running in different zones to each
+other, whenever that is feasible. You can define this declaratively
+without explicitly defining which availability zones are in use for
+each workload.
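+
+As a minimal sketch of that idea (the StatefulSet name, labels, and image below
+are placeholders, not part of any real workload), such a constraint could look
+like this:
+
+```yaml
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: example-db                  # hypothetical name
+spec:
+  serviceName: example-db
+  replicas: 3
+  selector:
+    matchLabels:
+      app: example-db
+  template:
+    metadata:
+      labels:
+        app: example-db
+    spec:
+      topologySpreadConstraints:
+      - maxSkew: 1
+        topologyKey: topology.kubernetes.io/zone
+        # spread across zones when feasible, but still schedule if it is not
+        whenUnsatisfiable: ScheduleAnyway
+        labelSelector:
+          matchLabels:
+            app: example-db
+      containers:
+      - name: db
+        image: registry.example/db:1.0   # placeholder image
+```
+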
-```shell
-kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
-```
+### Distributing nodes across zones
-```shell
-NAME STATUS ROLES AGE VERSION LABELS
-kubernetes-minion-9vlv Ready 34m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-281d Ready 20m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
-kubernetes-minion-olsh Ready 3m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
-```
+Kubernetes' core does not create nodes for you; you need to do that yourself,
+or use a tool such as the [Cluster API](https://cluster-api.sigs.k8s.io/) to
+manage nodes on your behalf.
+Using tools such as the Cluster API you can define sets of machines to run as
+worker nodes for your cluster across multiple failure domains, and rules to
+automatically heal the cluster in case of whole-zone service disruption.
-Load-balancers span all zones in a cluster; the guestbook-go example
-includes an example load-balanced service:
+## Manual zone assignment for Pods
-```shell
-kubectl describe service guestbook | grep LoadBalancer.Ingress
-```
-
-The output is similar to this:
-
-```shell
-LoadBalancer Ingress: 130.211.126.21
-```
-
-Set the above IP:
-
-```shell
-export IP=130.211.126.21
-```
-
-Explore with curl via IP:
-
-```shell
-curl -s http://${IP}:3000/env | grep HOSTNAME
-```
-
-The output is similar to this:
-
-```shell
- "HOSTNAME": "guestbook-44sep",
-```
-
-Again, explore multiple times:
-
-```shell
-(for i in `seq 20`; do curl -s http://${IP}:3000/env | grep HOSTNAME; done) | sort | uniq
-```
-
-The output is similar to this:
-
-```shell
- "HOSTNAME": "guestbook-44sep",
- "HOSTNAME": "guestbook-hum5n",
- "HOSTNAME": "guestbook-ppm40",
-```
-
-The load balancer correctly targets all the pods, even though they are in multiple zones.
-
-### Shutting down the cluster
-
-When you're done, clean up:
-
-GCE:
-
-```shell
-KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh
-KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh
-KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a kubernetes/cluster/kube-down.sh
-```
-
-AWS:
-
-```shell
-KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c kubernetes/cluster/kube-down.sh
-KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh
-KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh
-```
+You can apply [node selector constraints](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
+to Pods that you create, as well as to Pod templates in workload resources
+such as Deployment, StatefulSet, or Job.
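+
+For example, a sketch of a Pod manually pinned to a single zone with a node
+selector (the Pod name, zone value, and image are illustrative placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: zonal-pod                             # hypothetical name
+spec:
+  nodeSelector:
+    # only schedule onto nodes labelled as being in this zone
+    topology.kubernetes.io/zone: us-east-1a   # illustrative zone
+  containers:
+  - name: app
+    image: registry.example/app:1.0           # placeholder image
+```
+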
+## Storage access for zones
+When persistent volumes are created, the `PersistentVolumeLabel`
+[admission controller](/docs/reference/access-authn-authz/admission-controllers/)
+automatically adds zone labels to any PersistentVolumes that are linked to a specific
+zone. The {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} then ensures,
+through its `NoVolumeZoneConflict` predicate, that pods which claim a given PersistentVolume
+are only placed into the same zone as that volume.
+
+You can specify a {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}
+for PersistentVolumeClaims that specifies the failure domains (zones) that the
+storage in that class may use.
+To learn about configuring a StorageClass that is aware of failure domains or zones,
+see [Allowed topologies](/docs/concepts/storage/storage-classes/#allowed-topologies).
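+
+As a sketch only (the provisioner, class name, and zone values below are
+placeholders; check your storage provider's documentation for the values it
+actually supports), an allowed-topologies restriction looks like this:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: zonal-standard                    # hypothetical name
+provisioner: example.com/provisioner      # placeholder provisioner
+volumeBindingMode: WaitForFirstConsumer
+allowedTopologies:
+- matchLabelExpressions:
+  - key: topology.kubernetes.io/zone
+    values:
+    - us-east-1a
+    - us-east-1b
+```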
+
+## Networking
+
+By itself, Kubernetes does not include zone-aware networking. You can use a
+[network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
+to configure cluster networking, and that network solution might have zone-specific
+elements. For example, if your cloud provider supports Services with
+`type=LoadBalancer`, the load balancer might only send traffic to Pods running in the
+same zone as the load balancer element processing a given connection.
+Check your cloud provider's documentation for details.
+
+For custom or on-premises deployments, similar considerations apply.
+{{< glossary_tooltip text="Service" term_id="service" >}} and
+{{< glossary_tooltip text="Ingress" term_id="ingress" >}} behavior, including handling
+of different failure zones, does vary depending on exactly how your cluster is set up.
+
+## Fault recovery
+
+When you set up your cluster, you might also need to consider whether and how
+your setup can restore service if all of the failure zones in a region go
+off-line at the same time. For example, do you rely on there being at least
+one node able to run Pods in a zone?
+Make sure that any cluster-critical repair work does not rely
+on there being at least one healthy node in your cluster. For example: if all nodes
+are unhealthy, you might need to run a repair Job with a special
+{{< glossary_tooltip text="toleration" term_id="toleration" >}} so that the repair
+can complete enough to bring at least one node into service.
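+
+As an illustration only, such a repair Job might be sketched like this; the
+Job name, image, and the exact taints to tolerate are assumptions you would
+adjust to match your own cluster:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: cluster-repair                    # hypothetical name
+spec:
+  template:
+    spec:
+      tolerations:
+      # tolerate the node conditions that keep ordinary Pods away
+      - key: "node.kubernetes.io/not-ready"
+        operator: "Exists"
+        effect: "NoExecute"
+      - key: "node.kubernetes.io/unreachable"
+        operator: "Exists"
+        effect: "NoExecute"
+      containers:
+      - name: repair
+        image: registry.example/cluster-repair:1.0   # placeholder image
+      restartPolicy: Never
+```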
+
+Kubernetes doesn't come with an answer for this challenge; however, it's
+something to consider.
+
+## {{% heading "whatsnext" %}}
+
+To learn how the scheduler places Pods in a cluster, honoring the configured constraints,
+visit [Scheduling and Eviction](/docs/concepts/scheduling-eviction/).
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index f82e4637c2dcd..daf4aaec1a670 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -139,7 +139,7 @@ is not supported by kubeadm.
For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).
-For a complete list of configuration options, see the [configuration file documentation](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
+To configure `kubeadm init` with a configuration file see [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
To customize control plane components, including optional IPv6 assignment to liveness probe for control plane components and etcd server, provide extra arguments to each component as documented in [custom arguments](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/).
diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
index df4b49dc19f71..5b3fd114b0a24 100644
--- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
+++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
@@ -132,31 +132,12 @@ The following file is an Ingress resource that sends traffic to your Service via
1. Create `example-ingress.yaml` from the following file:
- ```yaml
- apiVersion: networking.k8s.io/v1
- kind: Ingress
- metadata:
- name: example-ingress
- annotations:
- nginx.ingress.kubernetes.io/rewrite-target: /$1
- spec:
- rules:
- - host: hello-world.info
- http:
- paths:
- - path: /
- pathType: Prefix
- backend:
- service:
- name: web
- port:
- number: 8080
- ```
+ {{< codenew file="service/networking/example-ingress.yaml" >}}
1. Create the Ingress resource by running the following command:
```shell
- kubectl apply -f example-ingress.yaml
+ kubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml
```
Output:
diff --git a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
index a6c2e217a50e2..3eceb4f6d2fbc 100644
--- a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
+++ b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
@@ -152,7 +152,7 @@ for database debugging.
or
```shell
- kubectl port-forward service/redis-master 7000:6379
+ kubectl port-forward service/redis-master 7000:redis
```
Any of the above commands works. The output is similar to this:
diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
index e762258c88c1b..8680abad43e6e 100644
--- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
+++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
@@ -222,7 +222,7 @@ data:
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
- proxy . /etc/resolv.conf
+ forward . /etc/resolv.conf
cache 30
loop
reload
diff --git a/content/en/docs/tasks/administer-cluster/out-of-resource.md b/content/en/docs/tasks/administer-cluster/out-of-resource.md
index 973eda947eacd..908d7155d541d 100644
--- a/content/en/docs/tasks/administer-cluster/out-of-resource.md
+++ b/content/en/docs/tasks/administer-cluster/out-of-resource.md
@@ -341,4 +341,11 @@ to prevent system OOMs, and promote eviction of workloads so cluster state can r
The Pod eviction may evict more Pods than needed due to stats collection timing gap. This can be mitigated by adding
the ability to get root container stats on an on-demand basis [(https://github.com/google/cadvisor/issues/1247)](https://github.com/google/cadvisor/issues/1247) in the future.
+### active_file memory is not considered as available memory
+
+On Linux, the kernel tracks the number of bytes of file-backed memory on the active LRU list as the `active_file` statistic. The kubelet treats `active_file` memory areas as not reclaimable. For workloads that make intensive use of block-backed local storage, including ephemeral local storage, kernel-level caches of file and block data mean that many recently accessed cache pages are likely to be counted as `active_file`. If enough of these kernel block buffers are on the active LRU list, the kubelet is liable to observe this as high resource use and taint the node as experiencing memory pressure, triggering Pod eviction.
+
+For more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
+
+You can work around that behavior by setting the memory limit and memory request the same for containers likely to perform intensive I/O activity. You will need to estimate or measure an optimal memory limit value for that container.
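+
+For example, a minimal sketch of a Pod where the memory request is set equal to
+the limit (the name, image, and the 2Gi figure are placeholders; substitute a
+value you have measured for your own workload):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: io-heavy-app                      # hypothetical name
+spec:
+  containers:
+  - name: app
+    image: registry.example/io-heavy:1.0  # placeholder image
+    resources:
+      requests:
+        memory: "2Gi"                     # request equals the limit
+      limits:
+        memory: "2Gi"
+```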
diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
index 4f00675c3732f..b6249f50efa1a 100644
--- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
+++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
@@ -39,22 +39,7 @@ the kubelet command line option `--reserved-cpus` to set an
## Node Allocatable
-```text
- Node Capacity
----------------------------
-| kube-reserved |
-|-------------------------|
-| system-reserved |
-|-------------------------|
-| eviction-threshold |
-|-------------------------|
-| |
-| allocatable |
-| (available for pods) |
-| |
-| |
----------------------------
-```
+![node capacity](/images/docs/node-capacity.svg)
`Allocatable` on a Kubernetes node is defined as the amount of compute resources
that are available for pods. The scheduler does not over-subscribe
diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
index aa01c902e4e66..b1a7e565480c4 100644
--- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
+++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
@@ -11,7 +11,7 @@ content_type: concept
{{< feature-state state="beta" for_k8s_version="v1.11" >}}
-Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the {{< glossary_tooltip text="`cloud-controller-manager`" term_id="cloud-controller-manager" >}} binary allows cloud vendors to evolve independently from the core Kubernetes code.
+Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the `{{< glossary_tooltip text="cloud-controller-manager" term_id="cloud-controller-manager" >}}` binary allows cloud vendors to evolve independently from the core Kubernetes code.
The `cloud-controller-manager` can be linked to any cloud provider that satisfies [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go). For backwards compatibility, the [cloud-controller-manager](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager) provided in the core Kubernetes project uses the same cloud libraries as `kube-controller-manager`. Cloud providers already supported in Kubernetes core are expected to use the in-tree cloud-controller-manager to transition out of Kubernetes core.
diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
index ed1b9657c81a1..3f37916660365 100644
--- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md
+++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
@@ -6,23 +6,21 @@ reviewers:
- kow3ns
title: Safely Drain a Node while Respecting the PodDisruptionBudget
content_type: task
+min-kubernetes-server-version: 1.5
---
-This page shows how to safely drain a node, respecting the PodDisruptionBudget you have defined.
-
+This page shows how to safely drain a {{< glossary_tooltip text="node" term_id="node" >}},
+respecting the PodDisruptionBudget you have defined.
## {{% heading "prerequisites" %}}
-
-This task assumes that you have met the following prerequisites:
-
-* You are using Kubernetes release >= 1.5.
-* Either:
+{{% version-check %}}
+This task also assumes that you have met the following prerequisites:
1. You do not require your applications to be highly available during the
node drain, or
- 1. You have read about the [PodDisruptionBudget concept](/docs/concepts/workloads/pods/disruptions/)
- and [Configured PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/) for
+ 1. You have read about the [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) concept,
+ and have [configured PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/) for
applications that need them.
@@ -35,10 +33,10 @@ You can use `kubectl drain` to safely evict all of your pods from a
node before you perform maintenance on the node (e.g. kernel upgrade,
hardware maintenance, etc.). Safe evictions allow the pod's containers
to [gracefully terminate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
-and will respect the `PodDisruptionBudgets` you have specified.
+and will respect the PodDisruptionBudgets you have specified.
{{< note >}}
-By default `kubectl drain` will ignore certain system pods on the node
+By default `kubectl drain` ignores certain system pods on the node
that cannot be killed; see
the [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands/#drain)
documentation for more details.
@@ -78,29 +76,29 @@ The `kubectl drain` command should only be issued to a single node at a
time. However, you can run multiple `kubectl drain` commands for
different nodes in parallel, in different terminals or in the
background. Multiple drain commands running concurrently will still
-respect the `PodDisruptionBudget` you specify.
+respect the PodDisruptionBudget you specify.
For example, if you have a StatefulSet with three replicas and have
-set a `PodDisruptionBudget` for that set specifying `minAvailable:
-2`. `kubectl drain` will only evict a pod from the StatefulSet if all
-three pods are ready, and if you issue multiple drain commands in
-parallel, Kubernetes will respect the PodDisruptionBudget and ensure
-that only one pod is unavailable at any given time. Any drains that
-would cause the number of ready replicas to fall below the specified
-budget are blocked.
+set a PodDisruptionBudget for that set specifying `minAvailable: 2`,
+`kubectl drain` only evicts a pod from the StatefulSet if all three
+replica Pods are ready; if you then issue multiple drain commands in
+parallel, Kubernetes respects the PodDisruptionBudget and ensures
+that only 1 (calculated as `replicas - minAvailable`) Pod is unavailable
+at any given time. Any drains that would cause the number of ready
+replicas to fall below the specified budget are blocked.
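+
+As a sketch of that situation (the names and labels are placeholders, and on
+older clusters the apiVersion may be `policy/v1beta1` instead of `policy/v1`):
+
+```yaml
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: example-pdb                   # hypothetical name
+spec:
+  minAvailable: 2
+  selector:
+    matchLabels:
+      app: example-statefulset        # must match the StatefulSet's Pod labels
+```
+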
-## The Eviction API
+## The Eviction API {#eviction-api}
If you prefer not to use [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands/#drain) (such as
to avoid calling to an external command, or to get finer control over the pod
eviction process), you can also programmatically cause evictions using the eviction API.
-You should first be familiar with using [Kubernetes language clients](/docs/tasks/administer-cluster/access-cluster-api/#programmatic-access-to-the-api).
+You should first be familiar with using [Kubernetes language clients](/docs/tasks/administer-cluster/access-cluster-api/#programmatic-access-to-the-api) to access the API.
The eviction subresource of a
-pod can be thought of as a kind of policy-controlled DELETE operation on the pod
-itself. To attempt an eviction (perhaps more REST-precisely, to attempt to
-*create* an eviction), you POST an attempted operation. Here's an example:
+Pod can be thought of as a kind of policy-controlled DELETE operation on the Pod
+itself. To attempt an eviction (more precisely: to attempt to
+*create* an Eviction), you POST an attempted operation. Here's an example:
```json
{
@@ -116,21 +114,19 @@ itself. To attempt an eviction (perhaps more REST-precisely, to attempt to
You can attempt an eviction using `curl`:
```bash
-curl -v -H 'Content-type: application/json' http://127.0.0.1:8080/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json
+curl -v -H 'Content-type: application/json' https://your-cluster-api-endpoint.example/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json
```
The API can respond in one of three ways:
-- If the eviction is granted, then the pod is deleted just as if you had sent
- a `DELETE` request to the pod's URL and you get back `200 OK`.
+- If the eviction is granted, then the Pod is deleted just as if you had sent
+ a `DELETE` request to the Pod's URL and you get back `200 OK`.
- If the current state of affairs wouldn't allow an eviction by the rules set
forth in the budget, you get back `429 Too Many Requests`. This is
typically used for generic rate limiting of *any* requests, but here we mean
that this request isn't allowed *right now* but it may be allowed later.
- Currently, callers do not get any `Retry-After` advice, but they may in
- future versions.
-- If there is some kind of misconfiguration, like multiple budgets pointing at
- the same pod, you will get `500 Internal Server Error`.
+- If there is some kind of misconfiguration, for example multiple PodDisruptionBudgets
+  that refer to the same Pod, you get a `500 Internal Server Error` response.
For a given eviction request, there are two cases:
@@ -139,21 +135,25 @@ For a given eviction request, there are two cases:
- There is at least one budget. In this case, any of the three above responses may
apply.
-In some cases, an application may reach a broken state where it will never return anything
-other than 429 or 500. This can happen, for example, if the replacement pod created by the
-application's controller does not become ready, or if the last pod evicted has a very long
-termination grace period.
+## Stuck evictions
+
+In some cases, an application may reach a broken state, one where unless you intervene the
+eviction API will never return anything other than 429 or 500.
+
+For example: this can happen if a ReplicaSet is creating Pods for your application but
+the replacement Pods do not become `Ready`. You can also see similar symptoms if the
+last Pod evicted has a very long termination grace period.
In this case, there are two potential solutions:
-- Abort or pause the automated operation. Investigate the reason for the stuck application, and restart the automation.
-- After a suitably long wait, `DELETE` the pod instead of using the eviction API.
+- Abort or pause the automated operation. Investigate the reason for the stuck application,
+ and restart the automation.
+- After a suitably long wait, `DELETE` the Pod from your cluster's control plane, instead
+ of using the eviction API.
Kubernetes does not specify what the behavior should be in this case; it is up to the
application owners and cluster owners to establish an agreement on behavior in these cases.
-
-
## {{% heading "whatsnext" %}}
@@ -162,4 +162,3 @@ application owners and cluster owners to establish an agreement on behavior in t
-
diff --git a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md
index 3afda46609fc9..76c555aace1f9 100644
--- a/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md
+++ b/content/en/docs/tasks/configure-pod-container/assign-cpu-resource.md
@@ -222,6 +222,13 @@ Container is automatically assigned the default limit. Cluster administrators ca
[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core/)
to specify a default value for the CPU limit.
+## If you specify a CPU limit but do not specify a CPU request
+
+If you specify a CPU limit for a Container but do not specify a CPU request, Kubernetes automatically
+assigns a CPU request that matches the limit. Similarly, if a Container specifies its own memory limit,
+but does not specify a memory request, Kubernetes automatically assigns a memory request that matches
+the limit.
+
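+For illustration, a minimal sketch of a Pod that sets only a CPU limit (the
+name and image are placeholders); after creation, its CPU request is also `"500m"`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cpu-limit-only                    # hypothetical name
+spec:
+  containers:
+  - name: app
+    image: registry.example/app:1.0       # placeholder image
+    resources:
+      limits:
+        cpu: "500m"
+      # no CPU request specified: the request defaults to 500m
+```
+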
## Motivation for CPU requests and limits
By configuring the CPU requests and limits of the Containers that run in your
diff --git a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
index 00c5eb89447f3..f5f2e78ba2db3 100644
--- a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
+++ b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
@@ -9,29 +9,29 @@ content_type: concept
Resource usage metrics, such as container CPU and memory usage,
-are available in Kubernetes through the Metrics API. These metrics can be either accessed directly
-by user, for example by using `kubectl top` command, or used by a controller in the cluster, e.g.
+are available in Kubernetes through the Metrics API. These metrics can be accessed either directly
+by the user with the `kubectl top` command, or by a controller in the cluster, for example
Horizontal Pod Autoscaler, to make decisions.
## The Metrics API
-Through the Metrics API you can get the amount of resource currently used
+Through the Metrics API, you can get the amount of resource currently used
by a given node or a given pod. This API doesn't store the metric values,
-so it's not possible for example to get the amount of resources used by a
+so it's not possible, for example, to get the amount of resources used by a
given node 10 minutes ago.
The API is no different from any other API:
-- it is discoverable through the same endpoint as the other Kubernetes APIs under `/apis/metrics.k8s.io/` path
-- it offers the same security, scalability and reliability guarantees
+- it is discoverable through the same endpoint as the other Kubernetes APIs under the path: `/apis/metrics.k8s.io/`
+- it offers the same security, scalability, and reliability guarantees
The API is defined in [k8s.io/metrics](https://github.com/kubernetes/metrics/blob/master/pkg/apis/metrics/v1beta1/types.go)
repository. You can find more information about the API there.
{{< note >}}
-The API requires metrics server to be deployed in the cluster. Otherwise it will be not available.
+The API requires the metrics server to be deployed in the cluster. Otherwise, it will not be available.
{{< /note >}}
## Measuring Resource Usage
@@ -49,22 +49,19 @@ The kubelet chooses the window for the rate calculation.
Memory is reported as the working set, in bytes, at the instant the metric was collected.
In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure.
However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
-It includes all anonymous (non-file-backed) memory since kubernetes does not support swap.
+It includes all anonymous (non-file-backed) memory since Kubernetes does not support swap.
The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
## Metrics Server
[Metrics Server](https://github.com/kubernetes-incubator/metrics-server) is a cluster-wide aggregator of resource usage data.
-It is deployed by default in clusters created by `kube-up.sh` script
-as a Deployment object. If you use a different Kubernetes setup mechanism you can deploy it using the provided
+By default, it is deployed in clusters created by the `kube-up.sh` script
+as a Deployment object. If you use a different Kubernetes setup mechanism, you can deploy it using the provided
[deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases) file.
-Metric server collects metrics from the Summary API, exposed by
-[Kubelet](/docs/reference/command-line-tools-reference/kubelet/) on each node.
-
-Metrics Server is registered with the main API server through
+Metrics Server collects metrics from the Summary API, exposed by
+[Kubelet](/docs/reference/command-line-tools-reference/kubelet/) on each node, and is registered with the main API server via
[Kubernetes aggregator](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
Learn more about the metrics server in
[the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
-
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 9c8e1f89515b6..a89fb778a952b 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -114,11 +114,7 @@ Now, we will see how the autoscaler reacts to increased load.
We will start a container, and send an infinite loop of queries to the php-apache service (please run it in a different terminal):
```shell
-kubectl run -it --rm load-generator --image=busybox /bin/sh
-
-Hit enter for command prompt
-
-while true; do wget -q -O- http://php-apache; done
+kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
```
Within a minute or so, we should see the higher CPU load by executing:
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
index c79059d335da1..1da20c5219411 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -319,7 +319,7 @@ For instance if there are 80 replicas and the target has to be scaled down to 10
then during the first step 8 replicas will be reduced. In the next iteration when the number
of replicas is 72, 10% of the pods is 7.2 but the number is rounded up to 8. On each loop of
the autoscaler controller the number of pods to be change is re-calculated based on the number
-of current replicas. When the number of replicas falls below 40 the first policy_(Pods)_ is applied
+of current replicas. When the number of replicas falls below 40 the first policy _(Pods)_ is applied
and 4 replicas will be reduced at a time.
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
@@ -328,7 +328,7 @@ allows at most 10% of the current replicas to be scaled down in one minute.
The policy selection can be changed by specifying the `selectPolicy` field for a scaling
direction. By setting the value to `Min` which would select the policy which allows the
-smallest change in the replica count. Setting the value to `Disabled` completely disabled
+smallest change in the replica count. Setting the value to `Disabled` completely disables
scaling in that direction.
### Stabilization Window
@@ -405,8 +405,9 @@ behavior:
periodSeconds: 60
```
-To allow a final drop of 5 pods, another policy can be added with a selection
-strategy of maximum:
+To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down
+policy with a fixed size of 5, and set `selectPolicy` to minimum. Setting `selectPolicy` to `Min` means
+that the autoscaler chooses the policy that affects the smallest number of Pods:
```yaml
behavior:
@@ -418,7 +419,7 @@ behavior:
- type: Pods
value: 5
periodSeconds: 60
- selectPolicy: Max
+ selectPolicy: Min
```
### Example: disable scale down
@@ -441,4 +442,3 @@ behavior:
* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).
-
diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
index e35fddcd467aa..62e5cfc9cfc88 100644
--- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
+++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
@@ -76,11 +76,16 @@ cat <View kubectl Install and Set Up Guide
-## Minikube
+You can also read the
+[`kubectl` reference documentation](/docs/reference/kubectl/).
-[Minikube](https://minikube.sigs.k8s.io/) is a tool that lets you run
-Kubernetes locally. Minikube runs a single-node Kubernetes cluster on your personal
-computer (including Windows, macOS and Linux PCs) so that you can try out Kubernetes,
-or for daily development work.
+## minikube
-You can follow the official [Get Started!](https://minikube.sigs.k8s.io/docs/start/)
-guide, or read [Install Minikube](/docs/tasks/tools/install-minikube/) if your focus
-is on getting the tool installed.
+[`minikube`](https://minikube.sigs.k8s.io/) is a tool that lets you run Kubernetes
+locally. `minikube` runs a single-node Kubernetes cluster on your personal
+computer (including Windows, macOS and Linux PCs) so that you can try out
+Kubernetes, or for daily development work.
-Once you have Minikube working, you can use it to
+You can follow the official
+[Get Started!](https://minikube.sigs.k8s.io/docs/start/) guide if your focus is
+on getting the tool installed.
+
+View minikube Get Started! Guide
+
+Once you have `minikube` working, you can use it to
[run a sample application](/docs/tutorials/hello-minikube/).
## kind
-Like Minikube, [kind](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes on
-your local computer. Unlike Minikube, kind only works with a single container runtime:
-it requires that you have [Docker](https://docs.docker.com/get-docker/) installed
-and configured.
+Like `minikube`, [`kind`](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes on
+your local computer. Unlike `minikube`, `kind` only works with a single container
+runtime: it requires that you have [Docker](https://docs.docker.com/get-docker/)
+installed and configured.
+
+[Quick Start](https://kind.sigs.k8s.io/docs/user/quick-start/) shows you what
+you need to do to get up and running with `kind`.
-[Quick Start](https://kind.sigs.k8s.io/docs/user/quick-start/) shows you what you
-need to do to get up and running with kind.
+View kind Quick Start Guide
diff --git a/content/en/docs/tasks/tools/install-minikube.md b/content/en/docs/tasks/tools/install-minikube.md
deleted file mode 100644
index d8b5b101c4971..0000000000000
--- a/content/en/docs/tasks/tools/install-minikube.md
+++ /dev/null
@@ -1,262 +0,0 @@
----
-title: Install Minikube
-content_type: task
-weight: 20
-card:
- name: tasks
- weight: 10
----
-
-
-
-This page shows you how to install [Minikube](/docs/tutorials/hello-minikube), a tool that runs a single-node Kubernetes cluster in a virtual machine on your personal computer.
-
-
-
-## {{% heading "prerequisites" %}}
-
-
-{{< tabs name="minikube_before_you_begin" >}}
-{{% tab name="Linux" %}}
-To check if virtualization is supported on Linux, run the following command and verify that the output is non-empty:
-```
-grep -E --color 'vmx|svm' /proc/cpuinfo
-```
-{{% /tab %}}
-
-{{% tab name="macOS" %}}
-To check if virtualization is supported on macOS, run the following command on your terminal.
-```
-sysctl -a | grep -E --color 'machdep.cpu.features|VMX'
-```
-If you see `VMX` in the output (should be colored), the VT-x feature is enabled in your machine.
-{{% /tab %}}
-
-{{% tab name="Windows" %}}
-To check if virtualization is supported on Windows 8 and above, run the following command on your Windows terminal or command prompt.
-```
-systeminfo
-```
-If you see the following output, virtualization is supported on Windows.
-```
-Hyper-V Requirements: VM Monitor Mode Extensions: Yes
- Virtualization Enabled In Firmware: Yes
- Second Level Address Translation: Yes
- Data Execution Prevention Available: Yes
-```
-
-If you see the following output, your system already has a Hypervisor installed and you can skip the next step.
-```
-Hyper-V Requirements: A hypervisor has been detected. Features required for Hyper-V will not be displayed.
-```
-
-
-{{% /tab %}}
-{{< /tabs >}}
-
-
-
-
-
-## Installing minikube
-
-{{< tabs name="tab_with_md" >}}
-{{% tab name="Linux" %}}
-
-### Install kubectl
-
-Make sure you have kubectl installed. You can install kubectl according to the instructions in [Install and Set Up kubectl](/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux).
-
-### Install a Hypervisor
-
-If you do not already have a hypervisor installed, install one of these now:
-
-• [KVM](https://www.linux-kvm.org/), which also uses QEMU
-
-• [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
-
-Minikube also supports a `--driver=none` option that runs the Kubernetes components on the host and not in a VM.
-Using this driver requires [Docker](https://www.docker.com/products/docker-desktop) and a Linux environment but not a hypervisor.
-
-If you're using the `none` driver in Debian or a derivative, use the `.deb` packages for
-Docker rather than the snap package, which does not work with Minikube.
-You can download `.deb` packages from [Docker](https://www.docker.com/products/docker-desktop).
-
-{{< caution >}}
-The `none` VM driver can result in security and data loss issues.
-Before using `--driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information.
-{{< /caution >}}
-
-Minikube also supports a `vm-driver=podman` similar to the Docker driver. Podman run as superuser privilege (root user) is the best way to ensure that your containers have full access to any feature available on your system.
-
-{{< caution >}}
-The `podman` driver requires running the containers as root because regular user accounts don't have full access to all operating system features that their containers might need to run.
-{{< /caution >}}
-
-### Install Minikube using a package
-
-There are *experimental* packages for Minikube available; you can find Linux (AMD64) packages
-from Minikube's [releases](https://github.com/kubernetes/minikube/releases) page on GitHub.
-
-Use your Linux's distribution's package tool to install a suitable package.
-
-### Install Minikube via direct download
-
-If you're not installing via a package, you can download a stand-alone
-binary and use that.
-
-```shell
-curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
- && chmod +x minikube
-```
-
-Here's an easy way to add the Minikube executable to your path:
-
-```shell
-sudo mkdir -p /usr/local/bin/
-sudo install minikube /usr/local/bin/
-```
-
-### Install Minikube using Homebrew
-
-As yet another alternative, you can install Minikube using Linux [Homebrew](https://docs.brew.sh/Homebrew-on-Linux):
-
-```shell
-brew install minikube
-```
-
-{{% /tab %}}
-{{% tab name="macOS" %}}
-### Install kubectl
-
-Make sure you have kubectl installed. You can install kubectl according to the instructions in [Install and Set Up kubectl](/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos).
-
-### Install a Hypervisor
-
-If you do not already have a hypervisor installed, install one of these now:
-
-• [HyperKit](https://github.com/moby/hyperkit)
-
-• [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
-
-• [VMware Fusion](https://www.vmware.com/products/fusion)
-
-### Install Minikube
-The easiest way to install Minikube on macOS is using [Homebrew](https://brew.sh):
-
-```shell
-brew install minikube
-```
-
-You can also install it on macOS by downloading a stand-alone binary:
-
-```shell
-curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
- && chmod +x minikube
-```
-
-Here's an easy way to add the Minikube executable to your path:
-
-```shell
-sudo mv minikube /usr/local/bin
-```
-
-{{% /tab %}}
-{{% tab name="Windows" %}}
-### Install kubectl
-
-Make sure you have kubectl installed. You can install kubectl according to the instructions in [Install and Set Up kubectl](/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows).
-
-### Install a Hypervisor
-
-If you do not already have a hypervisor installed, install one of these now:
-
-• [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install)
-
-• [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
-
-{{< note >}}
-Hyper-V can run on three versions of Windows 10: Windows 10 Enterprise, Windows 10 Professional, and Windows 10 Education.
-{{< /note >}}
-
-### Install Minikube using Chocolatey
-
-The easiest way to install Minikube on Windows is using [Chocolatey](https://chocolatey.org/) (run as an administrator):
-
-```shell
-choco install minikube
-```
-
-After Minikube has finished installing, close the current CLI session and restart. Minikube should have been added to your path automatically.
-
-### Install Minikube using an installer executable
-
-To install Minikube manually on Windows using [Windows Installer](https://docs.microsoft.com/en-us/windows/desktop/msi/windows-installer-portal), download [`minikube-installer.exe`](https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe) and execute the installer.
-
-### Install Minikube via direct download
-
-To install Minikube manually on Windows, download [`minikube-windows-amd64`](https://github.com/kubernetes/minikube/releases/latest), rename it to `minikube.exe`, and add it to your path.
-
-{{% /tab %}}
-{{< /tabs >}}
-
-## Confirm Installation
-
-To confirm successful installation of both a hypervisor and Minikube, you can run the following command to start up a local Kubernetes cluster:
-
-{{< note >}}
-
-For setting the `--driver` with `minikube start`, enter the name of the hypervisor you installed in lowercase letters where `<driver_name>` is mentioned below. A full list of `--driver` values is available in [specifying the VM driver documentation](/docs/setup/learning-environment/minikube/#specifying-the-vm-driver).
-
-{{< /note >}}
-
-{{< caution >}}
-When using KVM, note that libvirt's default QEMU URI under Debian and some other systems is `qemu:///session` whereas Minikube's default QEMU URI is `qemu:///system`. If this is the case for your system, you will need to pass `--kvm-qemu-uri qemu:///session` to `minikube start`.
-{{< /caution >}}
-
-```shell
-minikube start --driver=<driver_name>
-```
-
-Once `minikube start` finishes, run the command below to check the status of the cluster:
-
-```shell
-minikube status
-```
-
-If your cluster is running, the output from `minikube status` should be similar to:
-
-```
-host: Running
-kubelet: Running
-apiserver: Running
-kubeconfig: Configured
-```
-
-After you have confirmed whether Minikube is working with your chosen hypervisor, you can continue to use Minikube or you can stop your cluster. To stop your cluster, run:
-
-```shell
-minikube stop
-```
-
-## Clean up local state {#cleanup-local-state}
-
-If you have previously installed Minikube, and run:
-```shell
-minikube start
-```
-
-and `minikube start` returned an error:
-```
-machine does not exist
-```
-
-then you need to clear minikube's local state:
-```shell
-minikube delete
-```
-
-## {{% heading "whatsnext" %}}
-
-
-* [Running Kubernetes Locally via Minikube](/docs/setup/learning-environment/minikube/)
diff --git a/content/en/docs/tutorials/configuration/configure-java-microservice/_index.md b/content/en/docs/tutorials/configuration/configure-java-microservice/_index.md
new file mode 100755
index 0000000000000..8a5bc5d60471a
--- /dev/null
+++ b/content/en/docs/tutorials/configuration/configure-java-microservice/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Example: Configuring a Java Microservice"
+weight: 10
+---
+
diff --git a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html
new file mode 100644
index 0000000000000..bb926a1d197d5
--- /dev/null
+++ b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html
@@ -0,0 +1,30 @@
+---
+title: "Interactive Tutorial - Configuring a Java Microservice"
+weight: 20
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ To interact with the Terminal, please use the desktop/tablet version
+
+
+
+
+
+
+
+
+
diff --git a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md
new file mode 100644
index 0000000000000..712bc64d55eae
--- /dev/null
+++ b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md
@@ -0,0 +1,39 @@
+---
+title: "Externalizing config using MicroProfile, ConfigMaps and Secrets"
+content_type: tutorial
+weight: 10
+---
+
+
+
+In this tutorial you will learn how and why to externalize your microservice’s configuration. Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment variables and then consume them using MicroProfile Config.
+
+
+## {{% heading "prerequisites" %}}
+
+### Creating Kubernetes ConfigMaps & Secrets
+There are several ways to set environment variables for a Docker container in Kubernetes, including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In the tutorial, you will learn how to use the latter two for setting your environment variables whose values will be injected into your microservices. One of the benefits for using ConfigMaps and Secrets is that they can be re-used across multiple containers, including being assigned to different environment variables for the different containers.
+
+ConfigMaps are API Objects that store non-confidential key-value pairs. In the Interactive Tutorial you will learn how to use a ConfigMap to store the application's name. For more information regarding ConfigMaps, you can find the documentation [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/).
+
+Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that they're intended for confidential/sensitive information and are stored using Base64 encoding. This makes Secrets the appropriate choice for storing such things as credentials, keys, and tokens; storing credentials is what you'll do in the Interactive Tutorial. For more information on Secrets, you can find the documentation [here](https://kubernetes.io/docs/concepts/configuration/secret/).
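+
+As a minimal sketch of the kind of objects the Interactive Tutorial has you create
+(the names and values here are illustrative, not the tutorial's exact ones):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: app-config                     # hypothetical name
+data:
+  APP_NAME: "my-system-app"            # illustrative value
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: app-credentials                # hypothetical name
+type: Opaque
+data:
+  # base64-encoded value, e.g. the output of: echo -n 'my-password' | base64
+  SYSTEM_APP_PASSWORD: bXktcGFzc3dvcmQ=
+```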
+
+
+### Externalizing Config from Code
+Externalized application configuration is useful because configuration usually changes depending on your environment. In order to accomplish this, we'll use Java's Contexts and Dependency Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of MicroProfile, a set of open Java technologies for developing and deploying cloud-native microservices.
+
+CDI provides a standard dependency injection capability enabling an application to be assembled from collaborating, loosely-coupled beans. MicroProfile Config provides apps and microservices a standard way to obtain config properties from various sources, including the application, runtime, and environment. Based on the source's defined priority, the properties are automatically combined into a single set of properties that the application can access via an API. Together, CDI & MicroProfile will be used in the Interactive Tutorial to retrieve the externally provided properties from the Kubernetes ConfigMaps and Secrets and inject them into your application code.
+
+Many open source frameworks and runtimes implement and support MicroProfile Config. Throughout the interactive tutorial, you'll be using Open Liberty, a flexible open-source Java runtime for building and running cloud-native apps and microservices. However, any MicroProfile compatible runtime could be used instead.
+
+
+## {{% heading "objectives" %}}
+
+* Create a Kubernetes ConfigMap and Secret
+* Inject microservice configuration using MicroProfile Config
+
+
+
+
+## Example: Externalizing config using MicroProfile, ConfigMaps and Secrets
+### [Start Interactive Tutorial](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/)
\ No newline at end of file
diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md
index 901e0063cfbd9..6cc03c2198678 100644
--- a/content/en/docs/tutorials/hello-minikube.md
+++ b/content/en/docs/tutorials/hello-minikube.md
@@ -136,6 +136,9 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/).
The `--type=LoadBalancer` flag indicates that you want to expose your Service
outside of the cluster.
+
+ The application code inside the image `k8s.gcr.io/echoserver` only listens on TCP port 8080. If you used
+ `kubectl expose` to expose a different port, clients could not connect to that other port.
2. View the Service you just created:
@@ -283,4 +286,3 @@ minikube delete
* Learn more about [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/).
* Learn more about [Service objects](/docs/concepts/services-networking/service/).
-
diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
index 13d3d99758338..5ac682d7af020 100644
--- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
@@ -72,7 +72,7 @@
Cluster Diagram
The Master is responsible for managing the cluster. The master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
-
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker or rkt. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.
+
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.
diff --git a/content/en/examples/application/guestbook/redis-master-service.yaml b/content/en/examples/application/guestbook/redis-master-service.yaml
index a484014f1fe3b..65cef2191c493 100644
--- a/content/en/examples/application/guestbook/redis-master-service.yaml
+++ b/content/en/examples/application/guestbook/redis-master-service.yaml
@@ -8,7 +8,8 @@ metadata:
tier: backend
spec:
ports:
- - port: 6379
+ - name: redis
+ port: 6379
targetPort: 6379
selector:
app: redis
diff --git a/content/en/examples/service/networking/example-ingress.yaml b/content/en/examples/service/networking/example-ingress.yaml
new file mode 100644
index 0000000000000..b309d13275105
--- /dev/null
+++ b/content/en/examples/service/networking/example-ingress.yaml
@@ -0,0 +1,18 @@
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: example-ingress
+ annotations:
+ nginx.ingress.kubernetes.io/rewrite-target: /$1
+spec:
+ rules:
+ - host: hello-world.info
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: web
+ port:
+ number: 8080
\ No newline at end of file
diff --git a/content/fr/docs/concepts/services-networking/ingress.md b/content/fr/docs/concepts/services-networking/ingress.md
index f445ae25931f9..81018b3ede49d 100644
--- a/content/fr/docs/concepts/services-networking/ingress.md
+++ b/content/fr/docs/concepts/services-networking/ingress.md
@@ -72,7 +72,7 @@ Assurez-vous de consulter la documentation de votre contrôleur d’Ingress pour
Exemple de ressource Ingress minimale :
```yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
@@ -83,9 +83,12 @@ spec:
- http:
paths:
- path: /testpath
+ pathType: Prefix
backend:
- serviceName: test
- servicePort: 80
+ service:
+ name: test
+ port:
+ number: 80
```
Comme pour toutes les autres ressources Kubernetes, un Ingress (une entrée) a besoin des champs `apiVersion`,` kind` et `metadata`.
@@ -126,14 +129,16 @@ Il existe des concepts Kubernetes qui vous permettent d’exposer un seul servic
```yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
spec:
- backend:
- serviceName: testsvc
- servicePort: 80
+ defaultBackend:
+ service:
+ name: testsvc
+ port:
+ number: 80
```
Si vous le créez en utilisant `kubectl create -f`, vous devriez voir :
@@ -166,7 +171,7 @@ foo.bar.com -> 178.91.123.132 -> / foo service1:4200
ceci nécessitera un Ingress défini comme suit :
```yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: simple-fanout-example
@@ -178,13 +183,19 @@ spec:
http:
paths:
- path: /foo
+ pathType: Prefix
backend:
- serviceName: service1
- servicePort: 4200
+ service:
+ name: service1
+ port:
+ number: 4200
- path: /bar
+ pathType: Prefix
backend:
- serviceName: service2
- servicePort: 8080
+ service:
+ name: service2
+ port:
+ number: 8080
```
Lorsque vous créez l'ingress avec `kubectl create -f`:
@@ -233,7 +244,7 @@ bar.foo.com --| |-> bar.foo.com s2:80
L’Ingress suivant indique au load-balancer de router les requêtes en fonction de [En-tête du hôte](https://tools.ietf.org/html/rfc7230#section-5.4).
```yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: name-virtual-host-ingress
@@ -242,21 +253,29 @@ spec:
- host: foo.bar.com
http:
paths:
- - backend:
- serviceName: service1
- servicePort: 80
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: service1
+ port:
+ number: 80
- host: bar.foo.com
http:
paths:
- - backend:
- serviceName: service2
- servicePort: 80
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: service2
+ port:
+ number: 80
```
Si vous créez une ressource Ingress sans aucun hôte défini dans les règles, tout trafic Web à destination de l'adresse IP de votre contrôleur d'Ingress peut être mis en correspondance sans qu'un hôte virtuel basé sur le nom ne soit requis. Par exemple, la ressource Ingress suivante acheminera le trafic demandé pour `first.bar.com` au `service1` `second.foo.com` au `service2`, et à tout trafic à l'adresse IP sans nom d'hôte défini dans la demande (c'est-à-dire sans en-tête de requête présenté) au `service3`.
```yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: name-virtual-host-ingress
@@ -265,20 +284,32 @@ spec:
- host: first.bar.com
http:
paths:
- - backend:
- serviceName: service1
- servicePort: 80
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: service1
+ port:
+ number: 80
- host: second.foo.com
http:
paths:
- - backend:
- serviceName: service2
- servicePort: 80
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: service2
+ port:
+ number: 80
- http:
paths:
- - backend:
- serviceName: service3
- servicePort: 80
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: service3
+ port:
+ number: 80
```
### TLS
@@ -300,7 +331,7 @@ type: kubernetes.io/tls
Référencer ce secret dans un Ingress indiquera au contrôleur d'ingress de sécuriser le canal du client au load-balancer à l'aide de TLS. Vous devez vous assurer que le secret TLS que vous avez créé provenait d'un certificat contenant un CN pour `sslexample.foo.com`.
```yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-example-ingress
@@ -314,9 +345,12 @@ spec:
http:
paths:
- path: /
+ pathType: Prefix
backend:
- serviceName: service1
- servicePort: 80
+ service:
+ name: service1
+ port:
+ number: 80
```
{{< note >}}
@@ -373,16 +407,22 @@ spec:
http:
paths:
- backend:
- serviceName: s1
- servicePort: 80
+ service:
+ name: s1
+ port:
+ number: 80
path: /foo
+ pathType: Prefix
- host: bar.baz.com
http:
paths:
- backend:
- serviceName: s2
- servicePort: 80
+ service:
+ name: s2
+ port:
+ number: 80
path: /foo
+ pathType: Prefix
..
```
diff --git a/content/fr/docs/concepts/storage/persistent-volumes.md b/content/fr/docs/concepts/storage/persistent-volumes.md
index e1a1701fcbead..1f149bb6e5582 100644
--- a/content/fr/docs/concepts/storage/persistent-volumes.md
+++ b/content/fr/docs/concepts/storage/persistent-volumes.md
@@ -411,7 +411,7 @@ Un PV sans `storageClassName` n'a pas de classe et ne peut être lié qu'à des
Dans le passé, l'annotation `volume.beta.kubernetes.io/storage-class` a été utilisé à la place de l'attribut `storageClassName`.
Cette annotation fonctionne toujours; cependant, il deviendra complètement obsolète dans une future version de Kubernetes.
-### Politique de récupration
+### Politique de récupération
Les politiques de récupération actuelles sont:
diff --git a/content/fr/docs/concepts/workloads/controllers/deployment.md b/content/fr/docs/concepts/workloads/controllers/deployment.md
index e8034cc9adf69..e12c3ffbab257 100644
--- a/content/fr/docs/concepts/workloads/controllers/deployment.md
+++ b/content/fr/docs/concepts/workloads/controllers/deployment.md
@@ -116,7 +116,7 @@ Avant de commencer, assurez-vous que votre cluster Kubernetes est opérationnel.
```shell
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
- deployment.apps/nginx-deployment successfully rolled out
+ deployment "nginx-deployment" successfully rolled out
```
1. Exécutez à nouveau `kubectl get deployments` quelques secondes plus tard.
@@ -223,7 +223,7 @@ Suivez les étapes ci-dessous pour mettre à jour votre déploiement:
ou
```text
- deployment.apps/nginx-deployment successfully rolled out
+ deployment "nginx-deployment" successfully rolled out
```
Obtenez plus de détails sur votre déploiement mis à jour:
@@ -932,7 +932,7 @@ La sortie est similaire à ceci:
```text
Waiting for rollout to finish: 2 of 3 updated replicas are available...
-deployment.apps/nginx-deployment successfully rolled out
+deployment "nginx-deployment" successfully rolled out
$ echo $?
0
```
diff --git a/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
new file mode 100644
index 0000000000000..dc427699f6a55
--- /dev/null
+++ b/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -0,0 +1,299 @@
+---
+title: Configurer les Liveness, Readiness et Startup Probes
+content_template: templates/task
+weight: 110
+---
+
+{{% capture overview %}}
+
+Cette page montre comment configurer les liveness, readiness et startup probes pour les conteneurs.
+
+Le [Kubelet](/docs/admin/kubelet/) utilise les liveness probes pour détecter quand redémarrer un conteneur. Par exemple, les liveness probes peuvent détecter un deadlock, lorsqu'une application est en cours d'exécution mais incapable de traiter les requêtes. Le redémarrage d'un conteneur dans un tel état rend l'application plus disponible malgré les bugs.
+
+Le Kubelet utilise les readiness probes pour savoir quand un conteneur est prêt à accepter le trafic. Un Pod est considéré comme prêt lorsque tous ses conteneurs sont prêts.
+Ce signal sert notamment à contrôler les pods qui sont utilisés comme backends pour les Services. Lorsqu'un Pod n'est pas prêt, il est retiré des équilibreurs de charge des Services.
+
+Le Kubelet utilise les startup probes pour savoir quand l'application d'un conteneur a démarré.
+Si une telle probe est configurée, elle désactive les contrôles de liveness et de readiness jusqu'à ce qu'elle réussisse, en s'assurant que ces probes n'interfèrent pas avec le démarrage de l'application.
+Cela permet d'appliquer des liveness checks aux conteneurs à démarrage lent, en leur évitant de se faire tuer par le Kubelet avant qu'ils ne soient opérationnels.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Définir une commande de liveness
+
+De nombreuses applications fonctionnant sur de longues périodes finissent par passer dans un état défaillant et ne peuvent se rétablir qu'en étant redémarrées. Kubernetes fournit des liveness probes pour détecter et remédier à ces situations.
+
+Dans cet exercice, vous allez créer un Pod qui exécute un conteneur basé sur l'image `k8s.gcr.io/busybox`. Voici le fichier de configuration pour le Pod :
+
+{{< codenew file="pods/probe/exec-liveness.yaml" >}}
+
+Dans le fichier de configuration, vous constatez que le Pod a un seul conteneur.
+Le champ `periodSeconds` spécifie que le Kubelet doit effectuer un check de liveness toutes les 5 secondes. Le champ `initialDelaySeconds` indique au Kubelet qu'il devrait attendre 5 secondes avant d'effectuer la première probe. Pour effectuer une probe, le Kubelet exécute la commande `cat /tmp/healthy` dans le conteneur. Si la commande réussit, elle renvoie 0, et le Kubelet considère que le conteneur est vivant et en bonne santé. Si la commande renvoie une valeur non nulle, le Kubelet tue le conteneur et le redémarre.
+
+Au démarrage, le conteneur exécute cette commande :
+
+```shell
+/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
+```
+
+Pour les 30 premières secondes de la vie du conteneur, il y a un fichier `/tmp/healthy`.
+Donc pendant les 30 premières secondes, la commande `cat /tmp/healthy` renvoie un code de succès. Après 30 secondes, `cat /tmp/healthy` renvoie un code d'échec.
+
+Créez le Pod :
+
+```shell
+kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml
+```
+
+Dans les 30 secondes, visualisez les événements du Pod :
+
+```shell
+kubectl describe pod liveness-exec
+```
+
+La sortie indique qu'aucune liveness probe n'a encore échoué :
+
+```shell
+FirstSeen LastSeen Count From SubobjectPath Type Reason Message
+--------- -------- ----- ---- ------------- -------- ------ -------
+24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
+23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
+23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
+23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
+23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
+```
+
+Après 35 secondes, visualisez à nouveau les événements du Pod :
+
+```shell
+kubectl describe pod liveness-exec
+```
+
+Au bas de la sortie, il y a des messages indiquant que les liveness probes ont échoué, et que les conteneurs ont été tués et recréés.
+
+```shell
+FirstSeen LastSeen Count From SubobjectPath Type Reason Message
+--------- -------- ----- ---- ------------- -------- ------ -------
+37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
+36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
+36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
+36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
+36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
+2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
+```
+
+Attendez encore 30 secondes et vérifiez que le conteneur a été redémarré :
+
+```shell
+kubectl get pod liveness-exec
+```
+
+La sortie montre que `RESTARTS` a été incrémenté :
+
+```shell
+NAME READY STATUS RESTARTS AGE
+liveness-exec 1/1 Running 1 1m
+```
+
+## Définir une requête HTTP de liveness
+
+Un autre type de liveness probe utilise une requête GET HTTP. Voici la configuration
+d'un Pod qui fait fonctionner un conteneur basé sur l'image `k8s.gcr.io/liveness`.
+
+{{< codenew file="pods/probe/http-liveness.yaml" >}}
+
+Dans le fichier de configuration, vous pouvez voir que le Pod a un seul conteneur.
+Le champ `periodSeconds` spécifie que le Kubelet doit effectuer une liveness probe toutes les 3 secondes. Le champ `initialDelaySeconds` indique au Kubelet qu'il devrait attendre 3 secondes avant d'effectuer la première probe. Pour effectuer une probe, le Kubelet envoie une requête HTTP GET au serveur qui s'exécute dans le conteneur et écoute sur le port 8080. Si le handler du chemin `/healthz` du serveur renvoie un code de succès, le Kubelet considère que le conteneur est vivant et en bonne santé. Si le handler renvoie un code d'erreur, le Kubelet tue le conteneur et le redémarre.
+
+Tout code supérieur ou égal à 200 et inférieur à 400 indique un succès. Tout autre code indique un échec.
+
+Vous pouvez voir le code source du serveur dans
+[server.go](https://github.com/kubernetes/kubernetes/blob/master/test/images/agnhost/liveness/server.go).
+
+Pendant les 10 premières secondes où le conteneur est en vie, le handler `/healthz` renvoie un statut de 200. Après cela, le handler renvoie un statut de 500.
+
+```go
+http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
+ duration := time.Now().Sub(started)
+ if duration.Seconds() > 10 {
+ w.WriteHeader(500)
+ w.Write([]byte(fmt.Sprintf("erreur: %v", duration.Seconds())))
+ } else {
+ w.WriteHeader(200)
+ w.Write([]byte("ok"))
+ }
+})
+```
+
+Le Kubelet commence à effectuer des contrôles de santé 3 secondes après le démarrage du conteneur.
+Ainsi, les premiers contrôles de santé seront réussis. Mais après 10 secondes, les contrôles de santé échoueront, et le Kubelet tuera et redémarrera le conteneur.
+
+Pour essayer le HTTP liveness check, créez un Pod :
+
+```shell
+kubectl apply -f https://k8s.io/examples/pods/probe/http-liveness.yaml
+```
+
+Après 10 secondes, visualisez les événements du Pod pour vérifier que les liveness probes ont échoué et que le conteneur a été redémarré :
+
+```shell
+kubectl describe pod liveness-http
+```
+
+Jusqu'à la version v1.13 incluse, si la variable d'environnement `http_proxy` (ou `HTTP_PROXY`) est définie sur le nœud où s'exécute un Pod, la HTTP liveness probe utilise ce proxy.
+Dans les versions postérieures à la v1.13, les paramètres de proxy HTTP locaux n'affectent pas la HTTP liveness probe.
+
+## Définir une TCP liveness probe
+
+Un troisième type de liveness probe utilise un TCP Socket. Avec cette configuration, le Kubelet tentera d'ouvrir un socket vers votre conteneur sur le port spécifié.
+S'il arrive à établir une connexion, le conteneur est considéré comme étant en bonne santé ; s'il n'y arrive pas, c'est un échec.
+
+{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}
+
+Comme vous le voyez, la configuration pour un check TCP est assez similaire à celle d'un check HTTP.
+Cet exemple utilise à la fois des readiness et des liveness probes. Le Kubelet effectuera la première readiness probe 5 secondes après le démarrage du conteneur. Il tentera de se connecter au conteneur `goproxy` sur le port 8080. Si la probe réussit, le conteneur sera marqué comme prêt. Le Kubelet continuera à effectuer ce check toutes les 10 secondes.
+
+En plus de la readiness probe, cette configuration comprend une liveness probe.
+Le Kubelet effectuera la première liveness probe 15 secondes après que le conteneur démarre. Tout comme la readiness probe, celle-ci tentera de se connecter au conteneur de `goproxy` sur le port 8080. Si la liveness probe échoue, le conteneur sera redémarré.
+
+Pour essayer la TCP liveness check, créez un Pod :
+
+```shell
+kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml
+```
+
+Après 15 secondes, visualisez les événements de Pod pour vérifier les liveness probes :
+
+```shell
+kubectl describe pod goproxy
+```
+
+## Utilisation d'un port nommé
+
+Vous pouvez utiliser un [ContainerPort](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerport-v1-core) nommé pour les HTTP ou TCP liveness probes :
+
+```yaml
+ports:
+- name: liveness-port
+ containerPort: 8080
+ hostPort: 8080
+
+livenessProbe:
+ httpGet:
+ path: /healthz
+ port: liveness-port
+```
+
+## Protéger les conteneurs à démarrage lent avec des startup probes {#define-startup-probes}
+
+Parfois, vous devez faire face à des applications legacy qui peuvent nécessiter un temps de démarrage supplémentaire lors de leur première initialisation.
+Dans de telles situations, il peut être compliqué de régler les paramètres de la liveness probe sans compromettre la réponse rapide aux blocages qui ont motivé une telle probe.
+L'astuce consiste à configurer une startup probe avec la même commande, ou le même check HTTP ou TCP, avec un `failureThreshold * periodSeconds` assez long pour couvrir le pire scénario de temps de démarrage.
+
+Ainsi, l'exemple précédent deviendrait :
+
+```yaml
+ports:
+- name: liveness-port
+ containerPort: 8080
+ hostPort: 8080
+
+livenessProbe:
+ httpGet:
+ path: /healthz
+ port: liveness-port
+ failureThreshold: 1
+ periodSeconds: 10
+
+startupProbe:
+ httpGet:
+ path: /healthz
+ port: liveness-port
+ failureThreshold: 30
+ periodSeconds: 10
+```
+
+Grâce à la startup probe, l'application aura un maximum de 5 minutes (30 * 10 = 300s) pour terminer son démarrage.
+Une fois que la startup probe a réussi, la liveness probe prend le relais pour fournir une réponse rapide aux blocages de conteneurs.
+Si la startup probe ne réussit jamais, le conteneur est tué après 300s puis soumis à la `restartPolicy` (politique de redémarrage) du Pod.
+
+## Définir les readiness probes
+
+Parfois, les applications sont temporairement incapables de servir le trafic.
+Par exemple, une application peut avoir besoin de charger de grandes quantités de données ou des fichiers de configuration pendant le démarrage, ou elle peut dépendre de services externes après le démarrage.
+Dans ces cas, vous ne voulez pas tuer l'application, mais vous ne voulez pas non plus lui envoyer de requêtes. Kubernetes fournit des readiness probes pour détecter et atténuer ces situations. Un Pod dont les conteneurs signalent qu'ils ne sont pas prêts ne reçoit pas de trafic via les Services Kubernetes.
+
+{{< note >}}
+Les readiness probes s'appliquent au conteneur pendant tout son cycle de vie.
+{{< /note >}}
+
+Les readiness probes sont configurées de la même façon que les liveness probes. La seule différence est que vous utilisez le champ `readinessProbe` au lieu du champ `livenessProbe`.
+
+```yaml
+readinessProbe:
+ exec:
+ command:
+ - cat
+ - /tmp/healthy
+ initialDelaySeconds: 5
+ periodSeconds: 5
+```
+
+La configuration des readiness probes HTTP et TCP reste également identique à celle des liveness probes.
+
+Les readiness et liveness probes peuvent être utilisées en parallèle pour le même conteneur.
+L'utilisation des deux peut garantir que le trafic n'atteigne pas un conteneur qui n'est pas prêt et que les conteneurs soient redémarrés en cas de défaillance.
+
+## Configurer les Probes
+
+{{< comment >}}
+Éventuellement, une partie de cette section pourrait être déplacée vers un sujet conceptuel.
+{{< /comment >}}
+
+Les [Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) ont un certain nombre de champs que vous pouvez utiliser pour contrôler plus précisément le comportement des liveness et readiness probes :
+
+* `initialDelaySeconds`: Nombre de secondes après le démarrage du conteneur avant que les liveness et readiness probes ne soient lancées. La valeur par défaut est de 0 seconde. La valeur minimale est 0.
+* `periodSeconds`: La fréquence (en secondes) à laquelle la probe doit être effectuée. La valeur par défaut est de 10 secondes. La valeur minimale est de 1.
+* `timeoutSeconds`: Nombre de secondes après lequel la probe expire (timeout). La valeur par défaut est de 1 seconde. La valeur minimale est de 1.
+* `successThreshold`: Le minimum de succès consécutifs pour que la probe soit considérée comme réussie après avoir échoué. La valeur par défaut est 1. Doit être 1 pour la liveness probe. La valeur minimale est de 1.
+* `failureThreshold`: Quand un Pod démarre et que la probe échoue, Kubernetes réessaiera `failureThreshold` fois avant d'abandonner. Abandonner, dans le cas d'une liveness probe, signifie que le conteneur sera redémarré. Dans le cas d'une readiness probe, le Pod sera marqué Unready.
+La valeur par défaut est 3, la valeur minimale est 1.
+
+Les [HTTP probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)
+ont des champs supplémentaires qui peuvent être définis sur `httpGet` (voir l'esquisse après cette liste) :
+
+* `host`: Nom de l'hôte auquel se connecter ; par défaut, l'IP du Pod. Vous préférerez probablement définir l'en-tête "Host" dans `httpHeaders` à la place.
+* `scheme`: Schéma à utiliser pour se connecter à l'hôte (HTTP ou HTTPS). La valeur par défaut est HTTP.
+* `path`: Chemin d'accès sur le serveur HTTP.
+* `httpHeaders`: En-têtes personnalisés à définir dans la requête. HTTP permet des en-têtes répétés.
+* `port`: Nom ou numéro du port auquel accéder sur le conteneur. Le numéro doit être compris entre 1 et 65535.
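+
+À titre d'illustration uniquement, voici une esquisse hypothétique (les valeurs ci-dessous ne proviennent pas d'un exemple officiel) d'une liveness probe HTTP combinant les champs de temporisation et les champs `httpGet` décrits ci-dessus :
+
+```yaml
+livenessProbe:
+  httpGet:
+    path: /healthz            # chemin d'accès sur le serveur HTTP
+    port: 8080                # numéro (ou nom) du port du conteneur
+    scheme: HTTP              # HTTP ou HTTPS
+    httpHeaders:
+    - name: Host              # à préférer au champ `host` pour les hôtes virtuels
+      value: example.com      # valeur hypothétique
+  initialDelaySeconds: 10     # attendre 10 secondes avant la première probe
+  periodSeconds: 5            # effectuer la probe toutes les 5 secondes
+  timeoutSeconds: 2           # échec si aucune réponse après 2 secondes
+  failureThreshold: 3         # le conteneur est redémarré après 3 échecs consécutifs
+```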
+
+Pour une probe HTTP, le Kubelet envoie une requête HTTP au chemin et au port spécifiés pour effectuer la vérification. Le Kubelet envoie la probe à l'adresse IP du Pod, à moins que l'adresse ne soit remplacée par le champ optionnel `host` dans `httpGet`. Si le champ `scheme` est mis à `HTTPS`, le Kubelet envoie une requête HTTPS en ignorant la vérification du certificat. Dans la plupart des scénarios, vous ne voulez pas définir le champ `host`.
+Voici un scénario où vous devriez le définir. Supposons que le conteneur écoute sur 127.0.0.1 et que le champ `hostNetwork` du Pod a la valeur true. Alors `host`, sous `httpGet`, devrait être défini à 127.0.0.1. Si votre Pod repose sur des hôtes virtuels, ce qui est probablement le cas le plus courant, vous ne devriez pas utiliser `host`, mais plutôt définir l'en-tête `Host` dans `httpHeaders`.
+
+Le Kubelet établit la connexion de la probe depuis le nœud, et non depuis le Pod, ce qui signifie que vous ne pouvez pas utiliser un nom de Service dans le paramètre `host`, puisque le Kubelet est incapable de le résoudre.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* En savoir plus sur les
+[Probes des conteneurs](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
+
+### Référence
+
+* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
+* [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
+* [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
+
+{{% /capture %}}
+
+
diff --git a/content/fr/docs/tasks/configure-pod-container/share-process-namespace.md b/content/fr/docs/tasks/configure-pod-container/share-process-namespace.md
new file mode 100644
index 0000000000000..f55431571d8fc
--- /dev/null
+++ b/content/fr/docs/tasks/configure-pod-container/share-process-namespace.md
@@ -0,0 +1,102 @@
+---
+title: Partager l'espace de nommage des processus entre les conteneurs d'un Pod
+min-kubernetes-server-version: v1.10
+content_template: templates/task
+weight: 160
+---
+
+{{% capture overview %}}
+
+{{< feature-state state="stable" for_k8s_version="v1.17" >}}
+
+Cette page montre comment configurer le partage de l'espace de nommage des processus pour un Pod. Lorsque ce partage est activé, les processus d'un conteneur sont visibles pour tous les autres conteneurs de ce Pod.
+
+Vous pouvez utiliser cette fonctionnalité pour configurer des conteneurs coopérants, comme un conteneur sidecar de gestion des journaux, ou pour dépanner des images de conteneurs qui n'incluent pas d'utilitaires de débogage comme un shell.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Configurer un Pod
+
+Le partage de l'espace de nommage du processus est activé en utilisant le champ `shareProcessNamespace` de `v1.PodSpec`. Par exemple:
+
+{{< codenew file="pods/share-process-namespace.yaml" >}}
+
+1. Créez le pod `nginx` sur votre cluster :
+
+ ```shell
+ kubectl apply -f https://k8s.io/examples/pods/share-process-namespace.yaml
+ ```
+
+1. Attachez-vous au conteneur `shell` et lancez `ps` :
+
+ ```shell
+ kubectl attach -it nginx -c shell
+ ```
+
+ Si vous ne voyez pas d'invite de commande, appuyez sur la touche Entrée.
+
+ ```
+ / # ps ax
+ PID USER TIME COMMAND
+ 1 root 0:00 /pause
+ 8 root 0:00 nginx: master process nginx -g daemon off;
+ 14 101 0:00 nginx: worker process
+ 15 root 0:00 sh
+ 21 root 0:00 ps ax
+ ```
+
+Vous pouvez envoyer des signaux aux processus des autres conteneurs. Par exemple, envoyez `SIGHUP` à
+nginx pour redémarrer le processus worker. Cela nécessite la capacité `SYS_PTRACE`.
+
+```
+/ # kill -HUP 8
+/ # ps ax
+PID USER TIME COMMAND
+ 1 root 0:00 /pause
+ 8 root 0:00 nginx: master process nginx -g daemon off;
+ 15 root 0:00 sh
+ 22 101 0:00 nginx: worker process
+ 23 root 0:00 ps ax
+```
+
+Il est même possible d'accéder au système de fichiers d'un autre conteneur en utilisant le lien `/proc/$pid/root`.
+
+```
+/ # head /proc/8/root/etc/nginx/nginx.conf
+
+user nginx;
+worker_processes 1;
+
+error_log /var/log/nginx/error.log warn;
+pid /var/run/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+```
+
+{{% /capture %}}
+
+{{% capture discussion %}}
+
+## Comprendre le partage de l'espace de nommage des processus
+
+Les Pods partagent de nombreuses ressources, il est donc logique qu'ils partagent également l'espace de nommage des processus. Certaines images de conteneurs s'attendent toutefois à être isolées des autres conteneurs. Il est donc important de comprendre les différences suivantes :
+
+1. **Le processus du conteneur n'a plus le PID 1.** Certaines images de conteneurs refusent de démarrer sans PID 1 (par exemple, les conteneurs utilisant `systemd`) ou exécutent des commandes comme `kill -HUP 1` pour signaler le processus du conteneur. Dans les Pods avec un espace de nommage des processus partagé, `kill -HUP 1` enverra le signal à la sandbox du Pod (`/pause` dans l'exemple ci-dessus).
+
+1. **Les processus sont visibles par les autres conteneurs du Pod.** Cela inclut toutes les informations visibles dans `/proc`, comme les mots de passe passés en argument ou en variable d'environnement. Celles-ci ne sont protégées que par les permissions Unix classiques.
+
+1. **Les systèmes de fichiers des conteneurs sont visibles par les autres conteneurs du pod à travers le lien `/proc/$pid/root`.** Cela rend le débogage plus facile, mais cela signifie aussi que les secrets du système de fichiers ne sont protégés que par les permissions du système de fichiers.
+
+{{% /capture %}}
+
+
diff --git a/content/fr/examples/pods/probe/exec-liveness.yaml b/content/fr/examples/pods/probe/exec-liveness.yaml
new file mode 100644
index 0000000000000..07bf75f85c6f3
--- /dev/null
+++ b/content/fr/examples/pods/probe/exec-liveness.yaml
@@ -0,0 +1,21 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ labels:
+ test: liveness
+ name: liveness-exec
+spec:
+ containers:
+ - name: liveness
+ image: k8s.gcr.io/busybox
+ args:
+ - /bin/sh
+ - -c
+ - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
+ livenessProbe:
+ exec:
+ command:
+ - cat
+ - /tmp/healthy
+ initialDelaySeconds: 5
+ periodSeconds: 5
diff --git a/content/fr/examples/pods/probe/http-liveness.yaml b/content/fr/examples/pods/probe/http-liveness.yaml
new file mode 100644
index 0000000000000..670af18399e20
--- /dev/null
+++ b/content/fr/examples/pods/probe/http-liveness.yaml
@@ -0,0 +1,21 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ labels:
+ test: liveness
+ name: liveness-http
+spec:
+ containers:
+ - name: liveness
+ image: k8s.gcr.io/liveness
+ args:
+ - /server
+ livenessProbe:
+ httpGet:
+ path: /healthz
+ port: 8080
+ httpHeaders:
+ - name: Custom-Header
+ value: Awesome
+ initialDelaySeconds: 3
+ periodSeconds: 3
diff --git a/content/fr/examples/pods/probe/tcp-liveness-readiness.yaml b/content/fr/examples/pods/probe/tcp-liveness-readiness.yaml
new file mode 100644
index 0000000000000..08fb77ff0f58c
--- /dev/null
+++ b/content/fr/examples/pods/probe/tcp-liveness-readiness.yaml
@@ -0,0 +1,22 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: goproxy
+ labels:
+ app: goproxy
+spec:
+ containers:
+ - name: goproxy
+ image: k8s.gcr.io/goproxy:0.1
+ ports:
+ - containerPort: 8080
+ readinessProbe:
+ tcpSocket:
+ port: 8080
+ initialDelaySeconds: 5
+ periodSeconds: 10
+ livenessProbe:
+ tcpSocket:
+ port: 8080
+ initialDelaySeconds: 15
+ periodSeconds: 20
diff --git a/content/fr/examples/pods/share-process-namespace.yaml b/content/fr/examples/pods/share-process-namespace.yaml
new file mode 100644
index 0000000000000..af812732a247a
--- /dev/null
+++ b/content/fr/examples/pods/share-process-namespace.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx
+spec:
+ shareProcessNamespace: true
+ containers:
+ - name: nginx
+ image: nginx
+ - name: shell
+ image: busybox
+ securityContext:
+ capabilities:
+ add:
+ - SYS_PTRACE
+ stdin: true
+ tty: true
diff --git a/content/id/docs/concepts/workloads/controllers/deployment.md b/content/id/docs/concepts/workloads/controllers/deployment.md
index 8eae6c579feec..b4e728cd9283f 100644
--- a/content/id/docs/concepts/workloads/controllers/deployment.md
+++ b/content/id/docs/concepts/workloads/controllers/deployment.md
@@ -96,7 +96,7 @@ Dalam contoh ini:
3. Untuk melihat status rilis Deployment, jalankan `kubectl rollout status deployment.v1.apps/nginx-deployment`. Keluaran akan tampil seperti berikut:
```shell
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
- deployment.apps/nginx-deployment successfully rolled out
+ deployment "nginx-deployment" successfully rolled out
```
4. Jalankan `kubectl get deployments` lagi beberapa saat kemudian. Keluaran akan tampil seperti berikut:
@@ -179,7 +179,7 @@ Ikuti langkah-langkah berikut untuk membarui Deployment:
```
atau
```
- deployment.apps/nginx-deployment successfully rolled out
+ deployment "nginx-deployment" successfully rolled out
```
Untuk menampilkan detail lain dari Deployment yang terbaru:
@@ -826,7 +826,7 @@ kubectl rollout status deployment.v1.apps/nginx-deployment
Keluaran akan tampil seperti berikut:
```
Waiting for rollout to finish: 2 of 3 updated replicas are available...
-deployment.apps/nginx-deployment successfully rolled out
+deployment "nginx-deployment" successfully rolled out
$ echo $?
0
```
diff --git a/content/id/docs/reference/glossary/deployment.md b/content/id/docs/reference/glossary/deployment.md
new file mode 100644
index 0000000000000..6d21100fbb64c
--- /dev/null
+++ b/content/id/docs/reference/glossary/deployment.md
@@ -0,0 +1,18 @@
+---
+title: Deployment
+id: deployment
+date: 2018-04-12
+full_link: /id/docs/concepts/workloads/controllers/deployment/
+short_description: >
+ Mengelola aplikasi yang direplikasi di dalam klastermu.
+aka:
+tags:
+- fundamental
+- core-object
+- workload
+---
+Sebuah objek API yang mengelola aplikasi yang direplikasi, biasanya dengan menjalankan Pod tanpa keadaan (_state_) lokal.
+
+
+
+Setiap replika direpresentasikan oleh sebuah {{< glossary_tooltip term_id="pod" >}}, dan Pod tersebut didistribusikan di antara {{< glossary_tooltip term_id="node" >}} dari suatu klaster. Untuk beban kerja yang membutuhkan keadaan lokal, pertimbangkan untuk menggunakan {{< glossary_tooltip term_id="StatefulSet" >}}.
diff --git a/content/id/docs/reference/glossary/device-plugin.md b/content/id/docs/reference/glossary/device-plugin.md
new file mode 100644
index 0000000000000..1318f448124e7
--- /dev/null
+++ b/content/id/docs/reference/glossary/device-plugin.md
@@ -0,0 +1,20 @@
+---
+title: Pugasan Peranti
+id: device-plugin
+date: 2019-02-02
+full_link: /id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/
+short_description: >
+ Ekstensi perangkat lunak untuk memungkinkan Pod mengakses peranti yang membutuhkan inisialisasi atau penyiapan khusus.
+aka:
+- Device Plugin
+tags:
+- fundamental
+- extension
+---
+Pugasan peranti berjalan pada {{< glossary_tooltip term_id="node" >}} pekerja dan menyediakan akses ke sumber daya untuk {{< glossary_tooltip term_id="pod" >}}, seperti perangkat keras lokal, yang membutuhkan langkah inisialisasi atau penyiapan khusus.
+
+
+
+Pugasan peranti menawarkan sumber daya ke {{< glossary_tooltip term_id="kubelet" text="kubelet" >}}, sehingga beban kerja Pod dapat mengakses fitur perangkat keras yang berhubungan dengan Node di mana Pod tersebut berjalan. Kamu dapat menggelar sebuah pugasan peranti sebagai sebuah {{< glossary_tooltip term_id="daemonset" >}}, atau menginstal perangkat lunak pugasan peranti secara langsung pada setiap Node target.
+
+Lihat [Pugasan Peranti](/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) untuk informasi lebih lanjut.
diff --git a/content/id/docs/reference/glossary/disruption.md b/content/id/docs/reference/glossary/disruption.md
new file mode 100644
index 0000000000000..fb4898f397e76
--- /dev/null
+++ b/content/id/docs/reference/glossary/disruption.md
@@ -0,0 +1,18 @@
+---
+title: Disrupsi
+id: disruption
+date: 2019-09-10
+full_link: /id/docs/concepts/workloads/pods/disruptions/
+short_description: >
+ Peristiwa yang menyebabkan hilangnya Pod
+aka:
+tags:
+- fundamental
+---
+Disrupsi merupakan kejadian yang menyebabkan hilangnya satu atau beberapa {{< glossary_tooltip term_id="pod" >}}. Suatu disrupsi memiliki konsekuensi terhadap sumber daya beban kerja, seperti {{< glossary_tooltip term_id="deployment" >}}, yang bergantung pada Pod yang terpengaruh.
+
+
+
+Jika kamu, sebagai operator klaster, menghancurkan sebuah Pod milik suatu aplikasi, maka hal ini dalam Kubernetes dikenal sebagai disrupsi disengaja (_voluntary disruption_). Jika sebuah Pod menghilang karena kegagalan Node, atau pemadaman yang mempengaruhi zona kegagalan yang lebih luas, maka dalam Kubernetes dikenal dengan istilah disrupsi tidak disengaja (_involuntary disruption_).
+
+Lihat [Disrupsi](/id/docs/concepts/workloads/pods/disruptions/) untuk informasi lebih lanjut.
diff --git a/content/id/docs/reference/glossary/docker.md b/content/id/docs/reference/glossary/docker.md
new file mode 100644
index 0000000000000..12e40468b621f
--- /dev/null
+++ b/content/id/docs/reference/glossary/docker.md
@@ -0,0 +1,16 @@
+---
+title: Docker
+id: docker
+date: 2018-04-12
+full_link: https://docs.docker.com/engine/
+short_description: >
+ Docker merupakan suatu teknologi perangkat lunak yang menyediakan virtualisasi pada level sistem operasi yang juga dikenal sebagai Container.
+aka:
+tags:
+- fundamental
+---
+Docker (secara spesifik, Docker Engine) merupakan suatu teknologi perangkat lunak yang menyediakan virtualisasi pada level sistem operasi yang juga dikenal sebagai {{< glossary_tooltip term_id="container" >}}.
+
+
+
+Docker menggunakan fitur isolasi sumber daya pada kernel Linux seperti cgroup dan _namespace_, dan [UnionFS](https://docs.docker.com/get-started/overview/#union-file-systems) seperti OverlayFS dan lainnya untuk memungkinkan masing-masing Container dijalankan pada satu instans Linux, menghindari beban tambahan (_overhead_) saat memulai dan menjalankan VM.
diff --git a/content/id/docs/reference/glossary/ephemeral-container.md b/content/id/docs/reference/glossary/ephemeral-container.md
new file mode 100644
index 0000000000000..bed099180769d
--- /dev/null
+++ b/content/id/docs/reference/glossary/ephemeral-container.md
@@ -0,0 +1,17 @@
+---
+title: Container Sementara
+id: ephemeral-container
+date: 2019-08-26
+full_link: /id/docs/concepts/workloads/pods/ephemeral-containers/
+short_description: >
+ Jenis tipe Container yang dapat kamu jalankan sementara di dalam sebuah Pod.
+aka:
+- Ephemeral Container
+tags:
+- fundamental
+---
+Jenis tipe {{< glossary_tooltip term_id="container" >}} yang dapat kamu jalankan sementara di dalam sebuah {{< glossary_tooltip term_id="pod" >}}.
+
+
+
+Jika kamu ingin menyelidiki sebuah Pod yang bermasalah, kamu dapat menambahkan Container sementara ke Pod tersebut dan menjalankan diagnosis. Container sementara tidak memiliki jaminan sumber daya atau penjadwalan, dan kamu tidak boleh menggunakannya untuk menjalankan bagian mana pun dari beban kerja.
diff --git a/content/id/docs/reference/glossary/extensions.md b/content/id/docs/reference/glossary/extensions.md
new file mode 100644
index 0000000000000..2e8e73d55325a
--- /dev/null
+++ b/content/id/docs/reference/glossary/extensions.md
@@ -0,0 +1,17 @@
+---
+title: Ekstensi
+id: Extensions
+date: 2019-02-01
+full_link: /id/docs/concepts/extend-kubernetes/extend-cluster/#perluasan
+short_description: >
+ Ekstensi adalah komponen perangkat lunak yang memperluas dan terintegrasi secara mendalam dengan Kubernetes untuk mendukung perangkat keras baru.
+aka:
+tags:
+- fundamental
+- extension
+---
+Ekstensi adalah komponen perangkat lunak yang memperluas dan terintegrasi secara mendalam dengan Kubernetes untuk mendukung perangkat keras baru.
+
+
+
+Sebagian besar admin klaster akan menggunakan instans Kubernetes yang dihoskan (_hosted_) atau didistribusikan. Akibatnya, hampir semua pengguna Kubernetes perlu menginstal [ekstensi](/id/docs/concepts/extend-kubernetes/extend-cluster/#perluasan) dan sedikit pengguna yang perlu membuat ekstensi baru.
diff --git a/content/id/docs/reference/glossary/image.md b/content/id/docs/reference/glossary/image.md
index bc26114e60bf4..d0c09fc978dc5 100644
--- a/content/id/docs/reference/glossary/image.md
+++ b/content/id/docs/reference/glossary/image.md
@@ -4,15 +4,14 @@ id: image
date: 2019-04-24
full_link:
short_description: >
- Instans yang disimpan dari sebuah kontainer yang mengandung seperangkat perangkat lunak yang dibutuhkan untuk menjalankan sebuah aplikasi.
+ Instans yang disimpan dari sebuah Container yang memuat seperangkat perangkat lunak yang dibutuhkan untuk menjalankan sebuah aplikasi.
-aka:
+aka:
tags:
- fundamental
---
- Instans yang disimpan dari sebuah kontainer yang mengandung seperangkat perangkat lunak yang dibutuhkan untuk menjalankan sebuah aplikasi.
+Instans yang disimpan dari sebuah Container yang memuat seperangkat perangkat lunak yang dibutuhkan untuk menjalankan sebuah aplikasi.
-
-
-Sebuah mekanisme untuk mengemas perangkat lunak yang mengizinkan perangkat lunak tersebut untuk disimpan di dalam registri kontainer, di-_pull_ kedalam filesystem lokal, dan dijalankan sebagai suatu aplikasi. Meta data yang dimasukkan mengindikasikan _executable_ apa sajakah yang perlu dijalanmkan, siapa yang membuat _executable_ tersebut, dan informasi lainnya.
+
+Sebuah mekanisme untuk mengemas perangkat lunak yang memungkinkan perangkat lunak tersebut untuk disimpan di dalam register Container, ditarik ke dalam sistem lokal, dan dijalankan sebagai suatu aplikasi. Metadata disertakan di dalam _image_ yang mengindikasikan _executable_ apa saja yang perlu dijalankan, siapa yang membuatnya, dan informasi lainnya.
diff --git a/content/id/docs/reference/glossary/init-container.md b/content/id/docs/reference/glossary/init-container.md
new file mode 100644
index 0000000000000..3300504d9e245
--- /dev/null
+++ b/content/id/docs/reference/glossary/init-container.md
@@ -0,0 +1,17 @@
+---
+title: Container Inisialisasi
+id: init-container
+date: 2018-04-12
+full_link:
+short_description: >
+ Satu atau beberapa Container inisialisasi yang harus berjalan hingga selesai sebelum Container aplikasi apapun dijalankan.
+aka:
+- Init Container
+tags:
+- fundamental
+---
+Satu atau beberapa {{< glossary_tooltip term_id="container" >}} inisialisasi yang harus berjalan hingga selesai sebelum Container aplikasi apapun dijalankan.
+
+
+
+Container inisialisasi mirip seperti Container aplikasi biasa, dengan satu perbedaan: Container inisialisasi harus berjalan sampai selesai sebelum Container aplikasi lainnya dijalankan. Container inisialisasi dijalankan secara seri: setiap Container inisialisasi harus berjalan sampai selesai sebelum Container inisialisasi berikutnya dijalankan.
diff --git a/content/id/docs/reference/glossary/job.md b/content/id/docs/reference/glossary/job.md
new file mode 100644
index 0000000000000..2cc72f407338a
--- /dev/null
+++ b/content/id/docs/reference/glossary/job.md
@@ -0,0 +1,18 @@
+---
+title: Job
+id: job
+date: 2018-04-12
+full_link: /docs/concepts/workloads/controllers/job/
+short_description: >
+ Tugas terbatas atau bertumpuk (_batch_) yang berjalan sampai selesai.
+aka:
+tags:
+- fundamental
+- core-object
+- workload
+---
+Tugas terbatas atau bertumpuk (_batch_) yang berjalan sampai selesai.
+
+
+
+Membuat satu atau beberapa objek {{< glossary_tooltip term_id="pod" >}} dan memastikan bahwa sejumlah objek tersebut berhasil dihentikan. Saat Pod berhasil diselesaikan (_complete_), maka Job melacak keberhasilan penyelesaian tersebut.
diff --git a/content/id/docs/reference/glossary/kube-apiserver.md b/content/id/docs/reference/glossary/kube-apiserver.md
index fda5ec39d5841..a1b04754bd746 100644
--- a/content/id/docs/reference/glossary/kube-apiserver.md
+++ b/content/id/docs/reference/glossary/kube-apiserver.md
@@ -4,16 +4,15 @@ id: kube-apiserver
date: 2019-04-21
full_link: /docs/reference/generated/kube-apiserver/
short_description: >
- Komponen di master yang mengekspos API Kubernetes. Merupakan front-end dari kontrol plane Kubernetes.
+ Komponen _control plane_ yang mengekspos API Kubernetes. Merupakan _front-end_ dari _control plane_ Kubernetes.
aka:
tags:
- architecture
- fundamental
---
- Komponen di master yang mengekspos API Kubernetes. Merupakan front-end dari kontrol plane Kubernetes.
+Komponen _control plane_ yang mengekspos API Kubernetes. Merupakan _front-end_ dari _control plane_ Kubernetes.
-Komponen ini didesain agar dapat di-scale secara horizontal. Lihat [Membangun Klaster HA](/docs/admin/high-availability/).
-
+Komponen ini didesain agar dapat diskalakan secara horizontal. Lihat [Membangun Klaster HA](/docs/admin/high-availability/).
diff --git a/content/id/docs/tasks/administer-cluster/cluster-management.md b/content/id/docs/tasks/administer-cluster/cluster-management.md
new file mode 100644
index 0000000000000..0473dde9f3d8b
--- /dev/null
+++ b/content/id/docs/tasks/administer-cluster/cluster-management.md
@@ -0,0 +1,221 @@
+---
+title: Manajemen Klaster
+content_type: concept
+---
+
+
+
+Dokumen ini menjelaskan beberapa topik yang terkait dengan siklus hidup sebuah klaster: membuat klaster baru,
+memperbarui Node _control plane_ dan Node pekerja dari klaster kamu,
+melakukan pemeliharaan Node (misalnya pembaruan kernel), dan meningkatkan versi API Kubernetes dari
+klaster yang berjalan.
+
+
+
+
+## Membuat dan mengonfigurasi klaster
+
+Untuk menginstal Kubernetes dalam sekumpulan mesin, konsultasikan dengan salah satu [Panduan Memulai](/id/docs/setup) tergantung dengan lingkungan kamu.
+
+## Memperbarui klaster
+
+Status saat ini pembaruan klaster bergantung pada penyedia, dan beberapa rilis yang mungkin memerlukan perhatian khusus saat memperbaruinya. Direkomendasikan agar admin membaca [Catatan Rilis](https://git.k8s.io/kubernetes/CHANGELOG/README.md), serta catatan khusus pembaruan versi sebelum memperbarui klaster mereka.
+
+### Memperbarui klaster Azure Kubernetes Service (AKS)
+
+Azure Kubernetes Service memungkinkan pembaruan layanan mandiri yang mudah dari _control plane_ dan Node pada klaster kamu. Prosesnya saat ini
+dimulai oleh pengguna dan dijelaskan dalam [dokumentasi Azure AKS](https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster).
+
+### Memperbarui klaster Google Compute Engine
+
+Google Compute Engine Open Source (GCE-OSS) mendukung pembaruan _control plane_ dengan menghapus dan
+membuat ulang _control plane_, sambil mempertahankan _Persistent Disk_ (PD) yang sama untuk memastikan bahwa
+data tetap tersimpan di antara pembaruan.
+
+Pembaruan Node untuk GCE menggunakan [grup _instance_ yang di-_manage_](https://cloud.google.com/compute/docs/instance-groups/), dimana setiap Node
+dihancurkan secara berurutan dan kemudian dibuat ulang dengan perangkat lunak baru. Semua Pod yang berjalan di Node tersebut harus
+dikontrol oleh pengontrol replikasi (_Replication Controller_), atau dibuat ulang secara manual setelah peluncuran.
+
+Pembaruan versi pada klaster open source Google Compute Engine (GCE) dikontrol oleh skrip `cluster/gce/upgrade.sh`.
+
+Dapatkan penggunaan dengan menjalankan `cluster/gce/upgrade.sh -h`.
+
+Misalnya, untuk meningkatkan hanya _control plane_ kamu ke versi tertentu (v1.0.2):
+
+```shell
+cluster/gce/upgrade.sh -M v1.0.2
+```
+
+Sebagai alternatif, untuk meningkatkan seluruh klaster kamu ke rilis yang stabil terbaru gunakan:
+
+```shell
+cluster/gce/upgrade.sh release/stable
+```
+
+### Memperbarui klaster Google Kubernetes Engine
+
+Google Kubernetes Engine secara otomatis memperbarui komponen _control plane_ (misalnya, `kube-apiserver`, ` kube-scheduler`) ke versi yang terbaru. Ini juga menangani pembaruan sistem operasi dan komponen lain yang dijalankan oleh _control plane_.
+
+Proses pembaruan Node dimulai oleh pengguna dan dijelaskan dalam [Dokumentasi Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade).
+
+### Memperbarui klaster Amazon EKS
+
+Komponen _control plane_ klaster pada Amazon EKS dapat diperbarui dengan menggunakan eksctl, AWS Management Console, atau AWS CLI. Prosesnya dimulai oleh pengguna dan dijelaskan di [Dokumentasi Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html).
+
+### Memperbarui klaster Oracle Cloud Infrastructure Container Engine untuk Kubernetes (OKE)
+
+Oracle membuat dan mengelola sekumpulan Node _control plane_ pada _control plane_ Oracle atas nama kamu (dan infrastruktur Kubernetes terkait seperti Node etcd) untuk memastikan kamu memiliki Kubernetes _control plane_ yang terkelola dengan ketersedian tinggi. Kamu juga dapat memperbarui Node _control plane_ ini dengan mulus ke versi Kubernetes baru tanpa berhenti. Tindakan ini dijelaskan dalam [Dokumentasi OKE](https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengupgradingk8smasternode.htm).
+
+### Memperbarui klaster pada platform yang lain
+
+Penyedia dan alat yang berbeda akan mengelola pembaruan secara berbeda. Kamu disarankan untuk membaca dokumentasi utama mereka terkait pembaruan.
+
+* [kops](https://github.com/kubernetes/kops)
+* [kubespray](https://github.com/kubernetes-incubator/kubespray)
+* [CoreOS Tectonic](https://coreos.com/tectonic/docs/latest/admin/upgrade.html)
+* [Digital Rebar](https://provision.readthedocs.io/en/tip/doc/content-packages/krib.html)
+* ...
+
+Untuk memperbarui sebuah klaster pada platform yang tidak disebutkan dalam daftar di atas, periksa urutan pembaruan komponen pada
+halaman [Versi Skewed](/docs/setup/release/version-skew-policy/#supported-component-upgrade-order).
+
+## Mengubah ukuran klaster
+
+Jika klaster kamu kekurangan sumber daya, kamu dapat dengan mudah menambahkan lebih banyak mesin ke klaster tersebut jika klaster kamu
+menjalankan [Mode Node Registrasi Sendiri](/docs/concepts/architecture/nodes/#self-registration-of-nodes).
+Jika kamu menggunakan GCE atau Google Kubernetes Engine, itu dilakukan dengan mengubah ukuran grup _instance_ yang mengelola Node kamu.
+Ini dapat dilakukan dengan mengubah jumlah _instance_ pada
+`Compute > Compute Engine > Instance groups > your group > Edit group`
+[Laman Google Cloud Console](https://console.developers.google.com) atau dengan baris perintah gcloud:
+
+```shell
+gcloud compute instance-groups managed resize kubernetes-node-pool --size=42 --zone=$ZONE
+```
+
+Grup _instance_ akan menangani penempatan _image_ yang sesuai pada mesin baru dan memulainya,
+sedangkan Kubelet akan mendaftarkan Node-nya ke server API agar tersedia untuk penjadwalan.
+Jika kamu menurunkan skala grup _instance_, sistem akan secara acak memilih Node untuk dimatikan.
+
+Di lingkungan lain kamu mungkin perlu mengonfigurasi mesin sendiri dan memberi tahu Kubelet di mana server API mesin itu berjalan.
+
+### Mengubah ukuran klaster Azure Kubernetes Service (AKS)
+
+Azure Kubernetes Service memungkinkan perubahan ukuran klaster yang dimulai oleh pengguna dari CLI atau
+portal Azure dan dijelaskan dalam [Dokumentasi Azure AKS](https://docs.microsoft.com/en-us/azure/aks/scale-cluster).
+
+
+### Penyekalaan otomatis klaster
+
+Jika kamu menggunakan GCE atau Google Kubernetes Engine, kamu dapat mengonfigurasi klaster kamu sehingga secara otomatis diskalakan berdasarkan
+kebutuhan Pod.
+
+Seperti yang dideskripsikan dalam [Sumber daya komputasi](/id/docs/concepts/configuration/manage-resources-containers/),
+pengguna dapat memesan berapa banyak CPU dan memori yang dialokasikan ke Pod.
+Informasi ini digunakan oleh penjadwal Kubernetes untuk menemukan tempat menjalankan Pod. Jika
+tidak ada Node yang memiliki kapasitas kosong yang cukup (atau tidak sesuai dengan persyaratan Pod yang lainnya) maka Pod
+menunggu sampai beberapa Pod dihentikan atau Node baru ditambahkan.
+
+Penyekala otomatis klaster mencari Pod yang tidak dapat dijadwalkan dan memeriksa apakah perlu menambahkan Node baru, yang serupa
+dengan Node yang lain dalam klaster untuk membantu. Jika ya, maka itu mengubah ukuran klaster agar dapat mengakomodasi Pod yang menunggu.
+
+Penyekala otomatis klaster juga menurunkan skala klaster jika mengetahui bahwa satu atau beberapa Node tidak diperlukan lagi untuk
+periode waktu tambahan (selama 10 menit tetapi dapat berubah di masa mendatang).
+
+Penyekala otomatis klaster dikonfigurasikan untuk per grup _instance_ (GCE) atau kumpulan Node (Google Kubernetes Engine).
+
+Jika kamu menggunakan GCE, kamu dapat mengaktifkannya sambil membuat klaster dengan skrip kube-up.sh.
+Untuk mengonfigurasi penyekala otomatis klaster, kamu harus menyetel tiga variabel lingkungan:
+
+* `KUBE_ENABLE_CLUSTER_AUTOSCALER` - mengaktifkan penyekala otomatis klaster kalau disetel menjadi _true_.
+* `KUBE_AUTOSCALER_MIN_NODES` - minimal jumlah Node dalam klaster.
+* `KUBE_AUTOSCALER_MAX_NODES` - maksimal jumlah Node dalam klaster.
+
+Contoh:
+
+```shell
+KUBE_ENABLE_CLUSTER_AUTOSCALER=true KUBE_AUTOSCALER_MIN_NODES=3 KUBE_AUTOSCALER_MAX_NODES=10 NUM_NODES=5 ./cluster/kube-up.sh
+```
+
+Pada Google Kubernetes Engine, kamu mengonfigurasi penyekala otomatis klaster baik saat pembuatan atau pembaruan klaster atau saat membuat kumpulan Node tertentu
+(yang ingin kamu skalakan secara otomatis) dengan meneruskan _flag_ `--enable-autoscaling`, `--min-nodes` dan `--max-nodes`
+yang sesuai dengan perintah `gcloud`.
+
+Contoh:
+
+```shell
+gcloud container clusters create mytestcluster --zone=us-central1-b --enable-autoscaling --min-nodes=3 --max-nodes=10 --num-nodes=5
+```
+
+```shell
+gcloud container clusters update mytestcluster --enable-autoscaling --min-nodes=1 --max-nodes=15
+```
+
+**Penyekala otomatis klaster mengharapkan bahwa Node belum dimodifikasi secara manual (misalnya dengan menambahkan label melalui kubectl) karena properti tersebut tidak akan disebarkan ke Node baru dalam grup _instance_ yang sama.**
+
+Untuk detail selengkapnya tentang cara penyekala otomatis klaster memutuskan apakah, kapan dan bagaimana
+melakukan penyekalaan sebuah klaster, silahkan lihat dokumentasi [FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)
+dari proyek penyekala otomatis klaster.
+
+## Pemeliharaan pada Node
+
+Jika kamu perlu memulai ulang Node (seperti untuk pembaruan kernel, pembaruan libc, pembaruan perangkat keras, dll.) dan waktu kegagalan (_downtime_) tersebut
+singkat, maka ketika Kubelet dimulai ulang, ia akan mencoba untuk memulai ulang Pod yang dijadwalkan padanya. Jika mulai ulang membutuhkan waktu yang lebih lama
+(waktu bawaan adalah 5 menit, yang dikontrol oleh `--pod-eviction-timeout` pada _controller-manager_),
+maka pengontrol Node akan menghentikan Pod yang terikat ke Node yang tidak tersedia. Jika ada yang sesuai dengan
+kumpulan replika (atau pengontrol replikasi), maka salinan baru dari Pod akan dimulai pada Node yang berbeda. Jadi, dalam kasus di mana semua
+Pod direplikasi, pembaruan dapat dilakukan tanpa koordinasi khusus, dengan asumsi bahwa tidak semua Node akan mati pada saat yang bersamaan.
+
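+As a sketch of where that timeout lives, assuming a kubeadm-style control plane where the controller
+manager runs as a static Pod (adjust the path for other setups), you can check whether the default has
+been overridden:
+
+```shell
+# Look for an explicit eviction timeout in the controller manager manifest;
+# if nothing is printed, the 5-minute default applies.
+grep -- --pod-eviction-timeout /etc/kubernetes/manifests/kube-controller-manager.yaml
+```
+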
+If you want more control over the upgrade process, you can use the following workflow:
+
+Use `kubectl drain` to gracefully terminate all Pods on the Node while marking the Node as unschedulable:
+
+```shell
+kubectl drain $NODENAME
+```
+
+This prevents new Pods from landing on the Node while you are trying to get the existing ones off.
+
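+On clusters that run DaemonSets, the plain command above may refuse to proceed; a common variant (a
+sketch, worth checking against the `kubectl drain` reference for your version) is:
+
+```shell
+# DaemonSet-managed Pods cannot be drained away, so tell kubectl drain to
+# skip them instead of aborting.
+kubectl drain $NODENAME --ignore-daemonsets
+```
+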
+For Pods with a ReplicaSet, the Pod will be replaced by a new Pod which will be scheduled to a new Node. Additionally, if the Pod is part of a Service, clients will automatically be redirected to the new Pod.
+
+For Pods with no replicas, you need to bring up a new copy of the Pod and, assuming it is not part of a Service, redirect clients to it.
+
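+Either way, you can double-check that nothing unexpected is still bound to the drained Node (a sketch,
+reusing the `$NODENAME` variable from above; DaemonSet and mirror Pods may legitimately remain):
+
+```shell
+# List any Pods still assigned to the drained Node.
+kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=$NODENAME
+```
+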
+Perform the maintenance work on the Node.
+
+Make the Node schedulable again:
+
+
+```shell
+kubectl uncordon $NODENAME
+```
+
+If you delete the Node from its VM instance and create a new one, then a new schedulable Node resource will be
+created automatically (if you are using a cloud provider that supports
+node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register).
+See [Node](/docs/concepts/architecture/nodes/) for more details.
+
+## Advanced topics
+
+### Turning on or off an API version for your cluster
+
+Specific API versions can be turned on or off by passing the `--runtime-config=api/` flag when bringing up the API server. For example, to turn off the v1 API, pass `--runtime-config=api/v1=false`.
+runtime-config also supports two special keys, api/all and api/legacy, to control all and legacy APIs, respectively.
+For example, to turn off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true`.
+For the purposes of this flag, legacy APIs are those APIs that have been explicitly deprecated (for example, `v1beta3`).
+
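+However the flag is set, you can verify which API versions the server actually serves; group/versions
+that were disabled through `--runtime-config` should not appear in the output:
+
+```shell
+# List every API group/version currently served by the API server.
+kubectl api-versions
+```
+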
+### Switching your cluster's storage API version
+
+The objects that are serialized to disk for a cluster's internal representation of the Kubernetes resources active in the cluster are written using a particular version of the API.
+When the supported API changes, these objects may need to be rewritten in the newer API. Failure to do this will eventually result in resources that can no longer be decoded or used
+by the Kubernetes API server.
+
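+A minimal sketch of the usual remedy, assuming the affected objects are still readable: read each object
+back and write it again so that it is re-persisted at the API server's current storage version (shown
+here for Deployments in the `default` namespace; adapt the resource type and namespace to your case):
+
+```shell
+# Re-write every Deployment in the "default" namespace; reading and replacing
+# the objects stores them again at the current storage version.
+kubectl get deployments -n default -o json | kubectl replace -f -
+```
+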
+### Switching your config files to a new API version
+
+You can use the `kubectl convert` command to convert config files between different API versions.
+
+```shell
+kubectl convert -f pod.yaml --output-version v1
+```
+
+For more options, please refer to the usage of the [kubectl convert](/docs/reference/generated/kubectl/kubectl-commands#convert) command.
+
+
diff --git a/content/ja/case-studies/appdirect/index.html b/content/ja/case-studies/appdirect/index.html
index e2b8948bf6937..0be120c3b501b 100644
--- a/content/ja/case-studies/appdirect/index.html
+++ b/content/ja/case-studies/appdirect/index.html
@@ -12,7 +12,7 @@
We are trying various strategies to gain the interest of many people. Kubernetes and cloud native technologies are now seen as the de facto ecosystem.
---
-
China Unicom: How It Achieved IT Cost Reductions and Improved Efficiency with Kubernetes
@@ -57,7 +57,7 @@
With more than 300 million users, China Unicom is one of China's
So Zhang's team, responsible for new technology, research and development (R&D), and the platform, began exploring solutions for IT management. Previously a wholly state-owned company, China Unicom has in recent years received private investment from BAT (Baidu, Alibaba, Tencent) and JD.com, and now focuses on in-house development built on open source technology rather than commercial products. For these reasons, Zhang's team began looking for an open source orchestration tool for its cloud infrastructure.
For 60 years, SOS International has provided customers in the Nordic countries with reliable
The platform went live in the spring of 2018. Six greenfield projects based on a microservices architecture were started first, and in addition all of the company's Java applications are undergoing a "lift and shift" migration. One of the first Kubernetes-based projects in production is Remote Medical Treatment, a solution that lets customers contact the SOS alarm center via voice, chat, or video. "We were able to develop it in a very short time, focusing on running a complete CI/CD pipeline and a modern microservices architecture, all on our two OpenShift cluster setups," says Ahrentsen. Onsite, used to dispatch rescue trucks across the Nordic countries, and Follow Your Truck, which lets customers track their tow trucks, have also been rolled out.
-
+
"During onboarding of new employees, we found that IT professionals had chosen our company because we offer new technologies."
- Martin Ahrentsen, Head of Enterprise Architecture, SOS International
diff --git a/content/ja/case-studies/spotify/index.html b/content/ja/case-studies/spotify/index.html
index 0725723b68351..49d929995daae 100644
--- a/content/ja/case-studies/spotify/index.html
+++ b/content/ja/case-studies/spotify/index.html
@@ -11,7 +11,7 @@
We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of it. We also wanted to benefit from the improved speed and reduced costs, and to align with the rest of the industry on best practices and tools.
---
-
The New York Times: From Print to the Web to Cloud Native
@@ -64,7 +64,7 @@
Impact
-
+
"We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?"
@@ -79,7 +79,7 @@
Impact
-
+
"Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem," says Kapadia. "Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward."
diff --git a/content/ko/docs/concepts/workloads/controllers/deployment.md b/content/ko/docs/concepts/workloads/controllers/deployment.md
index 0e5c5e94fbf36..745a33b52d031 100644
--- a/content/ko/docs/concepts/workloads/controllers/deployment.md
+++ b/content/ko/docs/concepts/workloads/controllers/deployment.md
@@ -100,7 +100,7 @@ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
The output is similar to this.
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
- deployment.apps/nginx-deployment successfully rolled out
+ deployment "nginx-deployment" successfully rolled out
```
4. A few seconds later, run `kubectl get deployments` again.
@@ -203,7 +203,7 @@ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
```
or
```
- deployment.apps/nginx-deployment successfully rolled out
+ deployment "nginx-deployment" successfully rolled out
```
Getting more details on your updated Deployment
@@ -855,7 +855,7 @@ kubectl rollout status deployment.v1.apps/nginx-deployment
The output is similar to this.
```
Waiting for rollout to finish: 2 of 3 updated replicas are available...
-deployment.apps/nginx-deployment successfully rolled out
+deployment "nginx-deployment" successfully rolled out
```
And the exit status from `kubectl rollout` is 0 (success).
```shell
diff --git a/content/zh/case-studies/adform/index.html b/content/zh/case-studies/adform/index.html
index e9a8acc7a22f2..be35a2d8375ee 100644
--- a/content/zh/case-studies/adform/index.html
+++ b/content/zh/case-studies/adform/index.html
@@ -12,7 +12,7 @@
Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier.
---
-
+
CASE STUDY:
Improving Performance and Morale with Cloud Native
"On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress." - RANGER YU, GLOBAL TECHNOLOGY PARTNERSHIP & DEVELOPMENT, ANT FINANCIAL
@@ -65,7 +65,7 @@
A spinoff of the multinational conglomerate Alibaba, Ant Financial boasts a
All core financial systems were containerized by November 2017, and the migration to Kubernetes is ongoing. Ant’s platform also leverages a number of other CNCF projects, including Prometheus, OpenTracing, etcd and CoreDNS. “On Double 11 this year, we had plenty of nodes on Kubernetes, but compared to the whole scale of our infrastructure, this is still in progress,” says Ranger Yu, Global Technology Partnership & Development.
-
+
"We’re very grateful for CNCF and this amazing technology, which we need as we continue to scale globally. We’re definitely embracing the community and open source more in the future." - HAOJIE HANG, PRODUCT MANAGEMENT, ANT FINANCIAL
diff --git a/content/zh/case-studies/appdirect/index.html b/content/zh/case-studies/appdirect/index.html
index 16d93cce5cb4e..ca6b0b8fe92a4 100644
--- a/content/zh/case-studies/appdirect/index.html
+++ b/content/zh/case-studies/appdirect/index.html
@@ -12,7 +12,7 @@
We made the right decisions at the right time. Kubernetes and the cloud native technologies are now seen as the de facto ecosystem.
---
-
+
CASE STUDY:
AppDirect: How AppDirect Supported the 10x Growth of Its Engineering Staff with Kubernetes
@@ -53,7 +53,7 @@
With its end-to-end commerce platform for cloud-based products and services,
-
+
"We made the right decisions at the right time. Kubernetes and the cloud native technologies are now seen as the de facto ecosystem. We know where to focus our efforts in order to tackle the new wave of challenges we face as we scale out. The community is so active and vibrant, which is a great complement to our awesome internal team." - Alexandre Gervais, Staff Software Developer, AppDirect
@@ -69,7 +69,7 @@
With its end-to-end commerce platform for cloud-based products and services,
Lacerte’s strategy ultimately worked because of the very real impact the Kubernetes platform has had to deployment time. Due to less dependency on custom-made, brittle shell scripts with SCP commands, time to deploy a new version has shrunk from 4 hours to a few minutes. Additionally, the company invested a lot of effort to make things self-service for developers. "Onboarding a new service doesn’t require Jira tickets or meeting with three different teams," says Lacerte. Today, the company sees 1,600 deployments per week, compared to 1-30 before.
-
+
"I think our velocity would have slowed down a lot if we didn’t have this new infrastructure." - Pierre-Alexandre Lacerte, Director of Software Development, AppDirect
diff --git a/content/zh/case-studies/bose/index.html b/content/zh/case-studies/bose/index.html
index d22de2187af9c..c77f416c13715 100644
--- a/content/zh/case-studies/bose/index.html
+++ b/content/zh/case-studies/bose/index.html
@@ -11,7 +11,7 @@
The CNCF Landscape quickly explains what’s going on in all the different areas from storage to cloud providers to automation and so forth. This is our shopping cart to build a cloud infrastructure. We can go choose from the different aisles.
---
-
+
CASE STUDY:
Bose: Supporting Rapid Development for Millions of IoT Products With Kubernetes
@@ -56,7 +56,7 @@
A household name in high-quality audio equipment,
+