diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index ed3ccdbb96fd5..904657365867c 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -226,9 +226,11 @@ aliases:
- kpucynski
sig-docs-uk-owners: # Admins for Ukrainian content
- anastyakulyk
+ - butuzov
- MaxymVlasov
sig-docs-uk-reviews: # PR reviews for Ukrainian content
- anastyakulyk
+ - butuzov
- idvoretskyi
- MaxymVlasov
- Potapy4
diff --git a/README-uk.md b/README-uk.md
index 68d3b0db0af78..43d782e09a2ff 100644
--- a/README-uk.md
+++ b/README-uk.md
@@ -39,7 +39,7 @@ make docker-image
make docker-serve
```
-Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
+Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
## Запуск сайту локально зa допомогою Hugo
@@ -51,7 +51,7 @@ make docker-serve
make serve
```
-Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте початковий код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
+Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
## Спільнота, обговорення, внесок і підтримка
diff --git a/content/uk/_common-resources/index.md b/content/uk/_common-resources/index.md
new file mode 100644
index 0000000000000..ca03031f1ee91
--- /dev/null
+++ b/content/uk/_common-resources/index.md
@@ -0,0 +1,3 @@
+---
+headless: true
+---
diff --git a/content/uk/_index.html b/content/uk/_index.html
new file mode 100644
index 0000000000000..02df4d395da18
--- /dev/null
+++ b/content/uk/_index.html
@@ -0,0 +1,85 @@
+---
+title: "Довершена система оркестрації контейнерів"
+abstract: "Автоматичне розгортання, масштабування і управління контейнерами"
+cid: home
+---
+
+{{< announcement >}}
+
+{{< deprecationwarning >}}
+
+{{< blocks/section id="oceanNodes" >}}
+{{% blocks/feature image="flower" %}}
+
+### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) - це система з відкритим вихідним кодом для автоматичного розгортання, масштабування і управління контейнеризованими застосунками.
+
+
+Вона об'єднує контейнери, що утворюють застосунок, у логічні елементи для легкого управління і виявлення. В основі Kubernetes - [15 років досвіду запуску і виконання застосунків у продуктивних середовищах Google](http://queue.acm.org/detail.cfm?id=2898444), поєднані з найкращими ідеями і практиками від спільноти.
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="scalable" %}}
+
+#### Глобальне масштабування
+
+
+Заснований на тих самих принципах, завдяки яким Google запускає мільярди контейнерів щотижня, Kubernetes масштабується без потреби збільшення вашого штату з експлуатації.
+
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="blocks" %}}
+
+#### Невичерпна функціональність
+
+
+Запущений для локального тестування чи у глобальній корпорації, Kubernetes динамічно зростатиме з вами, забезпечуючи регулярну і легку доставку ваших застосунків незалежно від рівня складності ваших потреб.
+
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="suitcase" %}}
+
+#### Працює всюди
+
+
+Kubernetes - проект з відкритим вихідним кодом. Він дозволяє скористатися перевагами локальної, гібридної чи хмарної інфраструктури, щоб легко переміщати застосунки туди, куди вам потрібно.
+
+{{% /blocks/feature %}}
+
+{{< /blocks/section >}}
+
+{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
+
+
+
+Проблеми міграції 150+ мікросервісів у Kubernetes
+
+
+Сара Уеллз, технічний директор з експлуатації і надійності, Financial Times
+{{< /blocks/section >}}
+
+{{< blocks/kubernetes-features >}}
+
+{{< blocks/case-studies >}}
diff --git a/content/uk/case-studies/_index.html b/content/uk/case-studies/_index.html
new file mode 100644
index 0000000000000..6c9c75fc44c7d
--- /dev/null
+++ b/content/uk/case-studies/_index.html
@@ -0,0 +1,13 @@
+---
+# title: Case Studies
+title: Приклади використання
+# linkTitle: Case Studies
+linkTitle: Приклади використання
+# bigheader: Kubernetes User Case Studies
+bigheader: Приклади використання Kubernetes від користувачів.
+# abstract: A collection of users running Kubernetes in production.
+abstract: Підбірка користувачів, що використовують Kubernetes для робочих навантажень.
+layout: basic
+class: gridPage
+cid: caseStudies
+---
diff --git a/content/uk/docs/_index.md b/content/uk/docs/_index.md
new file mode 100644
index 0000000000000..a601666b678f9
--- /dev/null
+++ b/content/uk/docs/_index.md
@@ -0,0 +1,3 @@
+---
+title: Документація
+---
diff --git a/content/uk/docs/concepts/_index.md b/content/uk/docs/concepts/_index.md
new file mode 100644
index 0000000000000..695068aa4aa56
--- /dev/null
+++ b/content/uk/docs/concepts/_index.md
@@ -0,0 +1,123 @@
+---
+title: Концепції
+main_menu: true
+content_template: templates/concept
+weight: 40
+---
+
+{{% capture overview %}}
+
+
+В розділі "Концепції" описані складові системи Kubernetes і абстракції, за допомогою яких Kubernetes реалізовує ваш {{< glossary_tooltip text="кластер" term_id="cluster" length="all" >}}. Цей розділ допоможе вам краще зрозуміти, як працює Kubernetes.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+
+
+## Загальна інформація
+
+
+Для роботи з Kubernetes ви використовуєте *об'єкти API Kubernetes* для того, щоб описати *бажаний стан* вашого кластера: які застосунки або інші робочі навантаження ви плануєте запускати, які образи контейнерів вони використовують, кількість реплік, скільки ресурсів мережі та диску ви хочете виділити тощо. Ви задаєте бажаний стан, створюючи об'єкти в Kubernetes API, зазвичай через інтерфейс командного рядка `kubectl`. Ви також можете взаємодіяти із кластером, задавати або змінювати його бажаний стан безпосередньо через Kubernetes API.
+
+
+Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Генератора подій життєвого циклу Пода ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері:
+
+
+
+* **Kubernetes master** являє собою набір із трьох процесів, запущених на одному вузлі вашого кластера, що визначений як керівний (master). До цих процесів належать: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) і [kube-scheduler](/docs/admin/kube-scheduler/).
+* На кожному не-мастер вузлі вашого кластера виконуються два процеси:
+ * **[kubelet](/docs/admin/kubelet/)**, що обмінюється даними з Kubernetes master.
+ * **[kube-proxy](/docs/admin/kube-proxy/)**, мережевий проксі, що відображає мережеві сервіси Kubernetes на кожному вузлі.
+
+
+
+## Об'єкти Kubernetes
+
+
+Kubernetes оперує певною кількістю абстракцій, що відображають стан вашої системи: розгорнуті у контейнерах застосунки та робочі навантаження, пов'язані з ними ресурси мережі та диску, інша інформація щодо функціонування вашого кластера. Ці абстракції представлені як об'єкти Kubernetes API. Для більш детальної інформації ознайомтесь з [Об'єктами Kubernetes](/docs/concepts/overview/working-with-objects/kubernetes-objects/).
+
+
+До базових об'єктів Kubernetes належать:
+
+* [Под *(Pod)*](/docs/concepts/workloads/pods/pod-overview/)
+* [Сервіс *(Service)*](/docs/concepts/services-networking/service/)
+* [Volume](/docs/concepts/storage/volumes/)
+* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)
+
+
+В Kubernetes є також абстракції вищого рівня, які надбудовуються над базовими об'єктами за допомогою [контролерів](/docs/concepts/architecture/controller/) і забезпечують додаткову функціональність і зручність. До них належать:
+
+* [Deployment](/docs/concepts/workloads/controllers/deployment/)
+* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
+* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
+* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
+* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
+
+
+
+## Площина управління Kubernetes (*Kubernetes Control Plane*)
+
+
+Різні частини площини управління Kubernetes, такі як Kubernetes Master і kubelet, регулюють, як Kubernetes спілкується з вашим кластером. Площина управління веде облік усіх об'єктів Kubernetes в системі та безперервно, в циклі перевіряє стан цих об'єктів. У будь-який момент часу контрольні цикли, запущені площиною управління, реагуватимуть на зміни у кластері і намагатимуться привести поточний стан об'єктів до бажаного, що заданий у конфігурації.
+
+
+Наприклад, коли за допомогою API Kubernetes ви створюєте Deployment, ви задаєте новий бажаний стан для системи. Площина управління Kubernetes фіксує створення цього об'єкта і виконує ваші інструкції шляхом запуску потрібних застосунків та їх розподілу між вузлами кластера. В такий спосіб досягається відповідність поточного стану бажаному.
+
+
+
+### Kubernetes Master
+
+
+Kubernetes Master відповідає за підтримку бажаного стану вашого кластера. Щоразу, як ви взаємодієте з Kubernetes, наприклад при використанні інтерфейсу командного рядка `kubectl`, ви обмінюєтесь даними із Kubernetes master вашого кластера.
+
+
+Слово "master" стосується набору процесів, які управляють станом кластера. Переважно всі ці процеси виконуються на одному вузлі кластера, який також називається master. Master-вузол можна реплікувати для забезпечення високої доступності кластера.
+
+
+
+### Вузли Kubernetes
+
+
+Вузлами кластера називають машини (ВМ, фізичні сервери тощо), на яких запущені ваші застосунки та хмарні робочі навантаження. Кожен вузол керується Kubernetes master; ви лише зрідка взаємодіятимете безпосередньо із вузлами.
+
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+Якщо ви хочете створити нову сторінку у розділі Концепції, у статті
+[Використання шаблонів сторінок](/docs/home/contribute/page-templates/)
+ви знайдете інформацію щодо типу і шаблона сторінки.
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/configuration/_index.md b/content/uk/docs/concepts/configuration/_index.md
new file mode 100644
index 0000000000000..588d144f6e596
--- /dev/null
+++ b/content/uk/docs/concepts/configuration/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Конфігурація"
+weight: 80
+---
+
diff --git a/content/uk/docs/concepts/configuration/manage-compute-resources-container.md b/content/uk/docs/concepts/configuration/manage-compute-resources-container.md
new file mode 100644
index 0000000000000..a90b224f8ca3a
--- /dev/null
+++ b/content/uk/docs/concepts/configuration/manage-compute-resources-container.md
@@ -0,0 +1,623 @@
+---
+title: Managing Compute Resources for Containers
+content_template: templates/concept
+weight: 20
+feature:
+ # title: Automatic bin packing
+ title: Автоматичне пакування у контейнери
+ # description: >
+ # Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
+ description: >
+ Автоматичне розміщення контейнерів з огляду на їхні потреби у ресурсах та інші обмеження, при цьому не поступаючись доступністю. Поєднання критичних і "найкращих з можливих" робочих навантажень для ефективнішого використання і більшого заощадження ресурсів.
+---
+
+{{% capture overview %}}
+
+When you specify a [Pod](/docs/concepts/workloads/pods/pod/), you can optionally specify how
+much CPU and memory (RAM) each Container needs. When Containers have resource
+requests specified, the scheduler can make better decisions about which nodes to
+place Pods on. And when Containers have their limits specified, contention for
+resources on a node can be handled in a specified manner. For more details about
+the difference between requests and limits, see
+[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md).
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Resource types
+
+*CPU* and *memory* are each a *resource type*. A resource type has a base unit.
+CPU is specified in units of cores, and memory is specified in units of bytes.
+If you're using Kubernetes v1.14 or newer, you can specify _huge page_ resources.
+Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory
+that are much larger than the default page size.
+
+For example, on a system where the default page size is 4KiB, you could specify a limit,
+`hugepages-2Mi: 80Mi`. If the container tries allocating over 40 2MiB huge pages (a
+total of 80 MiB), that allocation fails.
+
+{{< note >}}
+You cannot overcommit `hugepages-*` resources.
+This is different from the `memory` and `cpu` resources.
+{{< /note >}}
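+
+As a sketch of such a limit (the values are illustrative, assuming a 2MiB
+huge page size on the node):
+
+```yaml
+# Hypothetical container spec fragment: caps the container at 40 x 2MiB
+# huge pages (80MiB in total). Because hugepages-* cannot be overcommitted,
+# requests and limits for huge pages must be equal.
+resources:
+  requests:
+    hugepages-2Mi: 80Mi
+  limits:
+    hugepages-2Mi: 80Mi
+```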
+
+CPU and memory are collectively referred to as *compute resources*, or just
+*resources*. Compute
+resources are measurable quantities that can be requested, allocated, and
+consumed. They are distinct from
+[API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and
+[Services](/docs/concepts/services-networking/service/) are objects that can be read and modified
+through the Kubernetes API server.
+
+## Resource requests and limits of Pod and Container
+
+Each Container of a Pod can specify one or more of the following:
+
+* `spec.containers[].resources.limits.cpu`
+* `spec.containers[].resources.limits.memory`
+* `spec.containers[].resources.limits.hugepages-`
+* `spec.containers[].resources.requests.cpu`
+* `spec.containers[].resources.requests.memory`
+* `spec.containers[].resources.requests.hugepages-`
+
+Although requests and limits can only be specified on individual Containers, it
+is convenient to talk about Pod resource requests and limits. A
+*Pod resource request/limit* for a particular resource type is the sum of the
+resource requests/limits of that type for each Container in the Pod.
+
+
+## Meaning of CPU
+
+Limits and requests for CPU resources are measured in *cpu* units.
+One cpu, in Kubernetes, is equivalent to:
+
+- 1 AWS vCPU
+- 1 GCP Core
+- 1 Azure vCore
+- 1 IBM vCPU
+- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading
+
+Fractional requests are allowed. A Container with
+`spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much
+CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the
+expression `100m`, which can be read as "one hundred millicpu". Some people say
+"one hundred millicores", and this is understood to mean the same thing. A
+request with a decimal point, like `0.1`, is converted to `100m` by the API, and
+precision finer than `1m` is not allowed. For this reason, the form `100m` might
+be preferred.
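+
+For instance, a fragment like the following requests one tenth of a CPU;
+writing `cpu: 100m` instead is equivalent:
+
+```yaml
+# Requests one tenth of a CPU. The API converts 0.1 to 100m internally,
+# so this is the same request as cpu: 100m.
+resources:
+  requests:
+    cpu: "0.1"
+```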
+
+CPU is always requested as an absolute quantity, never as a relative quantity;
+0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
+
+## Meaning of memory
+
+Limits and requests for `memory` are measured in bytes. You can express memory as
+a plain integer or as a fixed-point integer using one of these suffixes:
+E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
+Mi, Ki. For example, the following represent roughly the same value:
+
+```shell
+128974848, 129e6, 129M, 123Mi
+```
+
+Here's an example.
+The following Pod has two Containers. Each Container has a request of 0.25 cpu
+and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5
+cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128
+MiB of memory, and a limit of 1 cpu and 256MiB of memory.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: frontend
+spec:
+ containers:
+ - name: db
+ image: mysql
+ env:
+ - name: MYSQL_ROOT_PASSWORD
+ value: "password"
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
+ - name: wp
+ image: wordpress
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
+```
+
+## How Pods with resource requests are scheduled
+
+When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
+run on. Each node has a maximum capacity for each of the resource types: the
+amount of CPU and memory it can provide for Pods. The scheduler ensures that,
+for each resource type, the sum of the resource requests of the scheduled
+Containers is less than the capacity of the node. Note that although actual memory
+or CPU resource usage on nodes is very low, the scheduler still refuses to place
+a Pod on a node if the capacity check fails. This protects against a resource
+shortage on a node when resource usage later increases, for example, during a
+daily peak in request rate.
+
+## How Pods with resource limits are run
+
+When the kubelet starts a Container of a Pod, it passes the CPU and memory limits
+to the container runtime.
+
+When using Docker:
+
+- The `spec.containers[].resources.requests.cpu` is converted to its core value,
+ which is potentially fractional, and multiplied by 1024. The greater of this number
+ or 2 is used as the value of the
+ [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#cpu-share-constraint)
+ flag in the `docker run` command.
+
+- The `spec.containers[].resources.limits.cpu` is converted to its millicore value and
+ multiplied by 100. The resulting value is the total amount of CPU time that a container can use
+ every 100ms. A container cannot use more than its share of CPU time during this interval.
+
+ {{< note >}}
+ The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
+  {{< /note >}}
+
+- The `spec.containers[].resources.limits.memory` is converted to an integer, and
+ used as the value of the
+ [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints)
+ flag in the `docker run` command.
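+
+As a sketch, here is the arithmetic above applied to the request and limit
+values from the earlier example Pod, 250m and 500m (the exact flags passed
+to the runtime can vary with the runtime and its version):
+
+```yaml
+# Hypothetical container fragment; comments show the Docker flag
+# conversions described above.
+resources:
+  requests:
+    cpu: 250m   # 0.25 cores * 1024 = 256           -> --cpu-shares=256
+  limits:
+    cpu: 500m   # 500 millicores * 100 = 50000 (µs  -> --cpu-quota=50000
+                # of CPU time per 100ms period)
+```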
+
+If a Container exceeds its memory limit, it might be terminated. If it is
+restartable, the kubelet will restart it, as with any other type of runtime
+failure.
+
+If a Container exceeds its memory request, it is likely that its Pod will
+be evicted whenever the node runs out of memory.
+
+A Container might or might not be allowed to exceed its CPU limit for extended
+periods of time. However, it will not be killed for excessive CPU usage.
+
+To determine whether a Container cannot be scheduled or is being killed due to
+resource limits, see the
+[Troubleshooting](#troubleshooting) section.
+
+## Monitoring compute resource usage
+
+The resource usage of a Pod is reported as part of the Pod status.
+
+If [optional monitoring](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md)
+is configured for your cluster, then Pod resource usage can be retrieved from
+the monitoring system.
+
+## Troubleshooting
+
+### My Pods are pending with event message failedScheduling
+
+If the scheduler cannot find any node where a Pod can fit, the Pod remains
+unscheduled until a place can be found. An event is produced each time the
+scheduler fails to find a place for the Pod, like this:
+
+```shell
+kubectl describe pod frontend | grep -A 3 Events
+```
+```
+Events:
+  FirstSeen LastSeen Count From SubobjectPath Reason Message
+ 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others
+```
+
+In the preceding example, the Pod named "frontend" fails to be scheduled due to
+insufficient CPU resource on the node. Similar error messages can also suggest
+failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod
+is pending with a message of this type, there are several things to try:
+
+- Add more nodes to the cluster.
+- Terminate unneeded Pods to make room for pending Pods.
+- Check that the Pod is not larger than all the nodes. For example, if all the
+ nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will
+ never be scheduled.
+
+You can check node capacities and amounts allocated with the
+`kubectl describe nodes` command. For example:
+
+```shell
+kubectl describe nodes e2e-test-node-pool-4lw4
+```
+```
+Name: e2e-test-node-pool-4lw4
+[ ... lines removed for clarity ...]
+Capacity:
+ cpu: 2
+ memory: 7679792Ki
+ pods: 110
+Allocatable:
+ cpu: 1800m
+ memory: 7474992Ki
+ pods: 110
+[ ... lines removed for clarity ...]
+Non-terminated Pods: (5 in total)
+ Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
+ --------- ---- ------------ ---------- --------------- -------------
+ kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%)
+ kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%)
+ kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%)
+ kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%)
+ kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%)
+Allocated resources:
+ (Total limits may be over 100 percent, i.e., overcommitted.)
+ CPU Requests CPU Limits Memory Requests Memory Limits
+ ------------ ---------- --------------- -------------
+ 680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%)
+```
+
+In the preceding output, you can see that if a Pod requests more than 1120m
+CPUs or 6.23Gi of memory, it will not fit on the node.
+
+By looking at the `Pods` section, you can see which Pods are taking up space on
+the node.
+
+The amount of resources available to Pods is less than the node capacity, because
+system daemons use a portion of the available resources. The `allocatable` field of
+[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core)
+gives the amount of resources that are available to Pods. For more information, see
+[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md).
+
+The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured
+to limit the total amount of resources that can be consumed. If used in conjunction
+with namespaces, it can prevent one team from hogging all the resources.
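+
+A minimal sketch of such a quota (the name, namespace, and amounts are
+illustrative):
+
+```yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: compute-quota   # illustrative name
+  namespace: team-a     # illustrative namespace
+spec:
+  hard:
+    requests.cpu: "4"       # total CPU requests allowed in the namespace
+    requests.memory: 8Gi
+    limits.cpu: "8"
+    limits.memory: 16Gi
+```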
+
+### My Container is terminated
+
+Your Container might get terminated because it is resource-starved. To check
+whether a Container is being killed because it is hitting a resource limit, call
+`kubectl describe pod` on the Pod of interest:
+
+```shell
+kubectl describe pod simmemleak-hra99
+```
+```
+Name: simmemleak-hra99
+Namespace: default
+Image(s): saadali/simmemleak
+Node: kubernetes-node-tf0f/10.240.216.66
+Labels: name=simmemleak
+Status: Running
+Reason:
+Message:
+IP: 10.244.2.75
+Replication Controllers: simmemleak (1/1 replicas created)
+Containers:
+ simmemleak:
+ Image: saadali/simmemleak
+ Limits:
+ cpu: 100m
+ memory: 50Mi
+ State: Running
+ Started: Tue, 07 Jul 2015 12:54:41 -0700
+ Last Termination State: Terminated
+ Exit Code: 1
+ Started: Fri, 07 Jul 2015 12:54:30 -0700
+ Finished: Fri, 07 Jul 2015 12:54:33 -0700
+ Ready: False
+ Restart Count: 5
+Conditions:
+ Type Status
+ Ready False
+Events:
+ FirstSeen LastSeen Count From SubobjectPath Reason Message
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
+ Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
+```
+
+In the preceding example, the `Restart Count: 5` indicates that the `simmemleak`
+Container in the Pod was terminated and restarted five times.
+
+You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status
+of previously terminated Containers:
+
+```shell
+kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
+```
+```
+Container Name: simmemleak
+LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
+```
+
+You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory.
+
+## Local ephemeral storage
+{{< feature-state state="beta" >}}
+
+Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_, for managing local ephemeral storage. In each Kubernetes node, the kubelet's root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers.
+
+This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition. Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope.
+
+{{< note >}}
+If an optional runtime partition is used, the root partition will not hold any image layers or writable layers.
+{{< /note >}}
+
+### Requests and limits setting for local ephemeral storage
+Each Container of a Pod can specify one or more of the following:
+
+* `spec.containers[].resources.limits.ephemeral-storage`
+* `spec.containers[].resources.requests.ephemeral-storage`
+
+Limits and requests for `ephemeral-storage` are measured in bytes. You can express storage as
+a plain integer or as a fixed-point integer using one of these suffixes:
+E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
+Mi, Ki. For example, the following represent roughly the same value:
+
+```shell
+128974848, 129e6, 129M, 123Mi
+```
+
+For example, the following Pod has two Containers. Each Container has a request of 2GiB of local ephemeral storage. Each Container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of storage.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: frontend
+spec:
+ containers:
+ - name: db
+ image: mysql
+ env:
+ - name: MYSQL_ROOT_PASSWORD
+ value: "password"
+ resources:
+ requests:
+ ephemeral-storage: "2Gi"
+ limits:
+ ephemeral-storage: "4Gi"
+ - name: wp
+ image: wordpress
+ resources:
+ requests:
+ ephemeral-storage: "2Gi"
+ limits:
+ ephemeral-storage: "4Gi"
+```
+
+### How Pods with ephemeral-storage requests are scheduled
+
+When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
+run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see ["Node Allocatable"](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).
+
+The scheduler ensures that the sum of the resource requests of the scheduled Containers is less than the capacity of the node.
+
+### How Pods with ephemeral-storage limits run
+
+For container-level isolation, if a Container's writable layer and logs usage exceeds its storage limit, the Pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the limit, the Pod will be evicted.
+
+### Monitoring ephemeral-storage consumption
+
+When local ephemeral storage is used, it is monitored on an ongoing
+basis by the kubelet. The monitoring is performed by scanning each
+emptyDir volume, log directories, and writable layers on a periodic
+basis. Starting with Kubernetes 1.15, emptyDir volumes (but not log
+directories or writable layers) may, at the cluster operator's option,
+be managed by use of [project
+quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html).
+Project quotas were originally implemented in XFS, and have more
+recently been ported to ext4fs. Project quotas can be used for both
+monitoring and enforcement; as of Kubernetes 1.16, they are available
+as alpha functionality for monitoring only.
+
+Quotas are faster and more accurate than directory scanning. When a
+directory is assigned to a project, all files created under a
+directory are created in that project, and the kernel merely has to
+keep track of how many blocks are in use by files in that project. If
+a file is created and deleted, but with an open file descriptor, it
+continues to consume space. This space will be tracked by the quota,
+but will not be seen by a directory scan.
+
+Kubernetes uses project IDs starting from 1048576. The IDs in use are
+registered in `/etc/projects` and `/etc/projid`. If project IDs in
+this range are used for other purposes on the system, those project
+IDs must be registered in `/etc/projects` and `/etc/projid` to prevent
+Kubernetes from using them.
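+
+For illustration, assuming Kubernetes assigned project ID `1048578` to an
+emptyDir volume (the path and project name below are hypothetical), entries in
+the standard XFS quota file formats would look like:
+
+```
+# /etc/projects maps a project ID to a directory:
+1048578:/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/cache
+
+# /etc/projid maps a name to a project ID:
+volume1048578:1048578
+```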
+
+To enable use of project quotas, the cluster operator must do the
+following:
+
+* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true`
+ feature gate in the kubelet configuration. This defaults to `false`
+ in Kubernetes 1.16, so must be explicitly set to `true`.
+
+* Ensure that the root partition (or optional runtime partition) is
+ built with project quotas enabled. All XFS filesystems support
+ project quotas, but ext4 filesystems must be built specially.
+
+* Ensure that the root partition (or optional runtime partition) is
+ mounted with project quotas enabled.
+
+#### Building and mounting filesystems with project quotas enabled
+
+XFS filesystems require no special action when building; they are
+automatically built with project quotas enabled.
+
+Ext4fs filesystems must be built with quotas enabled, then they must
+be enabled in the filesystem:
+
+```
+% sudo mkfs.ext4 other_ext4fs_args... -E quotatype=prjquota /dev/block_device
+% sudo tune2fs -O project -Q prjquota /dev/block_device
+
+```
+
+To mount the filesystem, both ext4fs and XFS require the `prjquota`
+option set in `/etc/fstab`:
+
+```
+/dev/block_device /var/kubernetes_data defaults,prjquota 0 0
+```
+
+
+## Extended resources
+
+Extended resources are fully-qualified resource names outside the
+`kubernetes.io` domain. They allow cluster operators to advertise and users to
+consume the non-Kubernetes-built-in resources.
+
+There are two steps required to use Extended Resources. First, the cluster
+operator must advertise an Extended Resource. Second, users must request the
+Extended Resource in Pods.
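+
+For the second step, a Pod asks for an Extended Resource the same way it asks
+for CPU or memory. A sketch, reusing the `example.com/foo` resource name from
+this page (the Pod, container, and image names are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: extended-resource-demo
+spec:
+  containers:
+  - name: app
+    image: nginx
+    resources:
+      requests:
+        example.com/foo: "1"   # extended resources take only whole numbers
+      limits:
+        example.com/foo: "1"   # cannot be overcommitted: request == limit
+```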
+
+### Managing extended resources
+
+#### Node-level extended resources
+
+Node-level extended resources are tied to nodes.
+
+##### Device plugin managed resources
+See [Device
+Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
+for how to advertise device plugin managed resources on each node.
+
+##### Other resources
+To advertise a new node-level extended resource, the cluster operator can
+submit a `PATCH` HTTP request to the API server to specify the available
+quantity in the `status.capacity` for a node in the cluster. After this
+operation, the node's `status.capacity` will include a new resource. The
+`status.allocatable` field is updated automatically with the new resource
+asynchronously by the kubelet. Note that because the scheduler uses the node
+`status.allocatable` value when evaluating Pod fitness, there may be a short
+delay between patching the node capacity with a new resource and the time when
+the first Pod that requests the resource can be scheduled on that node.
+
+**Example:**
+
+Here is an example showing how to use `curl` to form an HTTP request that
+advertises five "example.com/foo" resources on node `k8s-node-1` whose master
+is `k8s-master`.
+
+```shell
+curl --header "Content-Type: application/json-patch+json" \
+--request PATCH \
+--data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \
+http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
+```
+
+{{< note >}}
+In the preceding request, `~1` is the encoding for the character `/`
+in the patch path. The operation path value in JSON-Patch is interpreted as a
+JSON-Pointer. For more details, see
+[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
+{{< /note >}}
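+
+That escaping can be reversed mechanically: per RFC 6901, `~1` decodes to `/`
+and then `~0` decodes to `~`, in that order. A quick illustration using `sed`
+(the pipeline itself is just a sketch of the decoding rule):
+
+```shell
+# Decode a JSON-Pointer path: first '~1' -> '/', then '~0' -> '~'.
+echo '/status/capacity/example.com~1foo' | sed -e 's|~1|/|g' -e 's|~0|~|g'
+```
+
+This prints `/status/capacity/example.com/foo`, the path the patch operates on.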
+
+#### Cluster-level extended resources
+
+Cluster-level extended resources are not tied to nodes. They are usually managed
+by scheduler extenders, which handle the resource consumption and resource quota.
+
+You can specify the extended resources that are handled by scheduler extenders
+in [scheduler policy
+configuration](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31).
+
+**Example:**
+
+The following configuration for a scheduler policy indicates that the
+cluster-level extended resource "example.com/foo" is handled by the scheduler
+extender.
+
+- The scheduler sends a Pod to the scheduler extender only if the Pod requests
+ "example.com/foo".
+- The `ignoredByScheduler` field specifies that the scheduler does not check
+ the "example.com/foo" resource in its `PodFitsResources` predicate.
+
+```json
+{
+ "kind": "Policy",
+ "apiVersion": "v1",
+ "extenders": [
+ {
+      "urlPrefix":"<extender_endpoint>",
+ "bindVerb": "bind",
+ "managedResources": [
+ {
+ "name": "example.com/foo",
+ "ignoredByScheduler": true
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Consuming extended resources
+
+Users can consume extended resources in Pod specs just like CPU and memory.
+The scheduler takes care of the resource accounting so that no more than the
+available amount is simultaneously allocated to Pods.
+
+The API server restricts quantities of extended resources to whole numbers.
+Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of
+_invalid_ quantities are `0.5` and `1500m`.
+
+{{< note >}}
+Extended resources replace Opaque Integer Resources.
+Users can use any domain name prefix other than `kubernetes.io` which is reserved.
+{{< /note >}}
+
+To consume an extended resource in a Pod, include the resource name as a key
+in the `spec.containers[].resources.limits` map in the container spec.
+
+{{< note >}}
+Extended resources cannot be overcommitted, so request and limit
+must be equal if both are present in a container spec.
+{{< /note >}}
+
+A Pod is scheduled only if all of the resource requests are satisfied, including
+CPU, memory and any extended resources. The Pod remains in the `PENDING` state
+as long as the resource request cannot be satisfied.
+
+**Example:**
+
+The Pod below requests 2 CPUs and 1 "example.com/foo" (an extended resource).
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: my-pod
+spec:
+ containers:
+ - name: my-container
+ image: myimage
+ resources:
+ requests:
+ cpu: 2
+ example.com/foo: 1
+ limits:
+ example.com/foo: 1
+```
+
+
+
+{{% /capture %}}
+
+
+{{% capture whatsnext %}}
+
+* Get hands-on experience [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
+
+* Get hands-on experience [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
+
+* [Container API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
+
+* [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core)
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/configuration/secret.md b/content/uk/docs/concepts/configuration/secret.md
new file mode 100644
index 0000000000000..62616506921b6
--- /dev/null
+++ b/content/uk/docs/concepts/configuration/secret.md
@@ -0,0 +1,1054 @@
+---
+reviewers:
+- mikedanese
+title: Secrets
+content_template: templates/concept
+feature:
+ title: Управління секретами та конфігурацією
+ description: >
+ Розгортайте та оновлюйте секрети та конфігурацію застосунку без перезбирання образів, не розкриваючи секрети в конфігурацію стека.
+weight: 50
+---
+
+
+{{% capture overview %}}
+
+Kubernetes `secret` objects let you store and manage sensitive information, such
+as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
+is safer and more flexible than putting it verbatim in a
+{{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Overview of Secrets
+
+A Secret is an object that contains a small amount of sensitive data such as
+a password, a token, or a key. Such information might otherwise be put in a
+Pod specification or in an image; putting it in a Secret object allows for
+more control over how it is used, and reduces the risk of accidental exposure.
+
+Users can create secrets, and the system also creates some secrets.
+
+To use a secret, a pod needs to reference the secret.
+A secret can be used with a pod in two ways: as files in a
+{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of
+its containers, or used by kubelet when pulling images for the pod.
+
+### Built-in Secrets
+
+#### Service Accounts Automatically Create and Attach Secrets with API Credentials
+
+Kubernetes automatically creates secrets which contain credentials for
+accessing the API and it automatically modifies your pods to use this type of
+secret.
+
+The automatic creation and use of API credentials can be disabled or overridden
+if desired. However, if all you need to do is securely access the apiserver,
+this is the recommended workflow.
+
+See the [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) documentation for more
+information on how Service Accounts work.
+
+### Creating your own Secrets
+
+#### Creating a Secret Using kubectl create secret
+
+Say that some pods need to access a database. The
+username and password that the pods should use is in the files
+`./username.txt` and `./password.txt` on your local machine.
+
+```shell
+# Create files needed for rest of example.
+echo -n 'admin' > ./username.txt
+echo -n '1f2d1e2e67df' > ./password.txt
+```
+
+The `kubectl create secret` command
+packages these files into a Secret and creates
+the object on the Apiserver.
+
+```shell
+kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
+```
+```
+secret "db-user-pass" created
+```
+{{< note >}}
+Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way:
+
+```
+kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb'
+```
+
+ You do not need to escape special characters in passwords from files (`--from-file`).
+{{< /note >}}
+
+You can check that the secret was created like this:
+
+```shell
+kubectl get secrets
+```
+```
+NAME TYPE DATA AGE
+db-user-pass Opaque 2 51s
+```
+```shell
+kubectl describe secrets/db-user-pass
+```
+```
+Name: db-user-pass
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Type: Opaque
+
+Data
+====
+password.txt: 12 bytes
+username.txt: 5 bytes
+```
+
+{{< note >}}
+`kubectl get` and `kubectl describe` avoid showing the contents of a secret by
+default.
+This is to protect the secret from being exposed accidentally to an onlooker,
+or from being stored in a terminal log.
+{{< /note >}}
+
+See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret.
+
+#### Creating a Secret Manually
+
+You can also create a Secret in a file first, in json or yaml format,
+and then create that object. The
+[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) contains two maps:
+data and stringData. The data field is used to store arbitrary data, encoded using
+base64. The stringData field is provided for convenience, and allows you to provide
+secret data as unencoded strings.
+
+For example, to store two strings in a Secret using the data field, convert
+them to base64 as follows:
+
+```shell
+echo -n 'admin' | base64
+YWRtaW4=
+echo -n '1f2d1e2e67df' | base64
+MWYyZDFlMmU2N2Rm
+```
+
+Write a Secret that looks like this:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: mysecret
+type: Opaque
+data:
+ username: YWRtaW4=
+ password: MWYyZDFlMmU2N2Rm
+```
+
+Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply):
+
+```shell
+kubectl apply -f ./secret.yaml
+```
+```
+secret "mysecret" created
+```
+
+For certain scenarios, you may wish to use the stringData field instead. This
+field allows you to put a non-base64 encoded string directly into the Secret,
+and the string will be encoded for you when the Secret is created or updated.
+
+A practical example of this might be where you are deploying an application
+that uses a Secret to store a configuration file, and you want to populate
+parts of that configuration file during your deployment process.
+
+If your application uses the following configuration file:
+
+```yaml
+apiUrl: "https://my.api.com/api/v1"
+username: "user"
+password: "password"
+```
+
+You could store this in a Secret using the following:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: mysecret
+type: Opaque
+stringData:
+ config.yaml: |-
+ apiUrl: "https://my.api.com/api/v1"
+ username: {{username}}
+ password: {{password}}
+```
+
+Your deployment tool could then replace the `{{username}}` and `{{password}}`
+template variables before running `kubectl apply`.
+
+stringData is a write-only convenience field. It is never output when
+retrieving Secrets. For example, if you run the following command:
+
+```shell
+kubectl get secret mysecret -o yaml
+```
+
+The output will be similar to:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ creationTimestamp: 2018-11-15T20:40:59Z
+ name: mysecret
+ namespace: default
+ resourceVersion: "7225"
+ uid: c280ad2e-e916-11e8-98f2-025000000001
+type: Opaque
+data:
+ config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19
+```
+
+If a field is specified in both data and stringData, the value from stringData
+is used. For example, the following Secret definition:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: mysecret
+type: Opaque
+data:
+ username: YWRtaW4=
+stringData:
+ username: administrator
+```
+
+Results in the following secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ creationTimestamp: 2018-11-15T20:46:46Z
+ name: mysecret
+ namespace: default
+ resourceVersion: "7579"
+ uid: 91460ecb-e917-11e8-98f2-025000000001
+type: Opaque
+data:
+ username: YWRtaW5pc3RyYXRvcg==
+```
+
+Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
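+
+You can confirm that decoding from the shell:
+
+```shell
+echo 'YWRtaW5pc3RyYXRvcg==' | base64 --decode
+```
+
+This prints `administrator`.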
+
+The keys of data and stringData must consist of alphanumeric characters,
+'-', '_' or '.'.
+
+**Encoding Note:** The serialized JSON and YAML values of secret data are
+encoded as base64 strings. Newlines are not valid within these strings and must
+be omitted. When using the `base64` utility on Darwin/macOS, users should avoid
+using the `-b` option to split long lines. Conversely, Linux users *should* add
+the option `-w 0` to `base64` commands, or use the pipeline `base64 | tr -d '\n'`
+if the `-w` option is not available.
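+
+The `tr -d '\n'` pipeline is a portable way to guarantee a single-line value,
+whatever the local `base64` wrapping behavior. For the short value used earlier:
+
+```shell
+# Encode and strip any wrapping newlines the base64 utility may insert.
+echo -n 'admin' | base64 | tr -d '\n'
+```
+
+This prints `YWRtaW4=` with no embedded newlines.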
+
+#### Creating a Secret from Generator
+Kubectl supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/)
+since 1.14. With this new feature,
+you can also create a Secret from generators and then apply it to create the object on
+the Apiserver. The generators
+should be specified in a `kustomization.yaml` inside a directory.
+
+For example, to generate a Secret from files `./username.txt` and `./password.txt`
+```shell
+# Create a kustomization.yaml file with SecretGenerator
+cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: db-user-pass
+ files:
+ - username.txt
+ - password.txt
+EOF
+```
+Apply the kustomization directory to create the Secret object.
+```shell
+$ kubectl apply -k .
+secret/db-user-pass-96mffmfh4k created
+```
+
+You can check that the secret was created like this:
+
+```shell
+$ kubectl get secrets
+NAME TYPE DATA AGE
+db-user-pass-96mffmfh4k Opaque 2 51s
+
+$ kubectl describe secrets/db-user-pass-96mffmfh4k
+Name:         db-user-pass-96mffmfh4k
+Namespace:    default
+Labels:       <none>
+Annotations:  <none>
+
+Type: Opaque
+
+Data
+====
+password.txt: 12 bytes
+username.txt: 5 bytes
+```
+
+For example, to generate a Secret from literals `username=admin` and `password=secret`,
+you can specify the secret generator in `kustomization.yaml` as
+```shell
+# Create a kustomization.yaml file with SecretGenerator
+$ cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: db-user-pass
+ literals:
+ - username=admin
+ - password=secret
+EOF
+```
+Apply the kustomization directory to create the Secret object.
+```shell
+$ kubectl apply -k .
+secret/db-user-pass-dddghtt9b5 created
+```
+{{< note >}}
+The generated Secret's name has a suffix appended by hashing the contents. This ensures that a new
+Secret is generated each time the contents are modified.
+{{< /note >}}
+
+#### Decoding a Secret
+
+Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section:
+
+```shell
+kubectl get secret mysecret -o yaml
+```
+```
+apiVersion: v1
+kind: Secret
+metadata:
+ creationTimestamp: 2016-01-22T18:41:56Z
+ name: mysecret
+ namespace: default
+ resourceVersion: "164619"
+ uid: cfee02d6-c137-11e5-8d73-42010af00002
+type: Opaque
+data:
+ username: YWRtaW4=
+ password: MWYyZDFlMmU2N2Rm
+```
+
+Decode the password field:
+
+```shell
+echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
+```
+```
+1f2d1e2e67df
+```
+
+#### Editing a Secret
+
+An existing secret may be edited with the following command:
+
+```shell
+kubectl edit secrets mysecret
+```
+
+This will open the default configured editor and allow for updating the base64 encoded secret values in the `data` field:
+
+```
+# Please edit the object below. Lines beginning with a '#' will be ignored,
+# and an empty file will abort the edit. If an error occurs while saving this file will be
+# reopened with the relevant failures.
+#
+apiVersion: v1
+data:
+ username: YWRtaW4=
+ password: MWYyZDFlMmU2N2Rm
+kind: Secret
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: { ... }
+ creationTimestamp: 2016-01-22T18:41:56Z
+ name: mysecret
+ namespace: default
+ resourceVersion: "164619"
+ uid: cfee02d6-c137-11e5-8d73-42010af00002
+type: Opaque
+```
+
+## Using Secrets
+
+Secrets can be mounted as data volumes or be exposed as
+{{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}}
+to be used by a container in a pod. They can also be used by other parts of the
+system, without being directly exposed to the pod. For example, they can hold
+credentials that other parts of the system should use to interact with external
+systems on your behalf.
+
+### Using Secrets as Files from a Pod
+
+To consume a Secret in a volume in a Pod:
+
+1. Create a secret or use an existing one. Multiple pods can reference the same secret.
+1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name the volume anything, and have a `.spec.volumes[].secret.secretName` field equal to the name of the secret object.
+1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `.spec.containers[].volumeMounts[].readOnly = true` and `.spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear.
+1. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`.
+
+This is an example of a pod that mounts a secret in a volume:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ readOnly: true
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+```
+
+Each secret you want to use needs to be referred to in `.spec.volumes`.
+
+If there are multiple containers in the pod, then each container needs its
+own `volumeMounts` block, but only one `.spec.volumes` is needed per secret.
+
+You can package many files into one secret, or use many secrets, whichever is convenient.
+
+**Projection of secret keys to specific paths**
+
+You can also control the paths within the volume where Secret keys are projected.
+Use the `.spec.volumes[].secret.items` field to change the target path of each key:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ readOnly: true
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+ items:
+ - key: username
+ path: my-group/my-username
+```
+
+What will happen:
+
+* `username` secret is stored under `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`.
+* `password` secret is not projected
+
+If `.spec.volumes[].secret.items` is used, only keys specified in `items` are projected.
+To consume all keys from the secret, all of them must be listed in the `items` field.
+All listed keys must exist in the corresponding secret. Otherwise, the volume is not created.
+
+**Secret files permissions**
+
+You can also specify the permission mode bits for the files that are part of a
+secret. If you don't specify any, `0644` is used by default. You can specify a
+default mode for the whole secret volume and override it per key if needed.
+
+For example, you can specify a default mode like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+ defaultMode: 256
+```
+
+Then, the secret will be mounted on `/etc/foo` and all the files created by the
+secret volume mount will have permission `0400`.
+
+Note that the JSON spec doesn't support octal notation, so use the value 256 for
+0400 permissions. If you use yaml instead of json for the pod, you can use octal
+notation to specify permissions in a more natural way.
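+
+The decimal/octal correspondence is easy to check with the shell's `printf`:
+
+```shell
+printf '%o\n' 256   # decimal 256, rendered in octal: 400
+printf '%d\n' 0400  # octal 0400, rendered in decimal: 256
+```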
+
+You can also use mapping, as in the previous example, and specify different
+permissions for different files like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+ items:
+ - key: username
+ path: my-group/my-username
+ mode: 511
+```
+
+In this case, the resulting file `/etc/foo/my-group/my-username` will have a
+permission value of `0777`. Owing to JSON limitations, you must specify the mode
+in decimal notation.
+
+Note that this permission value might be displayed in decimal notation if you
+read it later.
+
+**Consuming Secret Values from Volumes**
+
+Inside the container that mounts a secret volume, the secret keys appear as
+files and the secret values are base-64 decoded and stored inside these files.
+This is the result of commands
+executed inside the container from the example above:
+
+```shell
+ls /etc/foo/
+```
+```
+username
+password
+```
+
+```shell
+cat /etc/foo/username
+```
+```
+admin
+```
+
+
+```shell
+cat /etc/foo/password
+```
+```
+1f2d1e2e67df
+```
+
+The program in a container is responsible for reading the secrets from the
+files.
+
+**Mounted Secrets are updated automatically**
+
+When a secret that is already consumed in a volume is updated, projected keys are eventually updated as well.
+The kubelet checks whether the mounted secret is fresh on every periodic sync.
+However, it uses its local cache for getting the current value of the Secret.
+The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in the
+[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
+A Secret can be either propagated via watch (the default), ttl-based, or fetched by redirecting
+all requests directly to the kube-apiserver.
+As a result, the total delay from the moment when the Secret is updated to the moment
+when new keys are projected to the Pod can be as long as the kubelet sync period + cache
+propagation delay, where the cache propagation delay depends on the chosen cache type
+(it equals the watch propagation delay, the ttl of the cache, or zero, correspondingly).
+
+{{< note >}}
+A container using a Secret as a
+[subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive
+Secret updates.
+{{< /note >}}
+
+### Using Secrets as Environment Variables
+
+To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
+in a pod:
+
+1. Create a secret or use an existing one. Multiple pods can reference the same secret.
+1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`.
+1. Modify your image and/or command line so that the program looks for values in the specified environment variables
+
+This is an example of a pod that uses secrets from environment variables:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-env-pod
+spec:
+ containers:
+ - name: mycontainer
+ image: redis
+ env:
+ - name: SECRET_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: mysecret
+ key: username
+ - name: SECRET_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: mysecret
+ key: password
+ restartPolicy: Never
+```
+
+**Consuming Secret Values from Environment Variables**
+
+Inside a container that consumes a secret in environment variables, the secret keys appear as
+normal environment variables containing the base64-decoded values of the secret data.
+This is the result of commands executed inside the container from the example above:
+
+```shell
+echo $SECRET_USERNAME
+```
+```
+admin
+```
+```shell
+echo $SECRET_PASSWORD
+```
+```
+1f2d1e2e67df
+```
+
+### Using imagePullSecrets
+
+An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry
+password to the Kubelet so it can pull a private image on behalf of your Pod.
+
+**Manually specifying an imagePullSecret**
+
+Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)
+
+### Arranging for imagePullSecrets to be Automatically Attached
+
+You can manually create an imagePullSecret, and reference it from
+a serviceAccount. Any pods created with that serviceAccount,
+or that default to use that serviceAccount, will get their imagePullSecrets
+field set to that of the service account.
+See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)
+ for a detailed explanation of that process.
+
+### Automatic Mounting of Manually Created Secrets
+
+Manually created secrets (e.g. one containing a token for accessing a GitHub account)
+can be automatically attached to pods based on their service account.
+See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data-application/podpreset/) for a detailed explanation of that process.
+
+## Details
+
+### Restrictions
+
+Secret volume sources are validated to ensure that the specified object
+reference actually points to an object of type `Secret`. Therefore, a secret
+needs to be created before any pods that depend on it.
+
+Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
+They can only be referenced by pods in that same namespace.
+
+Individual secrets are limited to 1MiB in size. This is to discourage creation
+of very large secrets which would exhaust apiserver and kubelet memory.
+However, creation of many smaller secrets could also exhaust memory. More
+comprehensive limits on memory usage due to secrets are a planned feature.
+
+Kubelet only supports the use of secrets for Pods it gets from the API server.
+This includes any pods created using kubectl, or indirectly via a replication
+controller. It does not include pods created via the kubelet's
+`--manifest-url` flag, its `--config` flag, or its REST API (these are
+not common ways to create pods).
+
+Secrets must be created before they are consumed in pods as environment
+variables unless they are marked as optional. References to Secrets that do
+not exist will prevent the pod from starting.
+
+References via `secretKeyRef` to keys that do not exist in a named Secret
+will prevent the pod from starting.
+
+Secrets used to populate environment variables via `envFrom` that have keys
+that are considered invalid environment variable names will have those keys
+skipped. The pod will be allowed to start. There will be an event whose
+reason is `InvalidVariableNames` and the message will contain the list of
+invalid keys that were skipped. The example shows a pod which refers to the
+secret `default/mysecret`, which contains 2 invalid keys: `1badkey` and `2alsobad`.
+
+```shell
+kubectl get events
+```
+```
+LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON
+0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
+```
+
+### Secret and Pod Lifetime interaction
+
+When a pod is created via the API, there is no check whether a referenced
+secret exists. Once a pod is scheduled, the kubelet will try to fetch the
+secret value. If the secret cannot be fetched because it does not exist or
+because of a temporary lack of connection to the API server, kubelet will
+periodically retry. It will report an event about the pod explaining the
+reason it is not started yet. Once the secret is fetched, the kubelet will
+create and mount a volume containing it. None of the pod's containers will
+start until all the pod's volumes are mounted.
+
+## Use cases
+
+### Use-Case: Pod with ssh keys
+
+Create a secret containing some ssh keys:
+
+```shell
+kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
+```
+
+```
+secret "ssh-key-secret" created
+```
+
+{{< caution >}}
+Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Instead, use a service account that you make accessible to all the users with whom you share the Kubernetes cluster, and that you can revoke if the keys are compromised.
+{{< /caution >}}
+
+
+Now we can create a pod which references the secret with the ssh key and
+consumes it in a volume:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-test-pod
+ labels:
+ name: secret-test
+spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: ssh-key-secret
+ containers:
+ - name: ssh-test-container
+ image: mySshImage
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
+```
+
+When the container's command runs, the pieces of the key will be available in:
+
+```shell
+/etc/secret-volume/ssh-publickey
+/etc/secret-volume/ssh-privatekey
+```
+
+The container is then free to use the secret data to establish an ssh connection.
+
+### Use-Case: Pods with prod / test credentials
+
+This example illustrates a pod which consumes a secret containing prod
+credentials and another pod which consumes a secret with test environment
+credentials.
+
+Create the secrets:
+
+```shell
+kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
+```
+```
+secret "prod-db-secret" created
+```
+
+```shell
+kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
+```
+```
+secret "test-db-secret" created
+```
+{{< note >}}
+Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way:
+
+```
+kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb'
+```
+
+ You do not need to escape special characters in passwords from files (`--from-file`).
+{{< /note >}}
+
+Now make the pods:
+
+```shell
+$ cat <<EOF > pod.yaml
+apiVersion: v1
+kind: List
+items:
+- kind: Pod
+ apiVersion: v1
+ metadata:
+ name: prod-db-client-pod
+ labels:
+ name: prod-db-client
+ spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: prod-db-secret
+ containers:
+ - name: db-client-container
+ image: myClientImage
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
+- kind: Pod
+ apiVersion: v1
+ metadata:
+ name: test-db-client-pod
+ labels:
+ name: test-db-client
+ spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: test-db-secret
+ containers:
+ - name: db-client-container
+ image: myClientImage
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
+EOF
+```
+
+Add the pods to a kustomization.yaml:
+```shell
+$ cat <<EOF >> kustomization.yaml
+resources:
+- pod.yaml
+EOF
+```
+
+Apply all these objects on the Apiserver:
+
+```shell
+kubectl apply -k .
+```
+
+Both containers will have the following files present on their filesystems with the values for each container's environment:
+
+```shell
+/etc/secret-volume/username
+/etc/secret-volume/password
+```
+
+Note how the specs for the two pods differ only in one field; this facilitates
+creating pods with different capabilities from a common pod config template.
+
+You could further simplify the base pod specification by using two Service Accounts:
+one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
+`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: prod-db-client-pod
+ labels:
+ name: prod-db-client
+spec:
+ serviceAccount: prod-db-client
+ containers:
+ - name: db-client-container
+ image: myClientImage
+```
+
+### Use-case: Dotfiles in secret volume
+
+To make a piece of data 'hidden' (that is, stored in a file whose name begins with a dot character), simply
+make that key begin with a dot. For example, when the following secret is mounted into a volume:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: dotfile-secret
+data:
+ .secret-file: dmFsdWUtMg0KDQo=
+---
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-dotfiles-pod
+spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: dotfile-secret
+ containers:
+ - name: dotfile-test-container
+ image: k8s.gcr.io/busybox
+ command:
+ - ls
+ - "-l"
+ - "/etc/secret-volume"
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
+```
+
+
+The `secret-volume` will contain a single file, called `.secret-file`, and
+the `dotfile-test-container` will have this file present at the path
+`/etc/secret-volume/.secret-file`.
+
+{{< note >}}
+Files beginning with dot characters are hidden from the output of `ls -l`;
+you must use `ls -la` to see them when listing directory contents.
+{{< /note >}}
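+
+To check locally what a base64 `data` value decodes to, decoding it on the command line is enough. For example, the value above (note that it carries trailing CRLF characters):
+
+```shell
+echo -n 'dmFsdWUtMg0KDQo=' | base64 --decode
+# value-2, followed by trailing CRLF bytes
+```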
+
+### Use-case: Secret visible to one container in a pod
+
+Consider a program that needs to handle HTTP requests, do some complex business
+logic, and then sign some messages with an HMAC. Because it has complex
+application logic, there might be an unnoticed remote file reading exploit in
+the server, which could expose the private key to an attacker.
+
+This could be divided into two processes in two containers: a frontend container
+which handles user interaction and business logic, but which cannot see the
+private key; and a signer container that can see the private key, and responds
+to simple signing requests from the frontend (e.g. over localhost networking).
+
+With this partitioned approach, an attacker now has to trick the application
+server into doing something rather arbitrary, which may be harder than getting
+it to read a file.
+
+
+
+## Best practices
+
+### Clients that use the secrets API
+
+When deploying applications that interact with the secrets API, access should be
+limited using [authorization policies](
+/docs/reference/access-authn-authz/authorization/) such as [RBAC](
+/docs/reference/access-authn-authz/rbac/).
+
+Secrets often hold values that span a spectrum of importance, many of which can
+cause escalations within Kubernetes (e.g. service account tokens) and to
+external systems. Even if an individual app can reason about the power of the
+secrets it expects to interact with, other apps within the same namespace can
+render those assumptions invalid.
+
+For these reasons `watch` and `list` requests for secrets within a namespace are
+extremely powerful capabilities and should be avoided, since listing secrets allows
+the clients to inspect the values of all secrets that are in that namespace. The ability to
+`watch` and `list` all secrets in a cluster should be reserved for only the most
+privileged, system-level components.
+
+Applications that need to access the secrets API should perform `get` requests on
+the secrets they need. This lets administrators restrict access to all secrets
+while [white-listing access to individual instances](
+/docs/reference/access-authn-authz/rbac/#referring-to-resources) that
+the app needs.
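+
+As a sketch, a Role of roughly this shape (the Role name and namespace here are illustrative) lets an app `get` one named secret without being able to `list` or `watch` anything:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: prod-db-secret-reader
+  namespace: default
+rules:
+- apiGroups: [""]
+  resources: ["secrets"]
+  # restrict the rule to a single named instance
+  resourceNames: ["prod-db-secret"]
+  verbs: ["get"]
+```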
+
+For improved performance over a looping `get`, clients can design resources that
+reference a secret then `watch` the resource, re-requesting the secret when the
+reference changes. Additionally, a ["bulk watch" API](
+https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md)
+to let clients `watch` individual resources has also been proposed, and will likely
+be available in future releases of Kubernetes.
+
+## Security Properties
+
+
+### Protections
+
+Because `secret` objects can be created independently of the `pods` that use
+them, there is less risk of the secret being exposed during the workflow of
+creating, viewing, and editing pods. The system can also take additional
+precautions with `secret` objects, such as avoiding writing them to disk where
+possible.
+
+A secret is only sent to a node if a pod on that node requires it.
+Kubelet stores the secret into a `tmpfs` so that the secret is not written
+to disk storage. Once the Pod that depends on the secret is deleted, kubelet
+will delete its local copy of the secret data as well.
+
+There may be secrets for several pods on the same node. However, only the
+secrets that a pod requests are potentially visible within its containers.
+Therefore, one Pod does not have access to the secrets of another Pod.
+
+There may be several containers in a pod. However, each container in a pod has
+to request the secret volume in its `volumeMounts` for it to be visible within
+the container. This can be used to construct useful [security partitions at the
+Pod level](#use-case-secret-visible-to-one-container-in-a-pod).
+
+On most Kubernetes-project-maintained distributions, communication from users
+to the apiserver, and from the apiserver to the kubelets, is protected by SSL/TLS.
+Secrets are protected when transmitted over these channels.
+
+{{< feature-state for_k8s_version="v1.13" state="beta" >}}
+
+You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
+for secret data, so that the secrets are not stored in the clear into {{< glossary_tooltip term_id="etcd" >}}.
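+
+A minimal encryption configuration, sketched from the task page linked above (the key name and placeholder secret are illustrative), looks roughly like this and is passed to the kube-apiserver via `--encryption-provider-config`:
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <base64-encoded 32-byte key>
+      # fall back to reading secrets written before encryption was enabled
+      - identity: {}
+```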
+
+### Risks
+
+ - In the API server secret data is stored in {{< glossary_tooltip term_id="etcd" >}};
+ therefore:
+ - Administrators should enable encryption at rest for cluster data (requires v1.13 or later)
+ - Administrators should limit access to etcd to admin users
+ - Administrators may want to wipe/shred disks used by etcd when no longer in use
+ - If running etcd in a cluster, administrators should make sure to use SSL/TLS
+ for etcd peer-to-peer communication.
+ - If you configure the secret through a manifest (JSON or YAML) file which has
+ the secret data encoded as base64, sharing this file or checking it in to a
+ source repository means the secret is compromised. Base64 encoding is _not_ an
+ encryption method and is considered the same as plain text.
+ - Applications still need to protect the value of secret after reading it from the volume,
+ such as not accidentally logging it or transmitting it to an untrusted party.
+ - A user who can create a pod that uses a secret can also see the value of that secret. Even
+ if apiserver policy does not allow that user to read the secret object, the user could
+ run a pod which exposes the secret.
+ - Currently, anyone with root on any node can read _any_ secret from the apiserver,
+ by impersonating the kubelet. It is a planned feature to only send secrets to
+ nodes that actually require them, to restrict the impact of a root exploit on a
+ single node.
+
+
+{{% capture whatsnext %}}
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/overview/_index.md b/content/uk/docs/concepts/overview/_index.md
new file mode 100644
index 0000000000000..efffaf0892adf
--- /dev/null
+++ b/content/uk/docs/concepts/overview/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Огляд"
+weight: 20
+---
diff --git a/content/uk/docs/concepts/overview/what-is-kubernetes.md b/content/uk/docs/concepts/overview/what-is-kubernetes.md
new file mode 100644
index 0000000000000..7d484d3bd6bb1
--- /dev/null
+++ b/content/uk/docs/concepts/overview/what-is-kubernetes.md
@@ -0,0 +1,185 @@
+---
+reviewers:
+- bgrant0607
+- mikedanese
+title: Що таке Kubernetes?
+content_template: templates/concept
+weight: 10
+card:
+ name: concepts
+ weight: 10
+---
+
+{{% capture overview %}}
+
+Ця сторінка являє собою узагальнений огляд Kubernetes.
+{{% /capture %}}
+
+{{% capture body %}}
+
+Kubernetes - це платформа з відкритим вихідним кодом для управління контейнеризованими робочими навантаженнями та супутніми службами. Її основні характеристики - кросплатформенність, розширюваність, успішне використання декларативної конфігурації та автоматизації. Вона має гігантську, швидкопрогресуючу екосистему.
+
+
+Назва Kubernetes походить з грецької та означає керманич або пілот. Google відкрив доступ до вихідного коду проекту Kubernetes у 2014 році. Kubernetes побудовано [на базі п'ятнадцятирічного досвіду, що Google отримав, оперуючи масштабними робочими навантаженнями](https://ai.google/research/pubs/pub43438), разом з найкращими у своєму класі ідеями та практиками, які може запропонувати спільнота.
+
+
+## Озираючись на першопричини
+
+
+Давайте повернемось назад у часі та дізнаємось, завдяки чому Kubernetes став таким корисним.
+
+![Еволюція розгортання](/images/docs/Container_Evolution.svg)
+
+
+**Ера традиційного розгортання:** На початку організації запускали застосунки на фізичних серверах. Оскільки в такий спосіб не було можливості задати обмеження використання ресурсів, це спричиняло проблеми виділення та розподілення ресурсів на фізичних серверах. Наприклад: якщо багато застосунків було запущено на фізичному сервері, могли траплятись випадки, коли один застосунок забирав собі найбільше ресурсів, внаслідок чого інші програми просто не справлялись з обов'язками. Рішенням може бути запуск кожного застосунку на окремому фізичному сервері. Але такий підхід погано масштабується, оскільки ресурси не повністю використовуються; на додачу, це дорого, оскільки організаціям потрібно опікуватись багатьма фізичними серверами.
+
+
+**Ера віртуалізованого розгортання:** Як рішення - була представлена віртуалізація. Вона дозволяє запускати численні віртуальні машини (Virtual Machines або VMs) на одному фізичному ЦПУ сервера. Віртуалізація дозволила застосункам бути ізольованими у межах віртуальних машин та забезпечувала безпеку, оскільки інформація застосунку на одній VM не була доступна застосунку на іншій VM.
+
+
+Віртуалізація забезпечує краще використання ресурсів на фізичному сервері та кращу масштабованість, оскільки дозволяє легко додавати та оновлювати застосунки, зменшує витрати на фізичне обладнання тощо. З віртуалізацією ви можете представити ресурси у вигляді одноразових віртуальних машин.
+
+
+Кожна VM є повноцінною машиною з усіма компонентами, включно з власною операційною системою, що запущені поверх віртуалізованого апаратного забезпечення.
+
+
+**Ера розгортання контейнерів:** Контейнери схожі на VM, але мають спрощений варіант ізоляції і використовують спільну операційну систему для усіх застосунків. Саме тому контейнери вважаються легковаговими. Подібно до VM, контейнер має власну файлову систему, ЦПУ, пам'ять, простір процесів тощо. Оскільки контейнери вивільнені від підпорядкованої інфраструктури, їх можна легко переміщати між хмарними провайдерами чи дистрибутивами операційних систем.
+
+Контейнери стали популярними, бо надавали додаткові переваги, такі як:
+
+
+
+* Створення та розгортання застосунків за методологією Agile: спрощене та більш ефективне створення образів контейнерів у порівнянні до використання образів віртуальних машин.
+* Безперервна розробка, інтеграція та розгортання: забезпечення надійних та безперервних збирань образів контейнерів, їх швидке розгортання та легкі відкатування (за рахунок незмінності образів).
+* Розподіл відповідальності команд розробки та експлуатації: створення образів контейнерів застосунків під час збирання/релізу на противагу часу розгортання, і як наслідок, вивільнення застосунків із інфраструктури.
+* Спостереження не лише за інформацією та метриками на рівні операційної системи, але й за станом застосунку та іншими сигналами.
+* Однорідність середовища для розробки, тестування та робочого навантаження: запускається так само як на робочому комп'ютері, так і у хмарного провайдера.
+* ОС та хмарна кросплатформність: запускається на Ubuntu, RHEL, CoreOS, у власному дата-центрі, у Google Kubernetes Engine і взагалі будь-де.
+* Керування орієнтоване на застосунки: підвищення рівня абстракції від запуску операційної системи у віртуальному апаратному забезпеченні до запуску застосунку в операційній системі, використовуючи логічні ресурси.
+* Нещільно зв'язані, розподілені, еластичні, вивільнені мікросервіси: застосунки розбиваються на менші, незалежні частини для динамічного розгортання та управління, на відміну від монолітної архітектури, що працює на одній великій виділеній машині.
+* Ізоляція ресурсів: передбачувана продуктивність застосунку.
+* Використання ресурсів: висока ефективність та щільність.
+
+
+## Чому вам потрібен Kubernetes і що він може робити
+
+
+Контейнери - це прекрасний спосіб упакувати та запустити ваші застосунки. У прод оточенні вам потрібно керувати контейнерами, в яких працюють застосунки, і стежити, щоб не було простою. Наприклад, якщо один контейнер припиняє роботу, інший має бути запущений йому на заміну. Чи не легше було б, якби цим керувала сама система?
+
+
+Ось де Kubernetes приходить на допомогу! Kubernetes надає вам каркас для еластичного запуску розподілених систем. Він опікується масштабуванням та аварійним відновленням вашого застосунку, пропонує шаблони розгортань тощо. Наприклад, Kubernetes дозволяє легко створювати розгортання за стратегією canary у вашій системі.
+
+
+Kubernetes надає вам:
+
+
+
+* **Виявлення сервісів та балансування навантаження**
+Kubernetes може надавати доступ до контейнера, використовуючи DNS-ім'я або його власну IP-адресу. Якщо контейнер зазнає завеликого мережевого навантаження, Kubernetes здатний збалансувати та розподілити його таким чином, щоб якість обслуговування залишалась стабільною.
+* **Оркестрація сховища інформації**
+Kubernetes дозволяє вам автоматично монтувати системи збереження інформації на ваш вибір: локальні сховища, рішення від хмарних провайдерів тощо.
+* **Автоматичне розгортання та відкатування**
+За допомогою Kubernetes ви можете описати бажаний стан контейнерів, що розгортаються, і він регульовано простежить за виконанням цього стану. Наприклад, ви можете автоматизувати в Kubernetes процеси створення нових контейнерів для розгортання, видалення існуючих контейнерів і передачу їхніх ресурсів на новостворені контейнери.
+* **Автоматичне розміщення задач**
+Ви надаєте Kubernetes кластер для запуску контейнерізованих задач і вказуєте, скільки ресурсів ЦПУ та пам'яті (RAM) необхідно для роботи кожного контейнера. Kubernetes розподіляє контейнери по вузлах кластера для максимально ефективного використання ресурсів.
+* **Самозцілення**
+Kubernetes перезапускає контейнери, що відмовили; заміняє контейнери; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності.
+* **Управління секретами та конфігурацією**
+Kubernetes дозволяє вам зберігати чутливу інформацію, таку як паролі, OAuth-токени та SSH-ключі, і керувати нею. Ви можете розгортати та оновлювати секрети і конфігурацію без перезбирання образів ваших контейнерів, не розкриваючи секрети у конфігурації вашого стека.
+
+
+
+## Чим не є Kubernetes
+
+
+Kubernetes не є комплексною системою PaaS (Платформа як послуга) у традиційному розумінні. Оскільки Kubernetes оперує швидше на рівні контейнерів, аніж на рівні апаратного забезпечення, деяка загальнозастосована функціональність і справді є спільною з PaaS, як-от розгортання, масштабування, розподіл навантаження, логування і моніторинг. Водночас Kubernetes не є монолітним, а вищезазначені особливості підключаються і є опціональними. Kubernetes надає будівельні блоки для створення платформ для розробників, але залишає за користувачем право вибору у важливих питаннях.
+
+
+Kubernetes:
+
+
+
+* Не обмежує типи застосунків, що підтримуються. Kubernetes намагається підтримувати найрізноманітніші типи навантажень, включно із застосунками зі станом (stateful) та без стану (stateless), навантаження по обробці даних тощо. Якщо ваш застосунок можна контейнеризувати, він чудово запуститься під Kubernetes.
+* Не розгортає застосунки з вихідного коду та не збирає ваші застосунки. Процеси безперервної інтеграції, доставки та розгортання (CI/CD) визначаються на рівні організації, та в залежності від технічних вимог.
+* Не надає сервіси на рівні застосунків як вбудовані: програмне забезпечення проміжного рівня (наприклад, шина передачі повідомлень), фреймворки обробки даних (наприклад, Spark), бази даних (наприклад, MySQL), кеш, некластерні системи збереження інформації (наприклад, Ceph). Ці компоненти можуть бути запущені у Kubernetes та/або бути доступними для застосунків за допомогою спеціальних механізмів, наприклад [Open Service Broker](https://openservicebrokerapi.org/).
+* Не нав'язує використання інструментів для логування, моніторингу та сповіщень, натомість надає певні інтеграційні рішення як прототипи, та механізми зі збирання та експорту метрик.
+* Не надає та не змушує використовувати якусь конфігураційну мову/систему (як наприклад `Jsonnet`), натомість надає можливість використовувати API, що може бути використаний довільними формами декларативних специфікацій.
+* Не надає і не запроваджує жодних систем машинної конфігурації, підтримки, управління або самозцілення.
+* На додачу, Kubernetes - не просто система оркестрації. Власне кажучи, вона усуває потребу оркестрації як такої. Технічне визначення оркестрації - це запуск визначених процесів: спочатку A, за ним B, потім C. На противагу, Kubernetes складається з певної множини незалежних, складних процесів контролерів, що безперервно опрацьовують стан у напрямку, що заданий бажаною конфігурацією. Неважливо, як ви дістанетесь з пункту A до пункту C. Централізоване управління також не є вимогою. Все це виливається в систему, яку легко використовувати, яка є потужною, надійною, стійкою та здатною до легкого розширення.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Перегляньте [компоненти Kubernetes](/docs/concepts/overview/components/)
+* Готові [розпочати роботу](/docs/setup/)?
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/services-networking/_index.md b/content/uk/docs/concepts/services-networking/_index.md
new file mode 100644
index 0000000000000..634694311a433
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Сервіси, балансування навантаження та мережа"
+weight: 60
+---
diff --git a/content/uk/docs/concepts/services-networking/dual-stack.md b/content/uk/docs/concepts/services-networking/dual-stack.md
new file mode 100644
index 0000000000000..a4e7bf57af6c2
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/dual-stack.md
@@ -0,0 +1,109 @@
+---
+reviewers:
+- lachie83
+- khenidak
+- aramase
+title: IPv4/IPv6 dual-stack
+feature:
+ title: Подвійний стек IPv4/IPv6
+ description: >
+ Призначення IPv4- та IPv6-адрес подам і сервісам.
+
+content_template: templates/concept
+weight: 70
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
+
+ IPv4/IPv6 dual-stack enables the allocation of both IPv4 and IPv6 addresses to {{< glossary_tooltip text="Pods" term_id="pod" >}} and {{< glossary_tooltip text="Services" term_id="service" >}}.
+
+If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Supported Features
+
+Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:
+
+ * Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod)
+ * IPv4 and IPv6 enabled Services (each Service must be for a single address family)
+ * Pod off-cluster egress routing (e.g. the Internet) via both IPv4 and IPv6 interfaces
+
+## Prerequisites
+
+The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:
+
+ * Kubernetes 1.16 or later
+ * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces)
+ * A network plugin that supports dual-stack (such as Kubenet or Calico)
+ * Kube-proxy running in IPVS mode
+
+## Enable IPv4/IPv6 dual-stack
+
+To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the relevant components of your cluster, and set dual-stack cluster network assignments:
+
+ * kube-controller-manager:
+ * `--feature-gates="IPv6DualStack=true"`
+ * `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>` e.g. `--cluster-cidr=10.244.0.0/16,fc00::/24`
+ * `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
+ * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` defaults to /24 for IPv4 and /64 for IPv6
+ * kubelet:
+ * `--feature-gates="IPv6DualStack=true"`
+ * kube-proxy:
+ * `--proxy-mode=ipvs`
+ * `--cluster-cidrs=<IPv4 CIDR>,<IPv6 CIDR>`
+ * `--feature-gates="IPv6DualStack=true"`
+
+{{< caution >}}
+If you specify an IPv6 address block larger than a /24 via `--cluster-cidr` on the command line, that assignment will fail.
+{{< /caution >}}
+
+## Services
+
+If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} with either an IPv4 or an IPv6 address. You can choose the address family for the Service's cluster IP by setting a field, `.spec.ipFamily`, on that Service.
+You can only set this field when creating a new Service. Setting the `.spec.ipFamily` field is optional and should only be used if you plan to enable IPv4 and IPv6 {{< glossary_tooltip text="Services" term_id="service" >}} and {{< glossary_tooltip text="Ingresses" term_id="ingress" >}} on your cluster. The configuration of this field is not a requirement for [egress](#egress-traffic) traffic.
+
+{{< note >}}
+The default address family for your cluster is the address family of the first service cluster IP range configured via the `--service-cluster-ip-range` flag to the kube-controller-manager.
+{{< /note >}}
+
+You can set `.spec.ipFamily` to either:
+
+ * `IPv4`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv4`
+ * `IPv6`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv6`
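+
+As an inline sketch (the Service name and selector here are illustrative), pinning a Service to the IPv6 family is a single extra field in the spec:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  # request a cluster IP from the IPv6 service-cluster-ip-range
+  ipFamily: IPv6
+  selector:
+    app: MyApp
+  ports:
+    - protocol: TCP
+      port: 80
+```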
+
+The following Service specification does not include the `ipFamily` field. Kubernetes will assign an IP address (also known as a "cluster IP") from the first configured `service-cluster-ip-range` to this Service.
+
+{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
+
+The following Service specification includes the `ipFamily` field. Kubernetes will assign an IPv6 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service.
+
+{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}}
+
+For comparison, the following Service specification will be assigned an IPv4 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service.
+
+{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}}
+
+### Type LoadBalancer
+
+On cloud providers which support IPv6 enabled external load balancers, setting the `type` field to `LoadBalancer` in addition to setting the `ipFamily` field to `IPv6` provisions a cloud load balancer for your Service.
+
+## Egress Traffic
+
+The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying {{< glossary_tooltip text="CNI" term_id="cni" >}} provider is able to implement the transport. If you have a Pod that uses non-publicly routable IPv6 and want that Pod to reach off-cluster destinations (e.g. the public Internet), you must set up IP masquerading for the egress traffic and any replies. The [ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent) is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters.
+
+## Known Issues
+
+ * Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr)
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/services-networking/endpoint-slices.md b/content/uk/docs/concepts/services-networking/endpoint-slices.md
new file mode 100644
index 0000000000000..f6e918b13c785
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/endpoint-slices.md
@@ -0,0 +1,188 @@
+---
+reviewers:
+- freehan
+title: EndpointSlices
+feature:
+ title: EndpointSlices
+ description: >
+ Динамічне відстеження мережевих вузлів у кластері Kubernetes.
+
+content_template: templates/concept
+weight: 10
+---
+
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.17" state="beta" >}}
+
+_EndpointSlices_ provide a simple way to track network endpoints within a
+Kubernetes cluster. They offer a more scalable and extensible alternative to
+Endpoints.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## EndpointSlice resources {#endpointslice-resource}
+
+In Kubernetes, an EndpointSlice contains references to a set of network
+endpoints. The EndpointSlice controller automatically creates EndpointSlices
+for a Kubernetes Service when a {{< glossary_tooltip text="selector"
+term_id="selector" >}} is specified. These EndpointSlices will include
+references to any Pods that match the Service selector. EndpointSlices group
+network endpoints together by unique Service and Port combinations.
+
+As an example, here's a sample EndpointSlice resource for the `example`
+Kubernetes Service.
+
+```yaml
+apiVersion: discovery.k8s.io/v1beta1
+kind: EndpointSlice
+metadata:
+ name: example-abc
+ labels:
+ kubernetes.io/service-name: example
+addressType: IPv4
+ports:
+ - name: http
+ protocol: TCP
+ port: 80
+endpoints:
+ - addresses:
+ - "10.1.2.3"
+ conditions:
+ ready: true
+ hostname: pod-1
+ topology:
+ kubernetes.io/hostname: node-1
+ topology.kubernetes.io/zone: us-west2-a
+```
+
+By default, EndpointSlices managed by the EndpointSlice controller will have no
+more than 100 endpoints each. Below this scale, EndpointSlices should map 1:1
+with Endpoints and Services and have similar performance.
+
+EndpointSlices can act as the source of truth for kube-proxy when it comes to
+how to route internal traffic. When enabled, they should provide a performance
+improvement for services with large numbers of endpoints.
+
+### Address Types
+
+EndpointSlices support three address types:
+
+* IPv4
+* IPv6
+* FQDN (Fully Qualified Domain Name)
+
+### Topology
+
+Each endpoint within an EndpointSlice can contain relevant topology information.
+This is used to indicate where an endpoint is, containing information about the
+corresponding Node, zone, and region. When the values are available, the
+following Topology labels will be set by the EndpointSlice controller:
+
+* `kubernetes.io/hostname` - The name of the Node this endpoint is on.
+* `topology.kubernetes.io/zone` - The zone this endpoint is in.
+* `topology.kubernetes.io/region` - The region this endpoint is in.
+
+The values of these labels are derived from resources associated with each
+endpoint in a slice. The hostname label represents the value of the NodeName
+field on the corresponding Pod. The zone and region labels represent the value
+of the labels with the same names on the corresponding Node.
+
+### Management
+
+By default, EndpointSlices are created and managed by the EndpointSlice
+controller. There are a variety of other use cases for EndpointSlices, such as
+service mesh implementations, that could result in other entities or controllers
+managing additional sets of EndpointSlices. To ensure that multiple entities can
+manage EndpointSlices without interfering with each other, an
+`endpointslice.kubernetes.io/managed-by` label is used to indicate the entity
+managing an EndpointSlice. The EndpointSlice controller sets
+`endpointslice-controller.k8s.io` as the value for this label on all
+EndpointSlices it manages. Other entities managing EndpointSlices should also
+set a unique value for this label.
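+
+For example, a hypothetical service-mesh controller might stamp the slices it owns with a metadata fragment like this (the controller name is illustrative):
+
+```yaml
+metadata:
+  name: example-mesh-abc
+  labels:
+    # identifies the managing entity; the EndpointSlice controller
+    # uses endpointslice-controller.k8s.io here
+    endpointslice.kubernetes.io/managed-by: mesh-controller.example.com
+```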
+
+### Ownership
+
+In most use cases, an EndpointSlice will be owned by the Service that it tracks
+endpoints for. This is indicated by an owner reference on each EndpointSlice as
+well as a `kubernetes.io/service-name` label that enables simple lookups of all
+EndpointSlices belonging to a Service.
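+
+That label makes the lookup a one-liner; for example, for a Service named `example`:
+
+```shell
+kubectl get endpointslices -l kubernetes.io/service-name=example
+```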
+
+## EndpointSlice Controller
+
+The EndpointSlice controller watches Services and Pods to ensure corresponding
+EndpointSlices are up to date. The controller will manage EndpointSlices for
+every Service with a selector specified. These will represent the IPs of Pods
+matching the Service selector.
+
+### Size of EndpointSlices
+
+By default, EndpointSlices are limited to a size of 100 endpoints each. You can
+configure this with the `--max-endpoints-per-slice` {{< glossary_tooltip
+text="kube-controller-manager" term_id="kube-controller-manager" >}} flag up to
+a maximum of 1000.
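+
+For example, to cap slices at 500 endpoints each (flag shown in isolation; a real invocation carries many other flags):
+
+```shell
+kube-controller-manager --max-endpoints-per-slice=500
+```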
+
+### Distribution of EndpointSlices
+
+Each EndpointSlice has a set of ports that applies to all endpoints within the
+resource. When named ports are used for a Service, Pods may end up with
+different target port numbers for the same named port, requiring different
+EndpointSlices. This is similar to the logic behind how subsets are grouped
+with Endpoints.
+
+The controller tries to fill EndpointSlices as full as possible, but does not
+actively rebalance them. The logic of the controller is fairly straightforward:
+
+1. Iterate through existing EndpointSlices, remove endpoints that are no longer
+ desired and update matching endpoints that have changed.
+2. Iterate through EndpointSlices that have been modified in the first step and
+ fill them up with any new endpoints needed.
+3. If there are still new endpoints left to add, try to fit them into a previously
+ unchanged slice and/or create new ones.
+
+Importantly, the third step prioritizes limiting EndpointSlice updates over a
+perfectly full distribution of EndpointSlices. As an example, if there are 10
+new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each,
+this approach will create a new EndpointSlice instead of filling up the 2
+existing EndpointSlices. In other words, a single EndpointSlice creation is
+preferable to multiple EndpointSlice updates.
+
+With kube-proxy running on each Node and watching EndpointSlices, every change
+to an EndpointSlice becomes relatively expensive since it will be transmitted to
+every Node in the cluster. This approach is intended to limit the number of
+changes that need to be sent to every Node, even if it may result in multiple
+EndpointSlices that are not full.
+
+In practice, this less than ideal distribution should be rare. Most changes
+processed by the EndpointSlice controller will be small enough to fit in an
+existing EndpointSlice, and if not, a new EndpointSlice is likely going to be
+necessary soon anyway. Rolling updates of Deployments also provide a natural
+repacking of EndpointSlices with all pods and their corresponding endpoints
+getting replaced.
+
+## Motivation
+
+The Endpoints API has provided a simple and straightforward way of
+tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
+and Services have gotten larger, limitations of that API became more visible.
+Most notably, those included challenges with scaling to larger numbers of
+network endpoints.
+
+Since all network endpoints for a Service were stored in a single Endpoints
+resource, those resources could get quite large. That affected the performance
+of Kubernetes components (notably the master control plane) and resulted in
+significant amounts of network traffic and processing when Endpoints changed.
+EndpointSlices help you mitigate those issues as well as provide an extensible
+platform for additional features such as topological routing.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
+* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/services-networking/service-topology.md b/content/uk/docs/concepts/services-networking/service-topology.md
new file mode 100644
index 0000000000000..c1be99267bf0b
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/service-topology.md
@@ -0,0 +1,127 @@
+---
+reviewers:
+- johnbelamaric
+- imroc
+title: Service Topology
+feature:
+ title: Топологія Сервісів
+ description: >
+ Маршрутизація трафіка Сервісом відповідно до топології кластера.
+
+content_template: templates/concept
+weight: 10
+---
+
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
+
+_Service Topology_ enables a service to route traffic based upon the Node
+topology of the cluster. For example, a service can specify that traffic be
+preferentially routed to endpoints that are on the same Node as the client, or
+in the same availability zone.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Introduction
+
+By default, traffic sent to a `ClusterIP` or `NodePort` Service may be routed to
+any backend address for the Service. Since Kubernetes 1.7 it has been possible
+to route "external" traffic to the Pods running on the Node that received the
+traffic, but this is not supported for `ClusterIP` Services, and more complex
+topologies — such as routing zonally — have not been possible. The
+_Service Topology_ feature resolves this by allowing the Service creator to
+define a policy for routing traffic based upon the Node labels for the
+originating and destination Nodes.
+
+By using Node label matching between the source and destination, the operator
+may designate groups of Nodes that are "closer" and "farther" from one another,
+using whatever metric makes sense for that operator's requirements. For many
+operators in public clouds, for example, there is a preference to keep service
+traffic within the same zone, because interzonal traffic has a cost associated
+with it, while intrazonal traffic does not. Other common needs include being able
+to route traffic to a local Pod managed by a DaemonSet, or keeping traffic to
+Nodes connected to the same top-of-rack switch for the lowest latency.
+
+## Prerequisites
+
+The following prerequisites are needed in order to enable topology aware service
+routing:
+
+ * Kubernetes 1.17 or later
+ * Kube-proxy running in iptables mode or IPVS mode
+ * Enable [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/)
+
+## Enable Service Topology
+
+To enable service topology, enable the `ServiceTopology` feature gate for
+kube-apiserver and kube-proxy:
+
+```
+--feature-gates="ServiceTopology=true"
+```
+
+## Using Service Topology
+
+If your cluster has Service Topology enabled, you can control Service traffic
+routing by specifying the `topologyKeys` field on the Service spec. This field
+is a preference-order list of Node labels which will be used to sort endpoints
+when accessing this Service. Traffic will be directed to a Node whose value for
+the first label matches the originating Node's value for that label. If there is
+no backend for the Service on a matching Node, then the second label will be
+considered, and so forth, until no labels remain.
+
+If no match is found, the traffic will be rejected, just as if there were no
+backends for the Service at all. That is, endpoints are chosen based on the first
+topology key with available backends. If this field is specified and all entries
+have no backends that match the topology of the client, the service has no
+backends for that client and connections should fail. The special value `"*"` may
+be used to mean "any topology". This catch-all value, if used, only makes sense
+as the last value in the list.
+
+If `topologyKeys` is not specified or empty, no topology constraints will be applied.
+
+Consider a cluster with Nodes that are labeled with their hostname, zone name,
+and region name. Then you can set the `topologyKeys` values of a service to direct
+traffic as follows.
+
+* Only to endpoints on the same node, failing if no endpoint exists on the node:
+ `["kubernetes.io/hostname"]`.
+* Preferentially to endpoints on the same node, falling back to endpoints in the
+ same zone, followed by the same region, and failing otherwise: `["kubernetes.io/hostname",
+ "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]`.
+ This may be useful, for example, in cases where data locality is critical.
+* Preferentially to the same zone, but fallback on any available endpoint if
+ none are available within this zone:
+ `["topology.kubernetes.io/zone", "*"]`.
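+
+For example, the second preference above can be expressed as a Service manifest
+(a sketch; the `my-service` name, selector, and ports are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: MyApp
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 9376
+  topologyKeys:
+    - "kubernetes.io/hostname"
+    - "topology.kubernetes.io/zone"
+    - "topology.kubernetes.io/region"
+```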
+
+
+
+## Constraints
+
+* Service topology is not compatible with `externalTrafficPolicy=Local`, and
+ therefore a Service cannot use both of these features. It is possible to use
+ both features in the same cluster on different Services, just not on the same
+ Service.
+
+* Valid topology keys are currently limited to `kubernetes.io/hostname`,
+ `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`, but will
+ be generalized to other node labels in the future.
+
+* Topology keys must be valid label keys and at most 16 keys may be specified.
+
+* The catch-all value, `"*"`, must be the last value in the topology keys, if
+ it is used.
+
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology)
+* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/services-networking/service.md b/content/uk/docs/concepts/services-networking/service.md
new file mode 100644
index 0000000000000..d6a72fcc634b9
--- /dev/null
+++ b/content/uk/docs/concepts/services-networking/service.md
@@ -0,0 +1,1197 @@
+---
+reviewers:
+- bprashanth
+title: Service
+feature:
+ title: Виявлення Сервісів і балансування навантаження
+ description: >
+ Не потрібно змінювати ваш застосунок для використання незнайомого механізму виявлення Сервісів. Kubernetes призначає Подам власні IP-адреси, а набору Подів - єдине DNS-ім'я, і балансує навантаження між ними.
+
+content_template: templates/concept
+weight: 10
+---
+
+
+{{% capture overview %}}
+
+{{< glossary_definition term_id="service" length="short" >}}
+
+With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism.
+Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods,
+and can load-balance across them.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Motivation
+
+Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are mortal.
+They are born and when they die, they are not resurrected.
+If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app,
+it can create and destroy Pods dynamically.
+
+Each Pod gets its own IP address, however in a Deployment, the set of Pods
+running in one moment in time could be different from
+the set of Pods running that application a moment later.
+
+This leads to a problem: if some set of Pods (call them “backends”) provides
+functionality to other Pods (call them “frontends”) inside your cluster,
+how do the frontends find out and keep track of which IP address to connect
+to, so that the frontend can use the backend part of the workload?
+
+Enter _Services_.
+
+## Service resources {#service-resource}
+
+In Kubernetes, a Service is an abstraction which defines a logical set of Pods
+and a policy by which to access them (sometimes this pattern is called
+a micro-service). The set of Pods targeted by a Service is usually determined
+by a {{< glossary_tooltip text="selector" term_id="selector" >}}
+(see [below](#services-without-selectors) for why you might want a Service
+_without_ a selector).
+
+For example, consider a stateless image-processing backend which is running with
+3 replicas. Those replicas are fungible—frontends do not care which backend
+they use. While the actual Pods that compose the backend set may change, the
+frontend clients should not need to be aware of that, nor should they need to keep
+track of the set of backends themselves.
+
+The Service abstraction enables this decoupling.
+
+### Cloud-native service discovery
+
+If you're able to use Kubernetes APIs for service discovery in your application,
+you can query the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
+for Endpoints, that get updated whenever the set of Pods in a Service changes.
+
+For non-native applications, Kubernetes offers ways to place a network port or load
+balancer in between your application and the backend Pods.
+
+## Defining a Service
+
+A Service in Kubernetes is a REST object, similar to a Pod. Like all of the
+REST objects, you can `POST` a Service definition to the API server to create
+a new instance.
+
+For example, suppose you have a set of Pods that each listen on TCP port 9376
+and carry a label `app=MyApp`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+```
+
+This specification creates a new Service object named “my-service”, which
+targets TCP port 9376 on any Pod with the `app=MyApp` label.
+
+Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"),
+which is used by the Service proxies
+(see [Virtual IPs and service proxies](#virtual-ips-and-service-proxies) below).
+
+The controller for the Service selector continuously scans for Pods that
+match its selector, and then POSTs any updates to an Endpoint object
+also named “my-service”.
+
+{{< note >}}
+A Service can map _any_ incoming `port` to a `targetPort`. By default and
+for convenience, the `targetPort` is set to the same value as the `port`
+field.
+{{< /note >}}
+
+Port definitions in Pods have names, and you can reference these names in the
+`targetPort` attribute of a Service. This works even if there is a mixture
+of Pods in the Service using a single configured name, with the same network
+protocol available via different port numbers.
+This offers a lot of flexibility for deploying and evolving your Services.
+For example, you can change the port numbers that Pods expose in the next
+version of your backend software, without breaking clients.
+
+The default protocol for Services is TCP; you can also use any other
+[supported protocol](#protocol-support).
+
+As many Services need to expose more than one port, Kubernetes supports multiple
+port definitions on a Service object.
+Each port definition can have the same `protocol`, or a different one.
+
+### Services without selectors
+
+Services most commonly abstract access to Kubernetes Pods, but they can also
+abstract other kinds of backends.
+For example:
+
+ * You want to have an external database cluster in production, but in your
+ test environment you use your own databases.
+ * You want to point your Service to a Service in a different
+ {{< glossary_tooltip term_id="namespace" >}} or on another cluster.
+ * You are migrating a workload to Kubernetes. Whilst evaluating the approach,
+ you run only a proportion of your backends in Kubernetes.
+
+In any of these scenarios you can define a Service _without_ a Pod selector.
+For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+```
+
+Because this Service has no selector, the corresponding Endpoint object is *not*
+created automatically. You can manually map the Service to the network address
+and port where it's running, by adding an Endpoints object:
+
+```yaml
+apiVersion: v1
+kind: Endpoints
+metadata:
+ name: my-service
+subsets:
+ - addresses:
+ - ip: 192.0.2.42
+ ports:
+ - port: 9376
+```
+
+{{< note >}}
+The endpoint IPs _must not_ be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or
+link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
+
+Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services,
+because {{< glossary_tooltip term_id="kube-proxy" >}} doesn't support virtual IPs
+as a destination.
+{{< /note >}}
+
+Accessing a Service without a selector works the same as if it had a selector.
+In the example above, traffic is routed to the single endpoint defined in
+the YAML: `192.0.2.42:9376` (TCP).
+
+An ExternalName Service is a special case of Service that does not have
+selectors and uses DNS names instead. For more information, see the
+[ExternalName](#externalname) section later in this document.
+
+### EndpointSlices
+{{< feature-state for_k8s_version="v1.17" state="beta" >}}
+
+EndpointSlices are an API resource that can provide a more scalable alternative
+to Endpoints. Although conceptually quite similar to Endpoints, EndpointSlices
+allow for distributing network endpoints across multiple resources. By default,
+an EndpointSlice is considered "full" once it reaches 100 endpoints, at which
+point additional EndpointSlices will be created to store any additional
+endpoints.
+
+EndpointSlices provide additional attributes and functionality which is
+described in detail in [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/).
+
+## Virtual IPs and service proxies
+
+Every node in a Kubernetes cluster runs a `kube-proxy`. `kube-proxy` is
+responsible for implementing a form of virtual IP for `Services` of type other
+than [`ExternalName`](#externalname).
+
+### Why not use round-robin DNS?
+
+A question that pops up every now and then is why Kubernetes relies on
+proxying to forward inbound traffic to backends. What about other
+approaches? For example, would it be possible to configure DNS records that
+have multiple A values (or AAAA for IPv6), and rely on round-robin name
+resolution?
+
+There are a few reasons for using proxying for Services:
+
+ * There is a long history of DNS implementations not respecting record TTLs,
+ and caching the results of name lookups after they should have expired.
+ * Some apps do DNS lookups only once and cache the results indefinitely.
+ * Even if apps and libraries did proper re-resolution, the low or zero TTLs
+ on the DNS records could impose a high load on DNS that then becomes
+ difficult to manage.
+
+### User space proxy mode {#proxy-mode-userspace}
+
+In this mode, kube-proxy watches the Kubernetes master for the addition and
+removal of Service and Endpoint objects. For each Service it opens a
+port (randomly chosen) on the local node. Any connections to this "proxy port"
+are proxied to one of the Service's backend Pods (as reported via Endpoints).
+kube-proxy takes the `SessionAffinity` setting of the Service into account
+when deciding which backend Pod to use.
+
+Lastly, the user-space proxy installs iptables rules which capture traffic to
+the Service's `clusterIP` (which is virtual) and `port`. The rules
+redirect that traffic to the proxy port which proxies the backend Pod.
+
+By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
+
+![Services overview diagram for userspace proxy](/images/docs/services-userspace-overview.svg)
+
+### `iptables` proxy mode {#proxy-mode-iptables}
+
+In this mode, kube-proxy watches the Kubernetes control plane for the addition and
+removal of Service and Endpoint objects. For each Service, it installs
+iptables rules, which capture traffic to the Service's `clusterIP` and `port`,
+and redirect that traffic to one of the Service's
+backend sets. For each Endpoint object, it installs iptables rules which
+select a backend Pod.
+
+By default, kube-proxy in iptables mode chooses a backend at random.
+
+Using iptables to handle traffic has a lower system overhead, because traffic
+is handled by Linux netfilter without the need to switch between userspace and the
+kernel space. This approach is also likely to be more reliable.
+
+If kube-proxy is running in iptables mode and the first Pod that's selected
+does not respond, the connection fails. This is different from userspace
+mode: in that scenario, kube-proxy would detect that the connection to the first
+Pod had failed and would automatically retry with a different backend Pod.
+
+You can use Pod [readiness probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
+to verify that backend Pods are working OK, so that kube-proxy in iptables mode
+only sees backends that test out as healthy. Doing this means you avoid
+having traffic sent via kube-proxy to a Pod that's known to have failed.
+
+![Services overview diagram for iptables proxy](/images/docs/services-iptables-overview.svg)
+
+### IPVS proxy mode {#proxy-mode-ipvs}
+
+{{< feature-state for_k8s_version="v1.11" state="stable" >}}
+
+In `ipvs` mode, kube-proxy watches Kubernetes Services and Endpoints,
+calls `netlink` interface to create IPVS rules accordingly and synchronizes
+IPVS rules with Kubernetes Services and Endpoints periodically.
+This control loop ensures that IPVS status matches the desired
+state.
+When accessing a Service, IPVS directs traffic to one of the backend Pods.
+
+The IPVS proxy mode is based on netfilter hook function that is similar to
+iptables mode, but uses a hash table as the underlying data structure and works
+in the kernel space.
+That means kube-proxy in IPVS mode redirects traffic with lower latency than
+kube-proxy in iptables mode, with much better performance when synchronising
+proxy rules. Compared to the other proxy modes, IPVS mode also supports a
+higher throughput of network traffic.
+
+IPVS provides more options for balancing traffic to backend Pods;
+these are:
+
+- `rr`: round-robin
+- `lc`: least connection (smallest number of open connections)
+- `dh`: destination hashing
+- `sh`: source hashing
+- `sed`: shortest expected delay
+- `nq`: never queue
+
+{{< note >}}
+To run kube-proxy in IPVS mode, you must make IPVS available on the node
+before starting kube-proxy.
+
+When kube-proxy starts in IPVS proxy mode, it verifies whether IPVS
+kernel modules are available. If the IPVS kernel modules are not detected, then kube-proxy
+falls back to running in iptables proxy mode.
+{{< /note >}}
+
+![Services overview diagram for IPVS proxy](/images/docs/services-ipvs-overview.svg)
+
+In these proxy models, the traffic bound for the Service’s IP:Port is
+proxied to an appropriate backend without the clients knowing anything
+about Kubernetes or Services or Pods.
+
+If you want to make sure that connections from a particular client
+are passed to the same Pod each time, you can select the session affinity based
+on the client's IP addresses by setting `service.spec.sessionAffinity` to "ClientIP"
+(the default is "None").
+You can also set the maximum session sticky time by setting
+`service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` appropriately
+(the default value is 10800, which works out to be 3 hours).
+
+## Multi-Port Services
+
+For some Services, you need to expose more than one port.
+Kubernetes lets you configure multiple port definitions on a Service object.
+When using multiple ports for a Service, you must give all of your ports names
+so that these are unambiguous.
+For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - name: http
+ protocol: TCP
+ port: 80
+ targetPort: 9376
+ - name: https
+ protocol: TCP
+ port: 443
+ targetPort: 9377
+```
+
+{{< note >}}
+As with Kubernetes {{< glossary_tooltip term_id="name" text="names">}} in general, names for ports
+must only contain lowercase alphanumeric characters and `-`. Port names must
+also start and end with an alphanumeric character.
+
+For example, the names `123-abc` and `web` are valid, but `123_abc` and `-web` are not.
+{{< /note >}}
+
+## Choosing your own IP address
+
+You can specify your own cluster IP address as part of a `Service` creation
+request. To do this, set the `.spec.clusterIP` field. For example, if you
+already have an existing DNS entry that you wish to reuse, or legacy systems
+that are configured for a specific IP address and difficult to re-configure.
+
+The IP address that you choose must be a valid IPv4 or IPv6 address from within the
+`service-cluster-ip-range` CIDR range that is configured for the API server.
+If you try to create a Service with an invalid clusterIP address value, the API
+server will return a 422 HTTP status code to indicate that there's a problem.
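+
+For example, a Service that requests a specific cluster IP might look like this
+(a sketch; `10.96.100.100` is an illustrative address that must fall inside your
+cluster's `service-cluster-ip-range`):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: MyApp
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 9376
+  clusterIP: 10.96.100.100  # must be within the configured service-cluster-ip-range
+```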
+
+## Discovering services
+
+Kubernetes supports two primary modes of finding a Service: environment
+variables and DNS.
+
+### Environment variables
+
+When a Pod is run on a Node, the kubelet adds a set of environment variables
+for each active Service. It supports both [Docker links
+compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
+[makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))
+and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
+where the Service name is upper-cased and dashes are converted to underscores.
+
+For example, the Service `"redis-master"` which exposes TCP port 6379 and has been
+allocated cluster IP address 10.0.0.11, produces the following environment
+variables:
+
+```shell
+REDIS_MASTER_SERVICE_HOST=10.0.0.11
+REDIS_MASTER_SERVICE_PORT=6379
+REDIS_MASTER_PORT=tcp://10.0.0.11:6379
+REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
+REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
+REDIS_MASTER_PORT_6379_TCP_PORT=6379
+REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
+```
+
+{{< note >}}
+When you have a Pod that needs to access a Service, and you are using
+the environment variable method to publish the port and cluster IP to the client
+Pods, you must create the Service *before* the client Pods come into existence.
+Otherwise, those client Pods won't have their environment variables populated.
+
+If you only use DNS to discover the cluster IP for a Service, you don't need to
+worry about this ordering issue.
+{{< /note >}}
+
+### DNS
+
+You can (and almost always should) set up a DNS service for your Kubernetes
+cluster using an [add-on](/docs/concepts/cluster-administration/addons/).
+
+A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new
+Services and creates a set of DNS records for each one. If DNS has been enabled
+throughout your cluster then all Pods should automatically be able to resolve
+Services by their DNS name.
+
+For example, if you have a Service called `"my-service"` in a Kubernetes
+Namespace `"my-ns"`, the control plane and the DNS Service acting together
+create a DNS record for `"my-service.my-ns"`. Pods in the `"my-ns"` Namespace
+should be able to find it by simply doing a name lookup for `my-service`
+(`"my-service.my-ns"` would also work).
+
+Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names
+will resolve to the cluster IP assigned for the Service.
+
+Kubernetes also supports DNS SRV (Service) records for named ports. If the
+`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to
+`TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover
+the port number for `"http"`, as well as the IP address.
+
+The Kubernetes DNS server is the only way to access `ExternalName` Services.
+You can find more information about `ExternalName` resolution in
+[DNS Pods and Services](/docs/concepts/services-networking/dns-pod-service/).
+
+## Headless Services
+
+Sometimes you don't need load-balancing and a single Service IP. In
+this case, you can create what are termed “headless” Services, by explicitly
+specifying `"None"` for the cluster IP (`.spec.clusterIP`).
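+
+A minimal headless Service is an ordinary Service with `clusterIP` explicitly
+set to `None` (a sketch; the name and selector are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  clusterIP: None  # marks the Service as headless
+  selector:
+    app: MyApp
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 9376
+```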
+
+You can use a headless Service to interface with other service discovery mechanisms,
+without being tied to Kubernetes' implementation.
+
+For headless `Services`, a cluster IP is not allocated, kube-proxy does not handle
+these Services, and there is no load balancing or proxying done by the platform
+for them. How DNS is automatically configured depends on whether the Service has
+selectors defined:
+
+### With selectors
+
+For headless Services that define selectors, the endpoints controller creates
+`Endpoints` records in the API, and modifies the DNS configuration to return
+records (addresses) that point directly to the `Pods` backing the `Service`.
+
+### Without selectors
+
+For headless Services that do not define selectors, the endpoints controller does
+not create `Endpoints` records. However, the DNS system looks for and configures
+either:
+
+ * CNAME records for [`ExternalName`](#externalname)-type Services.
+ * A records for any `Endpoints` that share a name with the Service, for all
+ other types.
+
+## Publishing Services (ServiceTypes) {#publishing-services-service-types}
+
+For some parts of your application (for example, frontends) you may want to expose a
+Service onto an external IP address, that's outside of your cluster.
+
+Kubernetes `ServiceTypes` allow you to specify what kind of Service you want.
+The default is `ClusterIP`.
+
+`Type` values and their behaviors are:
+
+ * `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
+ makes the Service only reachable from within the cluster. This is the
+ default `ServiceType`.
+ * [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port
+ (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service
+ routes, is automatically created. You'll be able to contact the `NodePort` Service,
+ from outside the cluster,
+   by requesting `<NodeIP>:<NodePort>`.
+ * [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud
+ provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external
+ load balancer routes, are automatically created.
+ * [`ExternalName`](#externalname): Maps the Service to the contents of the
+   `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
+   with its value. No proxying of any kind is set up.
+ {{< note >}}
+ You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the `ExternalName` type.
+ {{< /note >}}
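+
+As a sketch, an `ExternalName` Service maps a Service name to an external DNS
+hostname (the `my-service` name and `my.database.example.com` host are
+illustrative):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  type: ExternalName
+  externalName: my.database.example.com
+```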
+
+You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address.
+
+### Type NodePort {#nodeport}
+
+If you set the `type` field to `NodePort`, the Kubernetes control plane
+allocates a port from a range specified by `--service-node-port-range` flag (default: 30000-32767).
+Each node proxies that port (the same port number on every Node) into your Service.
+Your Service reports the allocated port in its `.spec.ports[*].nodePort` field.
+
+
+If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10.
+This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node.
+
+For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag, kube-proxy only selects the loopback interface for NodePort Services. The default for `--nodeport-addresses` is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases).
+
+If you want a specific port number, you can specify a value in the `nodePort`
+field. The control plane will either allocate you that port or report that
+the API transaction failed.
+This means that you need to take care of possible port collisions yourself.
+You also have to use a valid port number, one that's inside the range configured
+for NodePort use.
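+
+As a sketch, a `NodePort` Service that pins a specific port might look like this
+(the `nodePort` value is illustrative and must fall inside the configured range):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  type: NodePort
+  selector:
+    app: MyApp
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 9376
+      nodePort: 30007  # optional; omit to let the control plane pick a port
+```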
+
+Using a NodePort gives you the freedom to set up your own load balancing solution,
+to configure environments that are not fully supported by Kubernetes, or even
+to just expose one or more nodes' IPs directly.
+
+Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
+and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag
+in kube-proxy is set, `<NodeIP>` would be filtered node IP(s).)
+
+### Type LoadBalancer {#loadbalancer}
+
+On cloud providers which support external load balancers, setting the `type`
+field to `LoadBalancer` provisions a load balancer for your Service.
+The actual creation of the load balancer happens asynchronously, and
+information about the provisioned balancer is published in the Service's
+`.status.loadBalancer` field.
+For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ clusterIP: 10.0.171.239
+ type: LoadBalancer
+status:
+ loadBalancer:
+ ingress:
+ - ip: 192.0.2.127
+```
+
+Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
+
+For LoadBalancer type of Services, when there is more than one port defined, all
+ports must have the same protocol, and the protocol must be one of `TCP`, `UDP`,
+or `SCTP`.
+
+Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created
+with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified,
+the loadBalancer is set up with an ephemeral IP address. If you specify a `loadBalancerIP`
+but your cloud provider does not support the feature, the `loadBalancerIP` field that you
+set is ignored.
+
+{{< note >}}
+If you're using SCTP, see the [caveat](#caveat-sctp-loadbalancer-service-type) below about the
+`LoadBalancer` Service type.
+{{< /note >}}
+
+{{< note >}}
+
+On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need
+to create a static type public IP address resource. This public IP address resource should
+be in the same resource group of the other automatically created resources of the cluster.
+For example, `MC_myResourceGroup_myAKSCluster_eastus`.
+
+Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the securityGroupName in the cloud provider configuration file. For information about troubleshooting `CreatingLoadBalancerFailed` permission issues see, [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip) or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357).
+
+{{< /note >}}
+
+#### Internal load balancer
+In a mixed environment it is sometimes necessary to route traffic from Services inside the same
+(virtual) network address block.
+
+In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints.
+
+You can achieve this by adding one of the following annotations to a Service.
+The annotation to add depends on the cloud Service provider you're using.
+
+{{< tabs name="service_tabs" >}}
+{{% tab name="Default" %}}
+Select one of the tabs.
+{{% /tab %}}
+{{% tab name="GCP" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ cloud.google.com/load-balancer-type: "Internal"
+[...]
+```
+{{% /tab %}}
+{{% tab name="AWS" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-internal: "true"
+[...]
+```
+{{% /tab %}}
+{{% tab name="Azure" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/azure-load-balancer-internal: "true"
+[...]
+```
+{{% /tab %}}
+{{% tab name="OpenStack" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
+[...]
+```
+{{% /tab %}}
+{{% tab name="Baidu Cloud" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
+[...]
+```
+{{% /tab %}}
+{{% tab name="Tencent Cloud" %}}
+```yaml
+[...]
+metadata:
+ annotations:
+ service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx
+[...]
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+
+#### TLS support on AWS {#ssl-support-on-aws}
+
+For partial TLS / SSL support on clusters running on AWS, you can add three
+annotations to a `LoadBalancer` service:
+
+```yaml
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
+```
+
+The first specifies the ARN of the certificate to use. It can be either a
+certificate from a third party issuer that was uploaded to IAM or one created
+within AWS Certificate Manager.
+
+```yaml
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-backend-protocol: (https|http|ssl|tcp)
+```
+
+The second annotation specifies which protocol a Pod speaks. For HTTPS and
+SSL, the ELB expects the Pod to authenticate itself over the encrypted
+connection, using a certificate.
+
+HTTP and HTTPS selects layer 7 proxying: the ELB terminates
+the connection with the user, parses headers, and injects the `X-Forwarded-For`
+header with the user's IP address (Pods only see the IP address of the
+ELB at the other end of its connection) when forwarding requests.
+
+TCP and SSL selects layer 4 proxying: the ELB forwards traffic without
+modifying the headers.
+
+In a mixed-use environment where some ports are secured and others are left unencrypted,
+you can use the following annotations:
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
+ service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
+```
+
+In the above example, if the Service contained three ports, `80`, `443`, and
+`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just
+be proxied HTTP.
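+
+Putting these annotations together, a sketch of such a mixed-port `LoadBalancer` Service (the certificate ARN and port numbers are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+  annotations:
+    # Placeholder ARN; substitute your own certificate
+    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
+    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
+    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
+spec:
+  type: LoadBalancer
+  selector:
+    app: MyApp
+  ports:
+  - name: http
+    port: 80
+    targetPort: 9376
+  - name: https
+    port: 443
+    targetPort: 9376
+  - name: alt-https
+    port: 8443
+    targetPort: 9376
+```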
+
+From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
+To see which policies are available for use, you can use the `aws` command line tool:
+
+```bash
+aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName'
+```
+
+You can then specify any one of those policies using the
+"`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`"
+annotation; for example:
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
+```
+
+#### PROXY protocol support on AWS
+
+To enable [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)
+support for clusters running on AWS, you can use the following service
+annotation:
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
+```
+
+Since version 1.3.0, the use of this annotation applies to all ports proxied by the ELB
+and cannot be configured otherwise.
+
+#### ELB Access Logs on AWS
+
+There are several annotations to manage access logs for ELB Services on AWS.
+
+The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`
+controls whether access logs are enabled.
+
+The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval`
+controls the interval in minutes for publishing the access logs. You can specify
+an interval of either 5 or 60 minutes.
+
+The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`
+controls the name of the Amazon S3 bucket where load balancer access logs are
+stored.
+
+The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`
+specifies the logical hierarchy you created for your Amazon S3 bucket.
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
+ # Specifies whether access logs are enabled for the load balancer
+ service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
+ # The interval for publishing the access logs. You can specify an interval of either 5 or 60 (minutes).
+ service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
+ # The name of the Amazon S3 bucket where the access logs are stored
+ service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
+ # The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod`
+```
+
+#### Connection Draining on AWS
+
+Connection draining for Classic ELBs can be managed with the annotation
+`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set
+to the value of `"true"`. The annotation
+`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
+also be used to set the maximum time, in seconds, to keep existing connections open before deregistering the instances.
+
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
+ service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
+```
+
+#### Other ELB annotations
+
+There are other annotations to manage Classic Elastic Load Balancers that are described below.
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
+ # The time, in seconds, that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer
+
+ service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
+ # Specifies whether cross-zone load balancing is enabled for the load balancer
+
+ service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"
+ # A comma-separated list of key-value pairs which will be recorded as
+ # additional tags in the ELB.
+
+ service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""
+ # The number of successive successful health checks required for a backend to
+ # be considered healthy for traffic. Defaults to 2, must be between 2 and 10
+
+ service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
+ # The number of unsuccessful health checks required for a backend to be
+ # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10
+
+ service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"
+ # The approximate interval, in seconds, between health checks of an
+ # individual instance. Defaults to 10, must be between 5 and 300
+ service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
+ # The amount of time, in seconds, during which no response means a failed
+ # health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
+ # value. Defaults to 5, must be between 2 and 60
+
+ service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
+ # A list of additional security groups to be added to the ELB
+```
+
+#### Network Load Balancer support on AWS {#aws-nlb-support}
+
+{{< feature-state for_k8s_version="v1.15" state="beta" >}}
+
+To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernetes.io/aws-load-balancer-type` with the value set to `nlb`.
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
+```
+
+{{< note >}}
+NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
+on Elastic Load Balancing for a list of supported instance types.
+{{< /note >}}
+
+Unlike Classic Elastic Load Balancers, Network Load Balancers (NLBs) forward the
+client's IP address through to the node. If a Service's `.spec.externalTrafficPolicy`
+is set to `Cluster`, the client's IP address is not propagated to the end
+Pods.
+
+By setting `.spec.externalTrafficPolicy` to `Local`, client IP addresses are
+propagated to the end Pods, but this could result in uneven distribution of
+traffic. Nodes without any Pods for a particular LoadBalancer Service will fail
+the NLB Target Group's health check on the auto-assigned
+`.spec.healthCheckNodePort` and not receive any traffic.
+
+In order to achieve even traffic, either use a DaemonSet or specify a
+[pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
+so that backend Pods do not land on the same node.
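+
+As a sketch, a pod anti-affinity of this shape (assuming the backend Pods carry a hypothetical `app: MyApp` label) keeps them off shared nodes:
+
+```yaml
+# Fragment of a Pod template spec; `app: MyApp` is a hypothetical label
+affinity:
+  podAntiAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+    - labelSelector:
+        matchLabels:
+          app: MyApp
+      topologyKey: kubernetes.io/hostname
+```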
+
+You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer)
+annotation.
+
+In order for client traffic to reach instances behind an NLB, the Node security
+groups are modified with the following IP rules:
+
+| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description |
+|------|----------|---------|------------|---------------------|
+| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
+| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
+| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
+
+In order to limit which client IPs can access the Network Load Balancer,
+specify `loadBalancerSourceRanges`.
+
+```yaml
+spec:
+ loadBalancerSourceRanges:
+ - "143.231.0.0/16"
+```
+
+{{< note >}}
+If `.spec.loadBalancerSourceRanges` is not set, Kubernetes
+allows traffic from `0.0.0.0/0` to the Node Security Group(s). If nodes have
+public IP addresses, be aware that non-NLB traffic can also reach all instances
+in those modified security groups.
+
+{{< /note >}}
+
+#### Other CLB annotations on Tencent Kubernetes Engine (TKE)
+
+There are other annotations for managing Cloud Load Balancers on TKE as shown below.
+
+```yaml
+ metadata:
+ name: my-service
+ annotations:
+ # Bind Loadbalancers with specified nodes
+ service.kubernetes.io/qcloud-loadbalancer-backends-label: key in (value1, value2)
+
+ # ID of an existing load balancer
+ service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx
+
+ # Custom parameters for the load balancer (LB), does not support modification of LB type yet
+ service.kubernetes.io/service.extensiveParameters: ""
+
+ # Custom parameters for the LB listener
+ service.kubernetes.io/service.listenerParameters: ""
+
+ # Specifies the type of Load balancer;
+ # valid values: classic (Classic Cloud Load Balancer) or application (Application Cloud Load Balancer)
+ service.kubernetes.io/loadbalance-type: xxxxx
+
+ # Specifies the public network bandwidth billing method;
+ # valid values: TRAFFIC_POSTPAID_BY_HOUR(bill-by-traffic) and BANDWIDTH_POSTPAID_BY_HOUR (bill-by-bandwidth).
+ service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx
+
+ # Specifies the bandwidth value (value range: [1,2000] Mbps).
+ service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10"
+
+ # When this annotation is set, the load balancer will only register nodes
+ # with Pods running on them; otherwise all nodes will be registered.
+ service.kubernetes.io/local-svc-only-bind-node-with-pod: "true"
+```
+
+### Type ExternalName {#externalname}
+
+Services of type ExternalName map a Service to a DNS name, not to a typical selector such as
+`my-service` or `cassandra`. You specify these Services with the `spec.externalName` parameter.
+
+This Service definition, for example, maps
+the `my-service` Service in the `prod` namespace to `my.database.example.com`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ namespace: prod
+spec:
+ type: ExternalName
+ externalName: my.database.example.com
+```
+{{< note >}}
+ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
+is intended to specify a canonical DNS name. To hardcode an IP address, consider using
+[headless Services](#headless-services).
+{{< /note >}}
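+
+One way to hardcode an IP address is a headless Service without a selector, paired with a manually managed Endpoints object; a sketch, assuming a hypothetical database IP of `192.0.2.42`:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  clusterIP: None
+  ports:
+  - port: 5432
+---
+apiVersion: v1
+kind: Endpoints
+metadata:
+  # Must match the Service name
+  name: my-service
+subsets:
+- addresses:
+  - ip: 192.0.2.42
+  ports:
+  - port: 5432
+```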
+
+When looking up the host `my-service.prod.svc.cluster.local`, the cluster DNS Service
+returns a `CNAME` record with the value `my.database.example.com`. Accessing
+`my-service` works in the same way as other Services but with the crucial
+difference that redirection happens at the DNS level rather than via proxying or
+forwarding. Should you later decide to move your database into your cluster, you
+can start its Pods, add appropriate selectors or endpoints, and change the
+Service's `type`.
+
+{{< warning >}}
+You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.
+
+For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a `Host:` header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
+{{< /warning >}}
+
+{{< note >}}
+This section is indebted to the [Kubernetes Tips - Part
+1](https://akomljen.com/kubernetes-tips-part-1/) blog post from [Alen Komljen](https://akomljen.com/).
+{{< /note >}}
+
+### External IPs
+
+If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those
+`externalIPs`. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port,
+will be routed to one of the Service endpoints. `externalIPs` are not managed by Kubernetes and are the responsibility
+of the cluster administrator.
+
+In the Service spec, `externalIPs` can be specified along with any of the `ServiceTypes`.
+In the example below, "`my-service`" can be accessed by clients on "`80.11.12.10:80`" (`externalIP:port`).
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - name: http
+ protocol: TCP
+ port: 80
+ targetPort: 9376
+ externalIPs:
+ - 80.11.12.10
+```
+
+## Shortcomings
+
+Using the userspace proxy for VIPs works at small to medium scale, but will
+not scale to very large clusters with thousands of Services. The [original
+design proposal for portals](http://issue.k8s.io/1107) has more details on
+this.
+
+Using the userspace proxy obscures the source IP address of a packet accessing
+a Service.
+This makes some kinds of network filtering (firewalling) impossible. The iptables
+proxy mode does not
+obscure in-cluster source IPs, but it does still impact clients coming through
+a load balancer or node-port.
+
+The `Type` field is designed as nested functionality - each level adds to the
+previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does
+not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does)
+but the current API requires it.
+
+## Virtual IP implementation {#the-gory-details-of-virtual-ips}
+
+The previous information should be sufficient for many people who just want to
+use Services. However, there is a lot going on behind the scenes that may be
+worth understanding.
+
+### Avoiding collisions
+
+One of the primary philosophies of Kubernetes is that you should not be
+exposed to situations that could cause your actions to fail through no fault
+of your own. For the design of the Service resource, this means not making
+you choose your own port number if that choice might collide with
+someone else's choice. That is an isolation failure.
+
+In order to allow you to choose a port number for your Services, we must
+ensure that no two Services can collide. Kubernetes does that by allocating each
+Service its own IP address.
+
+To ensure each Service receives a unique IP, an internal allocator atomically
+updates a global allocation map in {{< glossary_tooltip term_id="etcd" >}}
+prior to creating each Service. The map object must exist in the registry for
+Services to get IP address assignments, otherwise creations will
+fail with a message indicating an IP address could not be allocated.
+
+In the control plane, a background controller is responsible for creating that
+map (needed to support migrating from older versions of Kubernetes that used
+in-memory locking). Kubernetes also uses controllers to check for invalid
+assignments (e.g. due to administrator intervention) and for cleaning up allocated
+IP addresses that are no longer used by any Services.
+
+### Service IP addresses {#ips-and-vips}
+
+Unlike Pod IP addresses, which actually route to a fixed destination,
+Service IPs are not actually answered by a single host. Instead, kube-proxy
+uses iptables (packet processing logic in Linux) to define _virtual_ IP addresses
+which are transparently redirected as needed. When clients connect to the
+VIP, their traffic is automatically transported to an appropriate endpoint.
+The environment variables and DNS for Services are actually populated in
+terms of the Service's virtual IP address (and port).
+
+kube-proxy supports three proxy modes—userspace, iptables and IPVS—which
+each operate slightly differently.
+
+#### Userspace
+
+As an example, consider the image processing application described above.
+When the backend Service is created, the Kubernetes master assigns a virtual
+IP address, for example 10.0.0.1. Assuming the Service port is 1234, the
+Service is observed by all of the kube-proxy instances in the cluster.
+When a proxy sees a new Service, it opens a new random port, establishes an
+iptables redirect from the virtual IP address to this new port, and starts accepting
+connections on it.
+
+When a client connects to the Service's virtual IP address, the iptables
+rule kicks in, and redirects the packets to the proxy's own port.
+The “Service proxy” chooses a backend, and starts proxying traffic from the client to the backend.
+
+This means that Service owners can choose any port they want without risk of
+collision. Clients can simply connect to an IP and port, without being aware
+of which Pods they are actually accessing.
+
+#### iptables
+
+Again, consider the image processing application described above.
+When the backend Service is created, the Kubernetes control plane assigns a virtual
+IP address, for example 10.0.0.1. Assuming the Service port is 1234, the
+Service is observed by all of the kube-proxy instances in the cluster.
+When a proxy sees a new Service, it installs a series of iptables rules which
+redirect from the virtual IP address to per-Service rules. The per-Service
+rules link to per-Endpoint rules which redirect traffic (using destination NAT)
+to the backends.
+
+When a client connects to the Service's virtual IP address the iptables rule kicks in.
+A backend is chosen (either based on session affinity or randomly) and packets are
+redirected to the backend. Unlike the userspace proxy, packets are never
+copied to userspace, the kube-proxy does not have to be running for the virtual
+IP address to work, and Nodes see traffic arriving from the unaltered client IP
+address.
+
+This same basic flow executes when traffic comes in through a node-port or
+through a load-balancer, though in those cases the client IP does get altered.
+
+#### IPVS
+
+iptables operations slow down dramatically in large-scale clusters, e.g. with 10,000 Services.
+IPVS is designed for load balancing and is based on in-kernel hash tables, so an IPVS-based kube-proxy delivers consistent performance with a large number of Services. An IPVS-based kube-proxy also supports more sophisticated load balancing algorithms (least connections, locality, weighted, persistence).
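+
+If you run kube-proxy in IPVS mode, the mode and balancing algorithm can be selected in the kube-proxy configuration; a sketch:
+
+```yaml
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+kind: KubeProxyConfiguration
+mode: "ipvs"
+ipvs:
+  # "lc" selects the least-connection scheduler
+  scheduler: "lc"
+```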
+
+## API Object
+
+Service is a top-level resource in the Kubernetes REST API. You can find more details
+about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
+
+## Supported protocols {#protocol-support}
+
+### TCP
+
+You can use TCP for any kind of Service, and it's the default network protocol.
+
+### UDP
+
+You can use UDP for most Services. For type=LoadBalancer Services, UDP support
+depends on the cloud provider offering this facility.
+
+### HTTP
+
+If your cloud provider supports it, you can use a Service in LoadBalancer mode
+to set up external HTTP / HTTPS reverse proxying, forwarded to the Endpoints
+of the Service.
+
+{{< note >}}
+You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service
+to expose HTTP / HTTPS Services.
+{{< /note >}}
+
+### PROXY protocol
+
+If your cloud provider supports it (e.g. [AWS](/docs/concepts/cluster-administration/cloud-providers/#aws)),
+you can use a Service in LoadBalancer mode to configure a load balancer outside
+of Kubernetes itself, that will forward connections prefixed with
+[PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
+
+The load balancer will send an initial series of octets describing the
+incoming connection, similar to this example:
+
+```
+PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n
+```
+followed by the data from the client.
+
+### SCTP
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+Kubernetes supports SCTP as a `protocol` value in Service, Endpoint, NetworkPolicy and Pod definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the `SCTPSupport` feature gate on the apiserver, for example, `--feature-gates=SCTPSupport=true,…`.
+
+When the feature gate is enabled, you can set the `protocol` field of a Service, Endpoint, NetworkPolicy or Pod to `SCTP`. Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections.
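+
+For example, a sketch of a Service exposing an SCTP port (names and port numbers are hypothetical):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-sctp-service
+spec:
+  selector:
+    app: MySctpApp
+  ports:
+  - protocol: SCTP
+    port: 9260
+    targetPort: 9260
+```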
+
+#### Warnings {#caveat-sctp-overview}
+
+##### Support for multihomed SCTP associations {#caveat-sctp-multihomed}
+
+{{< warning >}}
+The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod.
+
+NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.
+{{< /warning >}}
+
+##### Service with type=LoadBalancer {#caveat-sctp-loadbalancer-service-type}
+
+{{< warning >}}
+You can only create a Service with `type` LoadBalancer plus `protocol` SCTP if the cloud provider's load balancer implementation supports SCTP as a protocol. Otherwise, the Service creation request is rejected. The current set of cloud load balancer providers (Azure, AWS, CloudStack, GCE, OpenStack) all lack support for SCTP.
+{{< /warning >}}
+
+##### Windows {#caveat-sctp-windows-os}
+
+{{< warning >}}
+SCTP is not supported on Windows based nodes.
+{{< /warning >}}
+
+##### Userspace kube-proxy {#caveat-sctp-kube-proxy-userspace}
+
+{{< warning >}}
+The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
+{{< /warning >}}
+
+## Future work
+
+In the future, the proxy policy for Services can become more nuanced than
+simple round-robin balancing, for example master-elected or sharded. We also
+envision that some Services will have "real" load balancers, in which case the
+virtual IP address will simply transport the packets there.
+
+The Kubernetes project intends to improve support for L7 (HTTP) Services.
+
+The Kubernetes project intends to have more flexible ingress modes for Services
+that encompass the current ClusterIP, NodePort, and LoadBalancer modes and more.
+
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
+* Read about [Ingress](/docs/concepts/services-networking/ingress/)
+* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/storage/_index.md b/content/uk/docs/concepts/storage/_index.md
new file mode 100644
index 0000000000000..23108a421cb1a
--- /dev/null
+++ b/content/uk/docs/concepts/storage/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Сховища інформації"
+weight: 70
+---
diff --git a/content/uk/docs/concepts/storage/persistent-volumes.md b/content/uk/docs/concepts/storage/persistent-volumes.md
new file mode 100644
index 0000000000000..e348abb931a44
--- /dev/null
+++ b/content/uk/docs/concepts/storage/persistent-volumes.md
@@ -0,0 +1,736 @@
+---
+reviewers:
+- jsafrane
+- saad-ali
+- thockin
+- msau42
+title: Persistent Volumes
+feature:
+ title: Оркестрація сховищем
+ description: >
+ Автоматично монтує систему збереження даних на ваш вибір: з локального носія даних, із хмарного сховища від провайдера публічних хмарних сервісів, як-от GCP чи AWS, або з мережевого сховища, такого як: NFS, iSCSI, Gluster, Ceph, Cinder чи Flocker.
+
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Introduction
+
+Managing storage is a distinct problem from managing compute instances. The `PersistentVolume` subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: `PersistentVolume` and `PersistentVolumeClaim`.
+
+A `PersistentVolume` (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
+
+A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).
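+
+For example, a minimal sketch of a claim requesting 8Gi of storage mounted read/write by a single node (the name and size are illustrative):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-claim
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 8Gi
+```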
+
+While `PersistentVolumeClaims` allow a user to consume abstract storage
+resources, it is common that users need `PersistentVolumes` with varying
+properties, such as performance, for different problems. Cluster administrators
+need to be able to offer a variety of `PersistentVolumes` that differ in more
+ways than just size and access modes, without exposing users to the details of
+how those volumes are implemented. For these needs, there is the `StorageClass`
+resource.
+
+See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).
+
+
+## Lifecycle of a volume and claim
+
+PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle:
+
+### Provisioning
+
+There are two ways PVs may be provisioned: statically or dynamically.
+
+#### Static
+A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
+
+#### Dynamic
+When none of the static PVs the administrator created match a user's `PersistentVolumeClaim`,
+the cluster may try to dynamically provision a volume specially for the PVC.
+This provisioning is based on `StorageClasses`: the PVC must request a
+[storage class](/docs/concepts/storage/storage-classes/) and
+the administrator must have created and configured that class for dynamic
+provisioning to occur. Claims that request the class `""` effectively disable
+dynamic provisioning for themselves.
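+
+For example, a sketch of a claim that requests dynamic provisioning from a hypothetical `fast` storage class:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: dynamic-claim
+spec:
+  # "fast" is a hypothetical StorageClass the administrator must have created
+  storageClassName: fast
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 30Gi
+```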
+
+To enable dynamic storage provisioning based on storage class, the cluster administrator
+needs to enable the `DefaultStorageClass` [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
+on the API server. This can be done, for example, by ensuring that `DefaultStorageClass` is
+among the comma-delimited, ordered list of values for the `--enable-admission-plugins` flag of
+the API server component. For more information on API server command-line flags,
+check [kube-apiserver](/docs/admin/kube-apiserver/) documentation.
+
+### Binding
+
+A user creates, or in the case of dynamic provisioning, has already created, a `PersistentVolumeClaim` with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, `PersistentVolumeClaim` binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping.
+
+Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
+
+### Using
+
+Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a Pod. For volumes that support multiple access modes, the user specifies which mode is desired when using their claim as a volume in a Pod.
+
+Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a `persistentVolumeClaim` in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
+
+### Storage Object in Use Protection
+The purpose of the Storage Object in Use Protection feature is to ensure that Persistent Volume Claims (PVCs) in active use by a Pod and Persistent Volumes (PVs) that are bound to PVCs are not removed from the system, as this may result in data loss.
+
+{{< note >}}
+A PVC is in active use by a Pod when a Pod object exists that uses the PVC.
+{{< /note >}}
+
+If a user deletes a PVC in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pods. Also, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.
+
+You can see that a PVC is protected when the PVC's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pvc-protection`:
+
+```shell
+kubectl describe pvc hostpath
+Name: hostpath
+Namespace: default
+StorageClass: example-hostpath
+Status: Terminating
+Volume:
+Labels:
+Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath
+ volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath
+Finalizers: [kubernetes.io/pvc-protection]
+...
+```
+
+You can see that a PV is protected when the PV's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pv-protection` too:
+
+```shell
+kubectl describe pv task-pv-volume
+Name: task-pv-volume
+Labels: type=local
+Annotations:
+Finalizers: [kubernetes.io/pv-protection]
+StorageClass: standard
+Status: Terminating
+Claim:
+Reclaim Policy: Delete
+Access Modes: RWO
+Capacity: 1Gi
+Message:
+Source:
+ Type: HostPath (bare host directory volume)
+ Path: /tmp/data
+ HostPathType:
+Events:
+```
+
+### Reclaiming
+
+When a user is done with their volume, they can delete the PVC objects from the API, which allows reclamation of the resource. The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.
+
+#### Retain
+
+The `Retain` reclaim policy allows for manual reclamation of the resource. When the `PersistentVolumeClaim` is deleted, the `PersistentVolume` still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.
+
+1. Delete the `PersistentVolume`. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
+1. Manually clean up the data on the associated storage asset accordingly.
+1. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new `PersistentVolume` with the storage asset definition.
+
+#### Delete
+
+For volume plugins that support the `Delete` reclaim policy, deletion removes both the `PersistentVolume` object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the [reclaim policy of their `StorageClass`](#reclaim-policy), which defaults to `Delete`. The administrator should configure the `StorageClass` according to users' expectations; otherwise, the PV must be edited or patched after it is created. See [Change the Reclaim Policy of a PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/).
+
+#### Recycle
+
+{{< warning >}}
+The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning.
+{{< /warning >}}
+
+If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim.
+
+However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). The custom recycler Pod template must contain a `volumes` specification, as shown in the example below:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pv-recycler
+ namespace: default
+spec:
+ restartPolicy: Never
+ volumes:
+ - name: vol
+ hostPath:
+ path: /any/path/it/will/be/replaced
+ containers:
+ - name: pv-recycler
+ image: "k8s.gcr.io/busybox"
+ command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
+ volumeMounts:
+ - name: vol
+ mountPath: /scrub
+```
+
+The particular path specified in the custom recycler Pod template in the `volumes` part is replaced with the path of the volume that is being recycled.
+
+### Expanding Persistent Volumes Claims
+
+{{< feature-state for_k8s_version="v1.11" state="beta" >}}
+
+Support for expanding PersistentVolumeClaims (PVCs) is now enabled by default. You can expand
+the following types of volumes:
+
+* gcePersistentDisk
+* awsElasticBlockStore
+* Cinder
+* glusterfs
+* rbd
+* Azure File
+* Azure Disk
+* Portworx
+* FlexVolumes
+* CSI
+
+You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: gluster-vol-default
+provisioner: kubernetes.io/glusterfs
+parameters:
+ resturl: "http://192.168.10.100:8080"
+ restuser: ""
+ secretNamespace: ""
+ secretName: ""
+allowVolumeExpansion: true
+```
+
+To request a larger volume for a PVC, edit the PVC object and specify a larger
+size. This triggers expansion of the volume that backs the underlying `PersistentVolume`. A
+new `PersistentVolume` is never created to satisfy the claim. Instead, an existing volume is resized.
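+
+For example, assuming a claim named `myclaim` bound to a class with `allowVolumeExpansion: true` (such as the `gluster-vol-default` class above), the edited claim might look like the following sketch; the claim name and sizes are illustrative:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: myclaim                  # illustrative claim name
+spec:
+  accessModes:
+  - ReadWriteOnce
+  storageClassName: gluster-vol-default
+  resources:
+    requests:
+      storage: 16Gi              # previously 8Gi; increasing this triggers expansion
+```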
+
+#### CSI Volume expansion
+
+{{< feature-state for_k8s_version="v1.16" state="beta" >}}
+
+Support for expanding CSI volumes is enabled by default but it also requires a specific CSI driver to support volume expansion. Refer to documentation of the specific CSI driver for more information.
+
+
+#### Resizing a volume containing a file system
+
+You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4.
+
+When a volume contains a file system, the file system is only resized when a new Pod is using
+the `PersistentVolumeClaim` in ReadWrite mode. File system expansion is either done when a Pod is starting up
+or when a Pod is running and the underlying file system supports online expansion.
+
+FlexVolumes allow resize if the driver's `RequiresFSResize` capability is set to `true`.
+The FlexVolume can be resized on Pod restart.
+
+#### Resizing an in-use PersistentVolumeClaim
+
+{{< feature-state for_k8s_version="v1.15" state="beta" >}}
+
+{{< note >}}
+Expanding in-use PVCs is available as beta since Kubernetes 1.15, and as alpha since 1.11. The `ExpandInUsePersistentVolumes` feature must be enabled, which is the case automatically for many clusters for beta features. Refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information.
+{{< /note >}}
+
+In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC.
+Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded.
+This feature has no effect on PVCs that are not in use by a Pod or deployment. You must create a Pod that
+uses the PVC before the expansion can complete.
+
+
+Similar to other volume types, FlexVolume volumes can also be expanded when in use by a Pod.
+
+{{< note >}}
+FlexVolume resize is possible only when the underlying driver supports resize.
+{{< /note >}}
+
+{{< note >}}
+Expanding EBS volumes is a time-consuming operation. Also, there is a per-volume quota of one modification every 6 hours.
+{{< /note >}}
+
+
+## Types of Persistent Volumes
+
+`PersistentVolume` types are implemented as plugins. Kubernetes currently supports the following plugins:
+
+* GCEPersistentDisk
+* AWSElasticBlockStore
+* AzureFile
+* AzureDisk
+* CSI
+* FC (Fibre Channel)
+* FlexVolume
+* Flocker
+* NFS
+* iSCSI
+* RBD (Ceph Block Device)
+* CephFS
+* Cinder (OpenStack block storage)
+* Glusterfs
+* VsphereVolume
+* Quobyte Volumes
+* HostPath (Single node testing only -- local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
+* Portworx Volumes
+* ScaleIO Volumes
+* StorageOS
+
+## Persistent Volumes
+
+Each PV contains a spec and status, which is the specification and status of the volume.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv0003
+spec:
+ capacity:
+ storage: 5Gi
+ volumeMode: Filesystem
+ accessModes:
+ - ReadWriteOnce
+ persistentVolumeReclaimPolicy: Recycle
+ storageClassName: slow
+ mountOptions:
+ - hard
+ - nfsvers=4.1
+ nfs:
+ path: /tmp
+ server: 172.17.0.2
+```
+
+### Capacity
+
+Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) to understand the units expected by `capacity`.
+
+Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
+
+### Volume Mode
+
+{{< feature-state for_k8s_version="v1.13" state="beta" >}}
+
+Prior to Kubernetes 1.9, all volume plugins created a filesystem on the persistent volume.
+Now, you can set the value of `volumeMode` to `Block` to use a raw block device, or `Filesystem`
+to use a filesystem. `Filesystem` is the default if the value is omitted. This is an optional API
+parameter.
+
+### Access Modes
+
+A `PersistentVolume` can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.
+
+The access modes are:
+
+* ReadWriteOnce -- the volume can be mounted as read-write by a single node
+* ReadOnlyMany -- the volume can be mounted read-only by many nodes
+* ReadWriteMany -- the volume can be mounted as read-write by many nodes
+
+In the CLI, the access modes are abbreviated to:
+
+* RWO - ReadWriteOnce
+* ROX - ReadOnlyMany
+* RWX - ReadWriteMany
+
+> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
+
+
+| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany|
+| :--- | :---: | :---: | :---: |
+| AWSElasticBlockStore | ✓ | - | - |
+| AzureFile | ✓ | ✓ | ✓ |
+| AzureDisk | ✓ | - | - |
+| CephFS | ✓ | ✓ | ✓ |
+| Cinder | ✓ | - | - |
+| CSI | depends on the driver | depends on the driver | depends on the driver |
+| FC | ✓ | ✓ | - |
+| FlexVolume | ✓ | ✓ | depends on the driver |
+| Flocker | ✓ | - | - |
+| GCEPersistentDisk | ✓ | ✓ | - |
+| Glusterfs | ✓ | ✓ | ✓ |
+| HostPath | ✓ | - | - |
+| iSCSI | ✓ | ✓ | - |
+| Quobyte | ✓ | ✓ | ✓ |
+| NFS | ✓ | ✓ | ✓ |
+| RBD | ✓ | ✓ | - |
+| VsphereVolume | ✓ | - | - (works when Pods are collocated) |
+| PortworxVolume | ✓ | - | ✓ |
+| ScaleIO | ✓ | ✓ | - |
+| StorageOS | ✓ | - | - |
+
+### Class
+
+A PV can have a class, which is specified by setting the
+`storageClassName` attribute to the name of a
+[StorageClass](/docs/concepts/storage/storage-classes/).
+A PV of a particular class can only be bound to PVCs requesting
+that class. A PV with no `storageClassName` has no class and can only be bound
+to PVCs that request no particular class.
+
+In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead
+of the `storageClassName` attribute. This annotation is still working; however,
+it will become fully deprecated in a future Kubernetes release.
+
+### Reclaim Policy
+
+Current reclaim policies are:
+
+* Retain -- manual reclamation
+* Recycle -- basic scrub (`rm -rf /thevolume/*`)
+* Delete -- associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted
+
+Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
+
+### Mount Options
+
+A Kubernetes administrator can specify additional mount options for when a Persistent Volume is mounted on a node.
+
+{{< note >}}
+Not all Persistent Volume types support mount options.
+{{< /note >}}
+
+The following volume types support mount options:
+
+* AWSElasticBlockStore
+* AzureDisk
+* AzureFile
+* CephFS
+* Cinder (OpenStack block storage)
+* GCEPersistentDisk
+* Glusterfs
+* NFS
+* Quobyte Volumes
+* RBD (Ceph Block Device)
+* StorageOS
+* VsphereVolume
+* iSCSI
+
+Mount options are not validated, so mount will simply fail if one is invalid.
+
+In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
+of the `mountOptions` attribute. This annotation is still working; however,
+it will become fully deprecated in a future Kubernetes release.
+
+### Node Affinity
+
+{{< note >}}
+For most volume types, you do not need to set this field. It is automatically populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and [Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
+{{< /note >}}
+
+A PV can specify [node affinity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core) to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity.
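+
+As a sketch, a [local](/docs/concepts/storage/volumes/#local) PV constrained to a single node might look like this; the node name, path, and storage class name are illustrative:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: example-local-pv
+spec:
+  capacity:
+    storage: 100Gi
+  accessModes:
+  - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Retain
+  storageClassName: local-storage
+  local:
+    path: /mnt/disks/ssd1        # illustrative path on the node
+  nodeAffinity:
+    required:
+      nodeSelectorTerms:
+      - matchExpressions:
+        - key: kubernetes.io/hostname
+          operator: In
+          values:
+          - example-node         # illustrative node name
+```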
+
+### Phase
+
+A volume will be in one of the following phases:
+
+* Available -- a free resource that is not yet bound to a claim
+* Bound -- the volume is bound to a claim
+* Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster
+* Failed -- the volume has failed its automatic reclamation
+
+The CLI will show the name of the PVC bound to the PV.
+
+## PersistentVolumeClaims
+
+Each PVC contains a spec and status, which is the specification and status of the claim.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: myclaim
+spec:
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Filesystem
+ resources:
+ requests:
+ storage: 8Gi
+ storageClassName: slow
+ selector:
+ matchLabels:
+ release: "stable"
+ matchExpressions:
+ - {key: environment, operator: In, values: [dev]}
+```
+
+### Access Modes
+
+Claims use the same conventions as volumes when requesting storage with specific access modes.
+
+### Volume Modes
+
+Claims use the same convention as volumes to indicate the consumption of the volume as either a filesystem or block device.
+
+### Resources
+
+Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) applies to both volumes and claims.
+
+### Selector
+
+Claims can specify a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:
+
+* `matchLabels` - the volume must have a label with this value
+* `matchExpressions` - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist.
+
+All of the requirements, from both `matchLabels` and `matchExpressions`, are ANDed together – they must all be satisfied in order to match.
+
+### Class
+
+A claim can request a particular class by specifying the name of a
+[StorageClass](/docs/concepts/storage/storage-classes/)
+using the attribute `storageClassName`.
+Only PVs of the requested class, ones with the same `storageClassName` as the PVC, can
+be bound to the PVC.
+
+PVCs don't necessarily have to request a class. A PVC with its `storageClassName` set
+equal to `""` is always interpreted to be requesting a PV with no class, so it
+can only be bound to PVs with no class (no annotation or one set equal to
+`""`). A PVC with no `storageClassName` is not quite the same and is treated differently
+by the cluster, depending on whether the
+[`DefaultStorageClass` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
+is turned on.
+
+* If the admission plugin is turned on, the administrator may specify a
+ default `StorageClass`. All PVCs that have no `storageClassName` can be bound only to
+ PVs of that default. Specifying a default `StorageClass` is done by setting the
+ annotation `storageclass.kubernetes.io/is-default-class` equal to `true` in
+ a `StorageClass` object. If the administrator does not specify a default, the
+ cluster responds to PVC creation as if the admission plugin were turned off. If
+ more than one default is specified, the admission plugin forbids the creation of
+ all PVCs.
+* If the admission plugin is turned off, there is no notion of a default
+ `StorageClass`. All PVCs that have no `storageClassName` can be bound only to PVs that
+ have no class. In this case, the PVCs that have no `storageClassName` are treated the
+ same way as PVCs that have their `storageClassName` set to `""`.
+
+Depending on the installation method, a default StorageClass may be deployed
+to a Kubernetes cluster by the addon manager during installation.
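+
+For example, a default class might be declared like this; the class name and provisioner are illustrative:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: standard                 # illustrative name
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: kubernetes.io/gce-pd  # illustrative provisioner
+parameters:
+  type: pd-standard
+```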
+
+When a PVC specifies a `selector` in addition to requesting a `StorageClass`,
+the requirements are ANDed together: only a PV of the requested class and with
+the requested labels may be bound to the PVC.
+
+{{< note >}}
+Currently, a PVC with a non-empty `selector` can't have a PV dynamically provisioned for it.
+{{< /note >}}
+
+In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead
+of `storageClassName` attribute. This annotation is still working; however,
+it won't be supported in a future Kubernetes release.
+
+## Claims As Volumes
+
+Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim. The cluster finds the claim in the Pod's namespace and uses it to get the `PersistentVolume` backing the claim. The volume is then mounted to the host and into the Pod.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: myfrontend
+ image: nginx
+ volumeMounts:
+ - mountPath: "/var/www/html"
+ name: mypd
+ volumes:
+ - name: mypd
+ persistentVolumeClaim:
+ claimName: myclaim
+```
+
+### A Note on Namespaces
+
+`PersistentVolume` binds are exclusive, and since `PersistentVolumeClaims` are namespaced objects, mounting claims with "Many" modes (`ROX`, `RWX`) is only possible within one namespace.
+
+## Raw Block Volume Support
+
+{{< feature-state for_k8s_version="v1.13" state="beta" >}}
+
+The following volume plugins support raw block volumes, including dynamic provisioning where
+applicable:
+
+* AWSElasticBlockStore
+* AzureDisk
+* FC (Fibre Channel)
+* GCEPersistentDisk
+* iSCSI
+* Local volume
+* RBD (Ceph Block Device)
+* VsphereVolume (alpha)
+
+{{< note >}}
+Only FC and iSCSI volumes supported raw block volumes in Kubernetes 1.9.
+Support for the additional plugins was added in 1.10.
+{{< /note >}}
+
+### Persistent Volumes using a Raw Block Volume
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: block-pv
+spec:
+ capacity:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Block
+ persistentVolumeReclaimPolicy: Retain
+ fc:
+ targetWWNs: ["50060e801049cfd1"]
+ lun: 0
+ readOnly: false
+```
+### Persistent Volume Claim requesting a Raw Block Volume
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: block-pvc
+spec:
+ accessModes:
+ - ReadWriteOnce
+ volumeMode: Block
+ resources:
+ requests:
+ storage: 10Gi
+```
+### Pod specification adding Raw Block Device path in container
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pod-with-block-volume
+spec:
+ containers:
+ - name: fc-container
+ image: fedora:26
+ command: ["/bin/sh", "-c"]
+ args: [ "tail -f /dev/null" ]
+ volumeDevices:
+ - name: data
+ devicePath: /dev/xvda
+ volumes:
+ - name: data
+ persistentVolumeClaim:
+ claimName: block-pvc
+```
+
+{{< note >}}
+When adding a raw block device for a Pod, you specify the device path in the container instead of a mount path.
+{{< /note >}}
+
+### Binding Block Volumes
+
+If a user requests a raw block volume by indicating this using the `volumeMode` field in the `PersistentVolumeClaim` spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec.
+The following table lists the possible combinations of `volumeMode` that a user and an admin might specify for requesting a raw block device, and whether the volume will be bound for each combination.
+Volume binding matrix for statically provisioned volumes:
+
+| PV volumeMode | PVC volumeMode | Result |
+| --------------|:---------------:| ----------------:|
+| unspecified | unspecified | BIND |
+| unspecified | Block | NO BIND |
+| unspecified | Filesystem | BIND |
+| Block | unspecified | NO BIND |
+| Block | Block | BIND |
+| Block | Filesystem | NO BIND |
+| Filesystem | Filesystem | BIND |
+| Filesystem | Block | NO BIND |
+| Filesystem | unspecified | BIND |
+
+{{< note >}}
+Only statically provisioned volumes are supported for the alpha release. Administrators should take care to consider these values when working with raw block devices.
+{{< /note >}}
+
+## Volume Snapshot and Restore Volume from Snapshot Support
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+The volume snapshot feature was added to support CSI volume plugins only. For details, see [volume snapshots](/docs/concepts/storage/volume-snapshots/).
+
+To enable support for restoring a volume from a volume snapshot data source, enable the
+`VolumeSnapshotDataSource` feature gate on the apiserver and controller-manager.
+
+### Create Persistent Volume Claim from Volume Snapshot
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: restore-pvc
+spec:
+ storageClassName: csi-hostpath-sc
+ dataSource:
+ name: new-snapshot-test
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+```
+
+## Volume Cloning
+
+{{< feature-state for_k8s_version="v1.16" state="beta" >}}
+
+The volume cloning feature was added to support CSI volume plugins only. For details, see [volume cloning](/docs/concepts/storage/volume-pvc-datasource/).
+
+To enable support for cloning a volume from a PVC data source, enable the
+`VolumePVCDataSource` feature gate on the apiserver and controller-manager.
+
+### Create Persistent Volume Claim from an existing PVC
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: cloned-pvc
+spec:
+ storageClassName: my-csi-plugin
+ dataSource:
+ name: existing-src-pvc-name
+ kind: PersistentVolumeClaim
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 10Gi
+```
+
+## Writing Portable Configuration
+
+If you're writing configuration templates or examples that run on a wide range of clusters
+and need persistent storage, it is recommended that you use the following pattern:
+
+- Include PersistentVolumeClaim objects in your bundle of config (alongside
+ Deployments, ConfigMaps, etc).
+- Do not include PersistentVolume objects in the config, since the user instantiating
+ the config may not have permission to create PersistentVolumes.
+- Give the user the option of providing a storage class name when instantiating
+ the template.
+ - If the user provides a storage class name, put that value into the
+ `persistentVolumeClaim.storageClassName` field.
+ This will cause the PVC to match the right storage
+ class if the cluster has StorageClasses enabled by the admin.
+ - If the user does not provide a storage class name, leave the
+ `persistentVolumeClaim.storageClassName` field as nil. This will cause a
+ PV to be automatically provisioned for the user with the default StorageClass
+ in the cluster. Many cluster environments have a default StorageClass installed,
+ or administrators can create their own default StorageClass.
+- In your tooling, watch for PVCs that are not getting bound after some time
+ and surface this to the user, as this may indicate that the cluster has no
+ dynamic storage support (in which case the user should create a matching PV)
+ or the cluster has no storage system (in which case the user cannot deploy
+ config requiring PVCs).
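+
+Following this pattern, a portable claim in such a template might look like the sketch below; the claim name and size are illustrative:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: app-data                 # illustrative name
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+  # storageClassName is intentionally omitted here; set it only if the
+  # user provided a storage class name when instantiating the template.
+```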
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/workloads/_index.md b/content/uk/docs/concepts/workloads/_index.md
new file mode 100644
index 0000000000000..c826cbbcbc587
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Робочі навантаження"
+weight: 50
+---
diff --git a/content/uk/docs/concepts/workloads/controllers/_index.md b/content/uk/docs/concepts/workloads/controllers/_index.md
new file mode 100644
index 0000000000000..3e5306f908cbc
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/controllers/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Контролери"
+weight: 20
+---
diff --git a/content/uk/docs/concepts/workloads/controllers/deployment.md b/content/uk/docs/concepts/workloads/controllers/deployment.md
new file mode 100644
index 0000000000000..4d676e76f065d
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/controllers/deployment.md
@@ -0,0 +1,1152 @@
+---
+reviewers:
+- janetkuo
+title: Deployments
+feature:
+ title: Автоматичне розгортання і відкатування
+ description: >
+ Kubernetes вносить зміни до вашого застосунку чи його конфігурації по мірі їх надходження. Водночас система моніторить робочий стан застосунку для того, щоб ці зміни не призвели до одночасної зупинки усіх ваших Подів. У випадку будь-яких збоїв, Kubernetes відкотить зміни назад. Скористайтеся перевагами зростаючої екосистеми інструментів для розгортання застосунків.
+
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
+[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).
+
+You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_tooltip term_id="controller" >}} changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
+
+{{< note >}}
+Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
+{{< /note >}}
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Use Case
+
+The following are typical use cases for Deployments:
+
+* [Create a Deployment to rollout a ReplicaSet](#creating-a-deployment). The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
+* [Declare the new state of the Pods](#updating-a-deployment) by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
+* [Rollback to an earlier Deployment revision](#rolling-back-a-deployment) if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
+* [Scale up the Deployment to facilitate more load](#scaling-a-deployment).
+* [Pause the Deployment](#pausing-and-resuming-a-deployment) to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
+* [Use the status of the Deployment](#deployment-status) as an indicator that a rollout has stuck.
+* [Clean up older ReplicaSets](#clean-up-policy) that you don't need anymore.
+
+## Creating a Deployment
+
+The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods:
+
+{{< codenew file="controllers/nginx-deployment.yaml" >}}
+
+In this example:
+
+* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
+* The Deployment creates three replicated Pods, indicated by the `replicas` field.
+* The `selector` field defines how the Deployment finds which Pods to manage.
+ In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
+ However, more sophisticated selection rules are possible,
+ as long as the Pod template itself satisfies the rule.
+ {{< note >}}
+ The `matchLabels` field is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map
+ is equivalent to an element of `matchExpressions`, whose key field is "key" the operator is "In",
+ and the values array contains only "value".
+ All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match.
+ {{< /note >}}
+
+* The `template` field contains the following sub-fields:
+  * The Pods are labeled `app: nginx` using the `labels` field.
+ * The Pod template's specification, or `.template.spec` field, indicates that
+ the Pods run one container, `nginx`, which runs the `nginx`
+ [Docker Hub](https://hub.docker.com/) image at version 1.7.9.
+ * Create one container and name it `nginx` using the `name` field.
+
+ Follow the steps given below to create the above Deployment:
+
+ Before you begin, make sure your Kubernetes cluster is up and running.
+
+ 1. Create the Deployment by running the following command:
+
+ {{< note >}}
+ You may specify the `--record` flag to write the command executed in the resource annotation `kubernetes.io/change-cause`. It is useful for future introspection.
+ For example, to see the commands executed in each Deployment revision.
+ {{< /note >}}
+
+ ```shell
+ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
+ ```
+
+ 2. Run `kubectl get deployments` to check if the Deployment was created. If the Deployment is still being created, the output is similar to the following:
+ ```shell
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 0/3 0 0 1s
+ ```
+ When you inspect the Deployments in your cluster, the following fields are displayed:
+
+   * `NAME` lists the names of the Deployments in the namespace.
+   * `READY` displays how many replicas of the application are ready. It follows the pattern ready/desired.
+   * `UP-TO-DATE` displays the number of replicas that have been updated to achieve the desired state.
+   * `AVAILABLE` displays how many replicas of the application are available to your users.
+   * `AGE` displays the amount of time that the application has been running.
+
+ Notice how the number of desired replicas is 3 according to `.spec.replicas` field.
+
+ 3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`. The output is similar to this:
+ ```shell
+ Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
+ deployment.apps/nginx-deployment successfully rolled out
+ ```
+
+ 4. Run the `kubectl get deployments` again a few seconds later. The output is similar to this:
+ ```shell
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 3/3 3 3 18s
+ ```
+ Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.
+
+ 5. To see the ReplicaSet (`rs`) created by the Deployment, run `kubectl get rs`. The output is similar to this:
+ ```shell
+ NAME DESIRED CURRENT READY AGE
+ nginx-deployment-75675f5897 3 3 3 18s
+ ```
+   Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string is
+   generated using the `pod-template-hash` as a seed.
+
+ 6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`. The following output is returned:
+ ```shell
+ NAME READY STATUS RESTARTS AGE LABELS
+ nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
+ nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
+ nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
+ ```
+ The created ReplicaSet ensures that there are three `nginx` Pods.
+
+ {{< note >}}
+ You must specify an appropriate selector and Pod template labels in a Deployment (in this case,
+ `app: nginx`). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly.
+ {{< /note >}}
+
+### Pod-template-hash label
+
+{{< note >}}
+Do not change this label.
+{{< /note >}}
+
+The `pod-template-hash` label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.
+
+This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the `PodTemplate` of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels,
+and in any existing Pods that the ReplicaSet might have.
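
The effect of this label can be sketched in a few lines. This is illustrative only: the real controller hashes the PodTemplate with an FNV hash and safe-encodes the result, while a truncated SHA-256 stands in here; the property that matters is that the label value is deterministic and changes whenever the template changes.

```python
import hashlib
import json

def pod_template_hash(template: dict) -> str:
    """Derive a deterministic label value from a Pod template (sketch).

    The real Deployment controller uses an FNV hash of the PodTemplate;
    any stable hash demonstrates the non-overlap property shown here.
    """
    payload = json.dumps(template, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:10]

old = {"labels": {"app": "nginx"},
       "containers": [{"name": "nginx", "image": "nginx:1.7.9"}]}
new = {"labels": {"app": "nginx"},
       "containers": [{"name": "nginx", "image": "nginx:1.9.1"}]}

# Identical templates always produce the same value; any template change
# produces a new one, so child ReplicaSets never select each other's Pods.
assert pod_template_hash(old) == pod_template_hash(old)
assert pod_template_hash(old) != pod_template_hash(new)
```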
+
+## Updating a Deployment
+
+{{< note >}}
+A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, `.spec.template`)
+is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
+{{< /note >}}
+
+Follow the steps given below to update your Deployment:
+
+1. Let's update the nginx Pods to use the `nginx:1.9.1` image instead of the `nginx:1.7.9` image.
+
+ ```shell
+ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
+ ```
+ or simply use the following command:
+
+ ```shell
+ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment image updated
+ ```
+
+ Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:
+
+ ```shell
+ kubectl edit deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment edited
+ ```
+
+2. To see the rollout status, run:
+
+ ```shell
+ kubectl rollout status deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
+ ```
+ or
+ ```
+ deployment.apps/nginx-deployment successfully rolled out
+ ```
+
+Get more details on your updated Deployment:
+
+* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
+ The output is similar to this:
+ ```
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 3/3 3 3 36s
+ ```
+
+* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
+up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
+
+ ```shell
+ kubectl get rs
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-deployment-1564180365 3 3 3 6s
+ nginx-deployment-2035384211 0 0 0 36s
+ ```
+
+* Running `get pods` should now show only the new Pods:
+
+ ```shell
+ kubectl get pods
+ ```
+
+ The output is similar to this:
+ ```
+ NAME READY STATUS RESTARTS AGE
+ nginx-deployment-1564180365-khku8 1/1 Running 0 14s
+ nginx-deployment-1564180365-nacti 1/1 Running 0 14s
+ nginx-deployment-1564180365-z9gth 1/1 Running 0 14s
+ ```
+
+ Next time you want to update these Pods, you only need to update the Deployment's Pod template again.
+
+ Deployment ensures that only a certain number of Pods are down while they are being updated. By default,
+ it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).
+
+ Deployment also ensures that only a certain number of Pods are created above the desired number of Pods.
+ By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).
+
+ For example, if you look at the above Deployment closely, you will see that it first created a new Pod,
+ then deleted some old Pods, and created new ones. It does not kill old Pods until a sufficient number of
+ new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed.
+ It makes sure that at least 2 Pods are available and that at most 4 Pods in total are created.
+
+* Get details of your Deployment:
+ ```shell
+ kubectl describe deployments
+ ```
+ The output is similar to this:
+ ```
+ Name: nginx-deployment
+ Namespace: default
+ CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000
+ Labels: app=nginx
+ Annotations: deployment.kubernetes.io/revision=2
+ Selector: app=nginx
+ Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
+ StrategyType: RollingUpdate
+ MinReadySeconds: 0
+ RollingUpdateStrategy: 25% max unavailable, 25% max surge
+ Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx:1.9.1
+ Port: 80/TCP
+ Environment:
+ Mounts:
+ Volumes:
+ Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+ OldReplicaSets:
+ NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)
+ Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3
+ Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1
+ Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2
+ Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2
+ Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1
+ Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3
+ Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0
+ ```
+ Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211)
+ and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet
+ (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at
+ least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down
+ the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas
+ in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.
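
The scaling sequence in those events can be reproduced with a tiny simulation. This is a simplified sketch, not the controller's code: it assumes every Pod is ready the moment it is created, and it hard-codes the bounds from this example (at least 2 Pods available, at most 4 in total).

```python
def rolling_update(desired=3, max_total=4):
    """Return the (old, new) replica counts of a simplified rolling update.

    Sketch only: real rollouts wait for new Pods to become ready before
    scaling the old ReplicaSet down.
    """
    old, new = desired, 0
    steps = []
    while old > 0 or new < desired:
        if new < desired and old + new < max_total:
            new += 1   # room below the surge ceiling: create a new Pod
        else:
            old -= 1   # still above the availability floor: kill an old Pod
        steps.append((old, new))
    return steps

print(rolling_update())
```

The resulting sequence, `[(3, 1), (2, 1), (2, 2), (1, 2), (1, 3), (0, 3)]`, mirrors the alternating scale-up/scale-down `ScalingReplicaSet` events above.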
+
+### Rollover (aka multiple updates in-flight)
+
+Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up
+the desired Pods. If the Deployment is updated, the existing ReplicaSets that control Pods whose labels
+match `.spec.selector` but whose template does not match `.spec.template` are scaled down. Eventually, the new
+ReplicaSet is scaled to `.spec.replicas` and all old ReplicaSets are scaled to 0.
+
+If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet
+as per the update and starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously
+ -- it adds that ReplicaSet to its list of old ReplicaSets and starts scaling it down.
+
+For example, suppose you create a Deployment to create 5 replicas of `nginx:1.7.9`,
+but then update the Deployment to create 5 replicas of `nginx:1.9.1`, when only 3
+replicas of `nginx:1.7.9` had been created. In that case, the Deployment immediately starts
+killing the 3 `nginx:1.7.9` Pods that it had created, and starts creating
+`nginx:1.9.1` Pods. It does not wait for the 5 replicas of `nginx:1.7.9` to be created
+before changing course.
+
+### Label selector updates
+
+It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front.
+In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped
+all of the implications.
+
+{{< note >}}
+In API version `apps/v1`, a Deployment's label selector is immutable after it gets created.
+{{< /note >}}
+
+* Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too,
+otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does
+not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and
+creating a new ReplicaSet.
+* Selector updates, which change the existing value in a selector key, result in the same behavior as additions.
+* Selector removals, which remove an existing key from the Deployment selector, do not require any changes in the
+Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the
+removed label still exists in any existing Pods and ReplicaSets.
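
The validation rule behind selector additions amounts to a containment check: every key/value pair in the selector must also appear in the Pod template labels. The helper below is a hypothetical sketch illustrating that documented rule, not the API server's actual validation code.

```python
def validate_selector(selector: dict, template_labels: dict) -> None:
    """Reject a selector that does not match the Deployment's own template.

    Hypothetical sketch of the documented rule: selector additions must be
    mirrored in the Pod template labels, otherwise validation fails.
    """
    mismatched = {key: value for key, value in selector.items()
                  if template_labels.get(key) != value}
    if mismatched:
        raise ValueError(f"selector does not match template labels: {mismatched}")

validate_selector({"app": "nginx"}, {"app": "nginx"})  # passes

try:
    # Adding `tier: web` to the selector without also adding it to the
    # template labels is the invalid case described above.
    validate_selector({"app": "nginx", "tier": "web"}, {"app": "nginx"})
except ValueError as err:
    print(err)
```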
+
+## Rolling Back a Deployment
+
+Sometimes, you may want to roll back a Deployment; for example, when the Deployment is not stable, such as when it is crash looping.
+By default, all of the Deployment's rollout history is kept in the system so that you can roll back anytime you want
+(you can change that by modifying the revision history limit).
+
+{{< note >}}
+A Deployment's revision is created when a Deployment's rollout is triggered. This means that the
+new revision is created if and only if the Deployment's Pod template (`.spec.template`) is changed,
+for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment,
+do not create a Deployment revision, so that you can facilitate simultaneous manual- or auto-scaling.
+This means that when you roll back to an earlier revision, only the Deployment's Pod template part is
+rolled back.
+{{< /note >}}
+
+* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
+
+ ```shell
+ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment image updated
+ ```
+
+* The rollout gets stuck. You can verify it by checking the rollout status:
+
+ ```shell
+ kubectl rollout status deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
+ ```
+
+* Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts,
+[read more here](#deployment-status).
+
+* You see that the number of old replicas (`nginx-deployment-1564180365` and `nginx-deployment-2035384211`) adds up to 3, and the number of new replicas (`nginx-deployment-3066724191`) is 1.
+
+ ```shell
+ kubectl get rs
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-deployment-1564180365 3 3 3 25s
+ nginx-deployment-2035384211 0 0 0 36s
+ nginx-deployment-3066724191 1 1 0 6s
+ ```
+
+* Looking at the Pods created, you see that 1 Pod created by the new ReplicaSet is stuck in an image pull loop.
+
+ ```shell
+ kubectl get pods
+ ```
+
+ The output is similar to this:
+ ```
+ NAME READY STATUS RESTARTS AGE
+ nginx-deployment-1564180365-70iae 1/1 Running 0 25s
+ nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
+ nginx-deployment-1564180365-hysrc 1/1 Running 0 25s
+ nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s
+ ```
+
+ {{< note >}}
+ The Deployment controller stops the bad rollout automatically, and stops scaling up the new
+ ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified.
+ Kubernetes by default sets the value to 25%.
+ {{< /note >}}
+
+* Get the description of the Deployment:
+ ```shell
+ kubectl describe deployment
+ ```
+
+ The output is similar to this:
+ ```
+ Name: nginx-deployment
+ Namespace: default
+ CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
+ Labels: app=nginx
+ Selector: app=nginx
+ Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable
+ StrategyType: RollingUpdate
+ MinReadySeconds: 0
+ RollingUpdateStrategy: 25% max unavailable, 25% max surge
+ Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx:1.91
+ Port: 80/TCP
+ Host Port: 0/TCP
+ Environment:
+ Mounts:
+ Volumes:
+ Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True ReplicaSetUpdated
+ OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)
+ NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)
+ Events:
+ FirstSeen LastSeen Count From SubObjectPath Type Reason Message
+ --------- -------- ----- ---- ------------- -------- ------ -------
+ 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
+ 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
+ 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
+ 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
+ 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1
+ 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
+ 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
+ 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
+ ```
+
+ To fix this, you need to roll back to a previous revision of the Deployment that is stable.
+
+### Checking Rollout History of a Deployment
+
+Follow the steps given below to check the rollout history:
+
+1. First, check the revisions of this Deployment:
+ ```shell
+ kubectl rollout history deployment.v1.apps/nginx-deployment
+ ```
+ The output is similar to this:
+ ```
+ deployments "nginx-deployment"
+ REVISION CHANGE-CAUSE
+ 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
+ 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
+ 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
+ ```
+
+ `CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. You can specify the `CHANGE-CAUSE` message by:
+
+ * Annotating the Deployment with `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"`
+ * Appending the `--record` flag to save the `kubectl` command that is making changes to the resource.
+ * Manually editing the manifest of the resource.
+
+2. To see the details of each revision, run:
+ ```shell
+ kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
+ ```
+
+ The output is similar to this:
+ ```
+ deployments "nginx-deployment" revision 2
+ Labels: app=nginx
+ pod-template-hash=1159050644
+ Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
+ Containers:
+ nginx:
+ Image: nginx:1.9.1
+ Port: 80/TCP
+ QoS Tier:
+ cpu: BestEffort
+ memory: BestEffort
+ Environment Variables:
+ No volumes.
+ ```
+
+### Rolling Back to a Previous Revision
+Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2.
+
+1. Now you've decided to undo the current rollout and roll back to the previous revision:
+ ```shell
+ kubectl rollout undo deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment
+ ```
+ Alternatively, you can roll back to a specific revision by specifying it with `--to-revision`:
+
+ ```shell
+ kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment
+ ```
+
+ For more details about rollout related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout).
+
+ The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event
+ for rolling back to revision 2 is generated from the Deployment controller.
+
+2. To check if the rollback was successful and the Deployment is running as expected, run:
+ ```shell
+ kubectl get deployment nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 3/3 3 3 30m
+ ```
+3. Get the description of the Deployment:
+ ```shell
+ kubectl describe deployment nginx-deployment
+ ```
+ The output is similar to this:
+ ```
+ Name: nginx-deployment
+ Namespace: default
+ CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500
+ Labels: app=nginx
+ Annotations: deployment.kubernetes.io/revision=4
+ kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
+ Selector: app=nginx
+ Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
+ StrategyType: RollingUpdate
+ MinReadySeconds: 0
+ RollingUpdateStrategy: 25% max unavailable, 25% max surge
+ Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx:1.9.1
+ Port: 80/TCP
+ Host Port: 0/TCP
+ Environment:
+ Mounts:
+ Volumes:
+ Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+ OldReplicaSets:
+ NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)
+ Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3
+ Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1
+ Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2
+ Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2
+ Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1
+ Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3
+ Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0
+ Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1
+ Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2
+ Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0
+ ```
+
+## Scaling a Deployment
+
+You can scale a Deployment by using the following command:
+
+```shell
+kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
+```
+The output is similar to this:
+```
+deployment.apps/nginx-deployment scaled
+```
+
+Assuming [horizontal Pod autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) is enabled
+in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of
+Pods you want to run based on the CPU utilization of your existing Pods.
+
+```shell
+kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80
+```
+The output is similar to this:
+```
+deployment.apps/nginx-deployment scaled
+```
+
+### Proportional scaling
+
+RollingUpdate Deployments support running multiple versions of an application at the same time. When you
+or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress
+or paused), the Deployment controller balances the additional replicas in the existing active
+ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *proportional scaling*.
+
+For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2.
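
Those absolute values are what the default percentage settings work out to for 10 replicas: when converting, `maxSurge` is rounded up and `maxUnavailable` is rounded down. A sketch of that conversion, assuming percentage-style settings:

```python
import math

def absolute_bounds(desired: int, max_surge_pct: int, max_unavailable_pct: int):
    """Convert percentage maxSurge/maxUnavailable into absolute Pod counts.

    maxSurge rounds up and maxUnavailable rounds down; a rollout then keeps
    the total Pod count between the two returned bounds.
    """
    surge = math.ceil(desired * max_surge_pct / 100)
    unavailable = math.floor(desired * max_unavailable_pct / 100)
    return desired - unavailable, desired + surge

low, high = absolute_bounds(10, 25, 25)
print(low, high)   # with 10 replicas at 25%/25%: between 8 and 13 Pods
```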
+
+* Ensure that the 10 replicas in your Deployment are running.
+ ```shell
+ kubectl get deploy
+ ```
+ The output is similar to this:
+
+ ```
+ NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 10 10 10 10 50s
+ ```
+
+* You update to a new image which happens to be unresolvable from inside the cluster.
+ ```shell
+ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment image updated
+ ```
+
+* The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the
+`maxUnavailable` requirement that you mentioned above. Check out the rollout status:
+ ```shell
+ kubectl get rs
+ ```
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-deployment-1989198191 5 5 0 9s
+ nginx-deployment-618515232 8 8 8 1m
+ ```
+
+* Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas
+to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren't using
+proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you
+spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the
+most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the
+ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.
+
+In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the
+new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming
+the new replicas become healthy. To confirm this, run:
+
+```shell
+kubectl get deploy
+```
+
+The output is similar to this:
+```
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+nginx-deployment 15 18 7 8 7m
+```
+The rollout status confirms how the replicas were added to each ReplicaSet.
+```shell
+kubectl get rs
+```
+
+The output is similar to this:
+```
+NAME DESIRED CURRENT READY AGE
+nginx-deployment-1989198191 7 7 0 7m
+nginx-deployment-618515232 11 11 11 7m
+```
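
The split shown above (3 extra replicas to the old ReplicaSet, 2 to the new one) can be reproduced with a largest-remainder split. This is a simplified sketch of proportional scaling, not the controller's exact algorithm, which also accounts for surge limits and may break ties differently:

```python
def spread(additional: int, sizes: dict) -> dict:
    """Distribute `additional` replicas in proportion to current sizes.

    Largest-remainder sketch of proportional scaling; a ReplicaSet with
    zero replicas receives a zero proportional share.
    """
    total = sum(sizes.values())
    exact = {name: count * additional / total for name, count in sizes.items()}
    result = {name: int(share) for name, share in exact.items()}
    leftover = additional - sum(result.values())
    # Hand leftovers to the largest fractional remainders first.
    for name in sorted(exact, key=lambda n: exact[n] - result[n], reverse=True):
        if leftover == 0:
            break
        result[name] += 1
        leftover -= 1
    return result

# 5 extra replicas over the old ReplicaSet (8 Pods) and the new one (5 Pods):
print(spread(5, {"old": 8, "new": 5}))   # {'old': 3, 'new': 2}
```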
+
+## Pausing and Resuming a Deployment
+
+You can pause a Deployment before triggering one or more updates and then resume it. This allows you to
+apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
+
+* For example, with a Deployment that was just created:
+ Get the Deployment details:
+ ```shell
+ kubectl get deploy
+ ```
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+ nginx 3 3 3 3 1m
+ ```
+ Get the rollout status:
+ ```shell
+ kubectl get rs
+ ```
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-2142116321 3 3 3 1m
+ ```
+
+* Pause by running the following command:
+ ```shell
+ kubectl rollout pause deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment paused
+ ```
+
+* Then update the image of the Deployment:
+ ```shell
+ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment image updated
+ ```
+
+* Notice that no new rollout started:
+ ```shell
+ kubectl rollout history deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployments "nginx"
+ REVISION CHANGE-CAUSE
+ 1
+ ```
+* Get the rollout status to ensure that the Deployment was updated successfully:
+ ```shell
+ kubectl get rs
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-2142116321 3 3 3 2m
+ ```
+
+* You can make as many updates as you wish, for example, update the resources that will be used:
+ ```shell
+ kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment resource requirements updated
+ ```
+
+ The Deployment continues running with the state it had before you paused it, but new updates to
+ the Deployment have no effect as long as the Deployment is paused.
+
+* Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:
+ ```shell
+ kubectl rollout resume deployment.v1.apps/nginx-deployment
+ ```
+
+ The output is similar to this:
+ ```
+ deployment.apps/nginx-deployment resumed
+ ```
+* Watch the status of the rollout until it's done.
+ ```shell
+ kubectl get rs -w
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-2142116321 2 2 2 2m
+ nginx-3926361531 2 2 0 6s
+ nginx-3926361531 2 2 1 18s
+ nginx-2142116321 1 2 2 2m
+ nginx-2142116321 1 2 2 2m
+ nginx-3926361531 3 2 1 18s
+ nginx-3926361531 3 2 1 18s
+ nginx-2142116321 1 1 1 2m
+ nginx-3926361531 3 3 1 18s
+ nginx-3926361531 3 3 2 19s
+ nginx-2142116321 0 1 1 2m
+ nginx-2142116321 0 1 1 2m
+ nginx-2142116321 0 0 0 2m
+ nginx-3926361531 3 3 3 20s
+ ```
+* Get the status of the latest rollout:
+ ```shell
+ kubectl get rs
+ ```
+
+ The output is similar to this:
+ ```
+ NAME DESIRED CURRENT READY AGE
+ nginx-2142116321 0 0 0 2m
+ nginx-3926361531 3 3 3 28s
+ ```
+{{< note >}}
+You cannot roll back a paused Deployment until you resume it.
+{{< /note >}}
+
+## Deployment status
+
+A Deployment enters various states during its lifecycle. It can be [progressing](#progressing-deployment) while
+rolling out a new ReplicaSet, it can be [complete](#complete-deployment), or it can [fail to progress](#failed-deployment).
+
+### Progressing Deployment
+
+Kubernetes marks a Deployment as _progressing_ when one of the following tasks is performed:
+
+* The Deployment creates a new ReplicaSet.
+* The Deployment is scaling up its newest ReplicaSet.
+* The Deployment is scaling down its older ReplicaSet(s).
+* New Pods become ready or available (ready for at least [MinReadySeconds](#min-ready-seconds)).
+
+You can monitor the progress for a Deployment by using `kubectl rollout status`.
+
+### Complete Deployment
+
+Kubernetes marks a Deployment as _complete_ when it has the following characteristics:
+
+* All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning any
+updates you've requested have been completed.
+* All of the replicas associated with the Deployment are available.
+* No old replicas for the Deployment are running.
+
+You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed
+successfully, `kubectl rollout status` returns a zero exit code.
+
+```shell
+kubectl rollout status deployment.v1.apps/nginx-deployment
+```
+The output is similar to this:
+```
+Waiting for rollout to finish: 2 of 3 updated replicas are available...
+deployment.apps/nginx-deployment successfully rolled out
+$ echo $?
+0
+```
+
+### Failed Deployment
+
+Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur
+due to some of the following factors:
+
+* Insufficient quota
+* Readiness probe failures
+* Image pull errors
+* Insufficient permissions
+* Limit ranges
+* Application runtime misconfiguration
+
+One way you can detect this condition is to specify a deadline parameter in your Deployment spec:
+[`.spec.progressDeadlineSeconds`](#progress-deadline-seconds). `.spec.progressDeadlineSeconds` denotes the
+number of seconds the Deployment controller waits before indicating (in the Deployment status) that the
+Deployment progress has stalled.
+
+The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report
+lack of progress for a Deployment after 10 minutes:
+
+```shell
+kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
+```
+The output is similar to this:
+```
+deployment.apps/nginx-deployment patched
+```
+Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
+attributes to the Deployment's `.status.conditions`:
+
+* Type=Progressing
+* Status=False
+* Reason=ProgressDeadlineExceeded
+
+See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions.
+
+{{< note >}}
+Kubernetes takes no action on a stalled Deployment other than to report a status condition with
+`Reason=ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for
+example, roll back the Deployment to its previous version.
+{{< /note >}}
+
+{{< note >}}
+If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can
+safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the
+deadline.
+{{< /note >}}
+
+You may experience transient errors with your Deployments, either due to a low timeout that you have set or
+due to any other kind of error that can be treated as transient. For example, let's suppose you have
+insufficient quota. If you describe the Deployment you will notice the following section:
+
+```shell
+kubectl describe deployment nginx-deployment
+```
+The output is similar to this:
+```
+<...>
+Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True ReplicaSetUpdated
+ ReplicaFailure True FailedCreate
+<...>
+```
+
+If you run `kubectl get deployment nginx-deployment -o yaml`, the Deployment status is similar to this:
+
+```
+status:
+ availableReplicas: 2
+ conditions:
+ - lastTransitionTime: 2016-10-04T12:25:39Z
+ lastUpdateTime: 2016-10-04T12:25:39Z
+ message: Replica set "nginx-deployment-4262182780" is progressing.
+ reason: ReplicaSetUpdated
+ status: "True"
+ type: Progressing
+ - lastTransitionTime: 2016-10-04T12:25:42Z
+ lastUpdateTime: 2016-10-04T12:25:42Z
+ message: Deployment has minimum availability.
+ reason: MinimumReplicasAvailable
+ status: "True"
+ type: Available
+ - lastTransitionTime: 2016-10-04T12:25:39Z
+ lastUpdateTime: 2016-10-04T12:25:39Z
+ message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
+ object-counts, requested: pods=1, used: pods=3, limited: pods=2'
+ reason: FailedCreate
+ status: "True"
+ type: ReplicaFailure
+ observedGeneration: 3
+ replicas: 2
+ unavailableReplicas: 2
+```
+
+Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the
+reason for the Progressing condition:
+
+```
+Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing False ProgressDeadlineExceeded
+ ReplicaFailure True FailedCreate
+```
+
+You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other
+controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota
+conditions and the Deployment controller then completes the Deployment rollout, you'll see the
+Deployment's status update with a successful condition (`Status=True` and `Reason=NewReplicaSetAvailable`).
+
+```
+Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+```
+
+`Type=Available` with `Status=True` means that your Deployment has minimum availability. Minimum availability is dictated
+by the parameters specified in the deployment strategy. `Type=Progressing` with `Status=True` means that your Deployment
+is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum
+required new replicas are available (see the Reason of the condition for the particulars - in our case
+`Reason=NewReplicaSetAvailable` means that the Deployment is complete).
+
+You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status`
+returns a non-zero exit code if the Deployment has exceeded the progress deadline.
+
+```shell
+kubectl rollout status deployment.v1.apps/nginx-deployment
+```
+The output is similar to this:
+```
+Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
+error: deployment "nginx" exceeded its progress deadline
+$ echo $?
+1
+```
+
+### Operating on a failed deployment
+
+All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back
+to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.
+
+## Clean up Policy
+
+You can set `.spec.revisionHistoryLimit` field in a Deployment to specify how many old ReplicaSets for
+this Deployment you want to retain. The rest will be garbage-collected in the background. By default,
+it is 10.
+
+{{< note >}}
+Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment,
+so that Deployment will not be able to roll back.
+{{< /note >}}
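+
+As a minimal sketch, the field sits at the top level of the Deployment's `spec` (the name and image below are illustrative):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment   # illustrative name
+spec:
+  revisionHistoryLimit: 3  # keep only the 3 most recent old ReplicaSets
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+```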
+
+## Canary Deployment
+
+If you want to roll out releases to a subset of users or servers using the Deployment, you
+can create multiple Deployments, one for each release, following the canary pattern described in
+[managing resources](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments).
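+
+As a sketch of that pattern, the stable and canary Deployments typically differ only in a `track` label and the image they run (the names, labels, and images here are illustrative, following the linked guide):
+
+```yaml
+# Stable release, serving most traffic
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend-stable
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: guestbook
+      track: stable
+  template:
+    metadata:
+      labels:
+        app: guestbook
+        track: stable
+    spec:
+      containers:
+      - name: frontend
+        image: gb-frontend:v3
+---
+# Canary release, receiving a small share of traffic
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend-canary
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: guestbook
+      track: canary
+  template:
+    metadata:
+      labels:
+        app: guestbook
+        track: canary
+    spec:
+      containers:
+      - name: frontend
+        image: gb-frontend:v4
+```
+
+A Service selecting only `app: guestbook` (without `track`) would then spread traffic across both Deployments in proportion to their replica counts.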
+
+## Writing a Deployment Spec
+
+As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields.
+For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/),
+configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.
+
+A Deployment also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
+
+### Pod Template
+
+The `.spec.template` and `.spec.selector` are the only required fields of the `.spec`.
+
+The `.spec.template` is a [Pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [Pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an
+`apiVersion` or `kind`.
+
+In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate
+labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [selector](#selector).
+
+Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is
+allowed, which is the default if not specified.
+
+### Replicas
+
+`.spec.replicas` is an optional field that specifies the number of desired Pods. It defaults to 1.
+
+### Selector
+
+`.spec.selector` is a required field that specifies a [label selector](/docs/concepts/overview/working-with-objects/labels/)
+for the Pods targeted by this Deployment.
+
+`.spec.selector` must match `.spec.template.metadata.labels`, or it will be rejected by the API.
+
+In API version `apps/v1`, `.spec.selector` and `.metadata.labels` do not default to `.spec.template.metadata.labels` if not set. So they must be set explicitly. Also note that `.spec.selector` is immutable after creation of the Deployment in `apps/v1`.
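+
+A minimal sketch of the required match (the label key and value are illustrative):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  selector:
+    matchLabels:
+      app: nginx        # must match the template labels below
+  template:
+    metadata:
+      labels:
+        app: nginx      # matched by .spec.selector
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+```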
+
+A Deployment may terminate Pods whose labels match the selector if their template is different
+from `.spec.template` or if the total number of such Pods exceeds `.spec.replicas`. It brings up new
+Pods with `.spec.template` if the number of Pods is less than the desired number.
+
+{{< note >}}
+You should not create other Pods whose labels match this selector, either directly, by creating
+another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you
+do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.
+{{< /note >}}
+
+If you have multiple controllers that have overlapping selectors, the controllers will fight with each
+other and won't behave correctly.
+
+### Strategy
+
+`.spec.strategy` specifies the strategy used to replace old Pods by new ones.
+`.spec.strategy.type` can be "Recreate" or "RollingUpdate". "RollingUpdate" is
+the default value.
+
+#### Recreate Deployment
+
+All existing Pods are killed before new ones are created when `.spec.strategy.type==Recreate`.
+
+#### Rolling Update Deployment
+
+The Deployment updates Pods in a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/)
+fashion when `.spec.strategy.type==RollingUpdate`. You can specify `maxUnavailable` and `maxSurge` to control
+the rolling update process.
+
+##### Max Unavailable
+
+`.spec.strategy.rollingUpdate.maxUnavailable` is an optional field that specifies the maximum number
+of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5)
+or a percentage of desired Pods (for example, 10%). The absolute number is calculated from percentage by
+rounding down. The value cannot be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. The default value is 25%.
+
+For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired
+Pods immediately when the rolling update starts. Once new Pods are ready, old ReplicaSet can be scaled
+down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available
+at all times during the update is at least 70% of the desired Pods.
+
+##### Max Surge
+
+`.spec.strategy.rollingUpdate.maxSurge` is an optional field that specifies the maximum number of Pods
+that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a
+percentage of desired Pods (for example, 10%). The value cannot be 0 if `MaxUnavailable` is 0. The absolute number
+is calculated from the percentage by rounding up. The default value is 25%.
+
+For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the
+rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired
+Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the
+total number of Pods running at any time during the update is at most 130% of desired Pods.
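+
+Both fields can be set together under the Deployment's strategy; for example (the 30% values are illustrative):
+
+```yaml
+spec:
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 30%  # at least 70% of desired Pods stay available
+      maxSurge: 30%        # at most 130% of desired Pods exist during the update
+```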
+
+### Progress Deadline Seconds
+
+`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
+to wait for your Deployment to progress before the system reports back that the Deployment has
+[failed progressing](#failed-deployment) - surfaced as a condition with `Type=Progressing`, `Status=False`,
+and `Reason=ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
+retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment
+controller will roll back a Deployment as soon as it observes such a condition.
+
+If specified, this field needs to be greater than `.spec.minReadySeconds`.
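+
+For example, to report a failed rollout after 10 minutes without progress (the values are illustrative):
+
+```yaml
+spec:
+  progressDeadlineSeconds: 600  # report Progressing=False after 600s without progress
+  minReadySeconds: 10           # must be less than progressDeadlineSeconds
+```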
+
+### Min Ready Seconds
+
+`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly
+created Pod should be ready without any of its containers crashing, for it to be considered available.
+This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when
+a Pod is considered ready, see [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
+
+### Rollback To
+
+Field `.spec.rollbackTo` has been deprecated in API versions `extensions/v1beta1` and `apps/v1beta1`, and is no longer supported in API versions starting `apps/v1beta2`. Instead, `kubectl rollout undo` as introduced in [Rolling Back to a Previous Revision](#rolling-back-to-a-previous-revision) should be used.
+
+### Revision History Limit
+
+A Deployment's revision history is stored in the ReplicaSets it controls.
+
+`.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain
+to allow rollback. These old ReplicaSets consume resources in `etcd` and crowd the output of `kubectl get rs`. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of Deployment. By default, 10 old ReplicaSets will be kept; however, the ideal value depends on the frequency and stability of new Deployments.
+
+More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up.
+In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.
+
+### Paused
+
+`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. The only difference between
+a paused Deployment and one that is not paused is that any changes to the PodTemplateSpec of the paused
+Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when
+it is created.
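+
+A Deployment created with the field set would pause rollouts from the start (a minimal fragment):
+
+```yaml
+spec:
+  paused: true  # changes to .spec.template will not trigger a rollout while set
+```
+
+In practice this field is usually toggled with `kubectl rollout pause` and `kubectl rollout resume` rather than edited directly.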
+
+## Alternative to Deployments
+
+### kubectl rolling-update
+
+[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) updates Pods and ReplicationControllers
+in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have
+additional features, such as rolling back to any previous revision even after the rolling update is done.
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md
new file mode 100644
index 0000000000000..36bf7876bcb79
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/controllers/jobs-run-to-completion.md
@@ -0,0 +1,480 @@
+---
+reviewers:
+- erictune
+- soltysh
+title: Jobs - Run to Completion
+content_template: templates/concept
+feature:
+ title: Пакетна обробка
+ description: >
+ На додачу до Сервісів, Kubernetes може керувати вашими робочими навантаженнями систем безперервної інтеграції та пакетної обробки, за потреби замінюючи контейнери, що відмовляють.
+weight: 70
+---
+
+{{% capture overview %}}
+
+A Job creates one or more Pods and ensures that a specified number of them successfully terminate.
+As pods successfully complete, the Job tracks the successful completions. When a specified number
+of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up
+the Pods it created.
+
+A simple case is to create one Job object in order to reliably run one Pod to completion.
+The Job object will start a new Pod if the first Pod fails or is deleted (for example
+due to a node hardware failure or a node reboot).
+
+You can also use a Job to run multiple Pods in parallel.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Running an example Job
+
+Here is an example Job config. It computes π to 2000 places and prints it out.
+It takes around 10s to complete.
+
+{{< codenew file="controllers/job.yaml" >}}
+
+You can run the example with this command:
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/job.yaml
+```
+```
+job.batch/pi created
+```
+
+Check on the status of the Job with `kubectl`:
+
+```shell
+kubectl describe jobs/pi
+```
+```
+Name: pi
+Namespace: default
+Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+ job-name=pi
+Annotations: kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":...
+Parallelism: 1
+Completions: 1
+Start Time: Mon, 02 Dec 2019 15:20:11 +0200
+Completed At: Mon, 02 Dec 2019 15:21:16 +0200
+Duration: 65s
+Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
+Pod Template:
+ Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+ job-name=pi
+ Containers:
+ pi:
+ Image: perl
+ Port:
+ Host Port:
+ Command:
+ perl
+ -Mbignum=bpi
+ -wle
+ print bpi(2000)
+ Environment:
+ Mounts:
+ Volumes:
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7
+```
+
+To view completed Pods of a Job, use `kubectl get pods`.
+
+To list all the Pods that belong to a Job in a machine readable form, you can use a command like this:
+
+```shell
+pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
+echo $pods
+```
+```
+pi-5rwd7
+```
+
+Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression
+that just gets the name from each Pod in the returned list.
+
+View the standard output of one of the pods:
+
+```shell
+kubectl logs $pods
+```
+The output is similar to this:
+```shell
+3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
+```
+
+## Writing a Job Spec
+
+As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields.
+
+A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
+
+### Pod Template
+
+The `.spec.template` is the only required field of the `.spec`.
+
+The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`.
+
+In addition to required fields for a Pod, a pod template in a Job must specify appropriate
+labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
+
+Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed.
+
+### Pod Selector
+
+The `.spec.selector` field is optional. In almost all cases you should not specify it.
+See section [specifying your own pod selector](#specifying-your-own-pod-selector).
+
+
+### Parallel Jobs
+
+There are three main types of task suitable to run as a Job:
+
+1. Non-parallel Jobs
+ - normally, only one Pod is started, unless the Pod fails.
+ - the Job is complete as soon as its Pod terminates successfully.
+1. Parallel Jobs with a *fixed completion count*:
+ - specify a non-zero positive value for `.spec.completions`.
+ - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`.
+ - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`.
+1. Parallel Jobs with a *work queue*:
+ - do not specify `.spec.completions`, default to `.spec.parallelism`.
+ - the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
+ - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done.
+ - when _any_ Pod from the Job terminates with success, no new Pods are created.
+ - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
+ - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting.
+
+For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are
+unset, both are defaulted to 1.
+
+For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed.
+You can set `.spec.parallelism`, or leave it unset and it will default to 1.
+
+For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
+a non-negative integer.
+
+For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.
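+
+As a sketch, a fixed-completion-count Job that processes 5 work items, at most 2 at a time, sets both fields (the name, image, and command are illustrative):
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: process-items
+spec:
+  completions: 5   # the Job is complete after 5 successful Pods
+  parallelism: 2   # run at most 2 Pods at once
+  template:
+    spec:
+      containers:
+      - name: worker
+        image: busybox
+        command: ["sh", "-c", "echo processing one work item"]
+      restartPolicy: Never
+```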
+
+
+#### Controlling Parallelism
+
+The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
+If it is unspecified, it defaults to 1.
+If it is specified as 0, then the Job is effectively paused until it is increased.
+
+Actual parallelism (number of pods running at any instant) may be more or less than requested
+parallelism, for a variety of reasons:
+
+- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of
+ remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
+- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however.
+- If the Job {{< glossary_tooltip term_id="controller" >}} has not had time to react.
+- If the Job controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.),
+ then there may be fewer pods than requested.
+- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
+- When a Pod is gracefully shut down, it takes time to stop.
+
+## Handling Pod and Container Failures
+
+A container in a Pod may fail for a number of reasons, such as because the process in it exited with
+a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this
+happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays
+on the node, but the container is re-run. Therefore, your program needs to handle the case when it is
+restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`.
+See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`.
+
+An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
+(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
+`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller
+starts a new Pod. This means that your application needs to handle the case when it is restarted in a new
+pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
+caused by previous runs.
+
+Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and
+`.spec.template.spec.restartPolicy = "Never"`, the same program may
+sometimes be started twice.
+
+If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be
+multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
+
+### Pod backoff failure policy
+
+There are situations where you want to fail a Job after some amount of retries
+due to a logical error in configuration etc.
+To do so, set `.spec.backoffLimit` to specify the number of retries before
+considering a Job as failed. The back-off limit is set by default to 6. Failed
+Pods associated with the Job are recreated by the Job controller with an
+exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The
+back-off count is reset if no new failed Pods appear before the Job's next
+status check.
+
+{{< note >}}
+Issue [#54870](https://github.com/kubernetes/kubernetes/issues/54870) still exists for versions of Kubernetes prior to version 1.12
+{{< /note >}}
+{{< note >}}
+If your job has `restartPolicy = "OnFailure"`, keep in mind that your container running the Job
+will be terminated once the job backoff limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting
+`restartPolicy = "Never"` when debugging the Job or using a logging system to ensure output
+from failed Jobs is not lost inadvertently.
+{{< /note >}}
+
+## Job Termination and Cleanup
+
+When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around
+allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output.
+The job object also remains after it is completed so that you can view its status. It is up to the user to delete
+old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too.
+
+By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the
+`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will be marked as failed and any running Pods will be terminated.
+
+Another way to terminate a Job is by setting an active deadline.
+Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.
+The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created.
+Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`.
+
+Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached.
+
+Example:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: pi-with-timeout
+spec:
+ backoffLimit: 5
+ activeDeadlineSeconds: 100
+ template:
+ spec:
+ containers:
+ - name: pi
+ image: perl
+ command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+ restartPolicy: Never
+```
+
+Note that both the Job spec and the [Pod template spec](/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level.
+
+Keep in mind that the `restartPolicy` applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is `type: Failed`.
+That is, the Job termination mechanisms activated with `.spec.activeDeadlineSeconds` and `.spec.backoffLimit` result in a permanent Job failure that requires manual intervention to resolve.
+
+## Clean Up Finished Jobs Automatically
+
+Finished Jobs are usually no longer needed in the system. Keeping them around in
+the system will put pressure on the API server. If the Jobs are managed directly
+by a higher level controller, such as
+[CronJobs](/docs/concepts/workloads/controllers/cron-jobs/), the Jobs can be
+cleaned up by CronJobs based on the specified capacity-based cleanup policy.
+
+### TTL Mechanism for Finished Jobs
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+Another way to clean up finished Jobs (either `Complete` or `Failed`)
+automatically is to use a TTL mechanism provided by a
+[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
+finished resources, by specifying the `.spec.ttlSecondsAfterFinished` field of
+the Job.
+
+When the TTL controller cleans up the Job, it will delete the Job cascadingly,
+i.e. delete its dependent objects, such as Pods, together with the Job. Note
+that when the Job is deleted, its lifecycle guarantees, such as finalizers, will
+be honored.
+
+For example:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: pi-with-ttl
+spec:
+ ttlSecondsAfterFinished: 100
+ template:
+ spec:
+ containers:
+ - name: pi
+ image: perl
+ command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+ restartPolicy: Never
+```
+
+The Job `pi-with-ttl` will be eligible to be automatically deleted, `100`
+seconds after it finishes.
+
+If the field is set to `0`, the Job will be eligible to be automatically deleted
+immediately after it finishes. If the field is unset, this Job won't be cleaned
+up by the TTL controller after it finishes.
+
+Note that this TTL mechanism is alpha, with feature gate `TTLAfterFinished`. For
+more information, see the documentation for
+[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
+finished resources.
+
+## Job Patterns
+
+The Job object can be used to support reliable parallel execution of Pods. The Job object is not
+designed to support closely-communicating parallel processes, as commonly found in scientific
+computing. It does support parallel processing of a set of independent but related *work items*.
+These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a
+NoSQL database to scan, and so on.
+
+In a complex system, there may be multiple different sets of work items. Here we are just
+considering one set of work items that the user wants to manage together — a *batch job*.
+
+There are several different patterns for parallel computation, each with strengths and weaknesses.
+The tradeoffs are:
+
+- One Job object for each work item, vs. a single Job object for all work items. The latter is
+ better for large numbers of work items. The former creates some overhead for the user and for the
+ system to manage large numbers of Job objects.
+- Number of pods created equals number of work items, vs. each Pod can process multiple work items.
+ The former typically requires less modification to existing code and containers. The latter
+ is better for large numbers of work items, for similar reasons to the previous bullet.
+- Several approaches use a work queue. This requires running a queue service,
+ and modifications to the existing program or container to make it use the work queue.
+ Other approaches are easier to adapt to an existing containerised application.
+
+
+The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
+The pattern names are also links to examples and more detailed description.
+
+| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
+| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
+| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ |
+| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | sometimes | ✓ |
+| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ |
+| Single Job with Static Work Assignment | ✓ | | ✓ | |
+
+When you specify completions with `.spec.completions`, each Pod created by the Job controller
+has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). This means that
+all pods for a task will have the same command line and the same
+image, the same volumes, and (almost) the same environment variables. These patterns
+are different ways to arrange for pods to work on different things.
+
+This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
+Here, `W` is the number of work items.
+
+| Pattern | `.spec.completions` | `.spec.parallelism` |
+| -------------------------------------------------------------------- |:-------------------:|:--------------------:|
+| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | 1 | should be 1 |
+| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any |
+| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any |
+| Single Job with Static Work Assignment | W | any |
+
+
+## Advanced Usage
+
+### Specifying your own pod selector
+
+Normally, when you create a Job object, you do not specify `.spec.selector`.
+The system defaulting logic adds this field when the Job is created.
+It picks a selector value that will not overlap with any other jobs.
+
+However, in some cases, you might need to override this automatically set selector.
+To do this, you can specify the `.spec.selector` of the Job.
+
+Be very careful when doing this. If you specify a label selector which is not
+unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated
+job may be deleted, or this Job may count other Pods as completing it, or one or both
+Jobs may refuse to create Pods or run to completion. If a non-unique selector is
+chosen, then other controllers (e.g. ReplicationController) and their Pods may behave
+in unpredictable ways too. Kubernetes will not stop you from making a mistake when
+specifying `.spec.selector`.
+
+Here is an example of a case when you might want to use this feature.
+
+Say Job `old` is already running. You want existing Pods
+to keep running, but you want the rest of the Pods it creates
+to use a different pod template and for the Job to have a new name.
+You cannot update the Job because these fields are not updatable.
+Therefore, you delete Job `old` but _leave its pods
+running_, using `kubectl delete jobs/old --cascade=false`.
+Before deleting it, you make a note of what selector it uses:
+
+```
+kubectl get job old -o yaml
+```
+```
+kind: Job
+metadata:
+ name: old
+ ...
+spec:
+ selector:
+ matchLabels:
+ controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
+ ...
+```
+
+Then you create a new Job with name `new` and you explicitly specify the same selector.
+Since the existing Pods have label `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
+they are controlled by Job `new` as well.
+
+You need to specify `manualSelector: true` in the new Job since you are not using
+the selector that the system normally generates for you automatically.
+
+```
+kind: Job
+metadata:
+ name: new
+ ...
+spec:
+ manualSelector: true
+ selector:
+ matchLabels:
+ controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
+ ...
+```
+
+The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
+`manualSelector: true` tells the system that you know what you are doing and allows this
+mismatch.
+
+## Alternatives
+
+### Bare Pods
+
+When the node that a Pod is running on reboots or fails, the pod is terminated
+and will not be restarted. However, a Job will create new Pods to replace terminated ones.
+For this reason, we recommend that you use a Job rather than a bare Pod, even if your application
+requires only a single Pod.
+
+### Replication Controller
+
+Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller).
+A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job
+manages Pods that are expected to terminate (e.g. batch tasks).
+
+As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate
+for pods with `RestartPolicy` equal to `OnFailure` or `Never`.
+(Note: If `RestartPolicy` is not set, the default value is `Always`.)
+
+### Single Job starts Controller Pod
+
+Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort
+of custom controller for those Pods. This allows the most flexibility, but may be somewhat
+complicated to get started with and offers less integration with Kubernetes.
+
+One example of this pattern would be a Job which starts a Pod which runs a script that in turn
+starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)), runs a Spark
+driver, and then cleans up.
+
+An advantage of this approach is that the overall process gets the completion guarantee of a Job
+object, while maintaining complete control over what Pods are created and how work is assigned to them.
+
+## Cron Jobs {#cron-jobs}
+
+You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`.
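+
+As a minimal sketch, a CronJob that runs a short Job every minute might look like the
+following (the name, image, and schedule are illustrative, and the `batch/v1beta1` API
+version may differ depending on your cluster version):
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: hello
+spec:
+  schedule: "*/1 * * * *"   # standard cron syntax: every minute
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+          - name: hello
+            image: busybox
+            args: ["/bin/sh", "-c", "date; echo Hello"]
+          restartPolicy: OnFailure
+```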
+
+{{% /capture %}}
diff --git a/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md b/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md
new file mode 100644
index 0000000000000..c8a666ac1120f
--- /dev/null
+++ b/content/uk/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -0,0 +1,291 @@
+---
+reviewers:
+- bprashanth
+- janetkuo
+title: ReplicationController
+feature:
+ title: Самозцілення
+ anchor: How a ReplicationController Works
+ description: >
+ Перезапускає контейнери, що відмовили; заміняє і перерозподіляє контейнери у випадку непрацездатності вузла; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності.
+
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+{{< note >}}
+A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication.
+{{< /note >}}
+
+A _ReplicationController_ ensures that a specified number of pod replicas are running at any one
+time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
+always up and available.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## How a ReplicationController Works
+
+If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the
+ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a
+ReplicationController are automatically replaced if they fail, are deleted, or are terminated.
+For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade.
+For this reason, you should use a ReplicationController even if your application requires
+only a single pod. A ReplicationController is similar to a process supervisor,
+but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods
+across multiple nodes.
+
+ReplicationController is often abbreviated to "rc" in discussion, and as a shortcut in
+kubectl commands.
+
+A simple case is to create one ReplicationController object to reliably run one instance of
+a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated
+service, such as web servers.
+
+## Running an example ReplicationController
+
+This example ReplicationController config runs three copies of the nginx web server.
+
+{{< codenew file="controllers/replication.yaml" >}}
+
+Run the example ReplicationController by downloading the example file and then running this command:
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
+```
+```
+replicationcontroller/nginx created
+```
+
+Check on the status of the ReplicationController using this command:
+
+```shell
+kubectl describe replicationcontrollers/nginx
+```
+```
+Name: nginx
+Namespace: default
+Selector: app=nginx
+Labels: app=nginx
+Annotations:
+Replicas: 3 current / 3 desired
+Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed
+Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx
+ Port: 80/TCP
+ Environment:
+ Mounts:
+ Volumes:
+Events:
+ FirstSeen LastSeen Count From SubobjectPath Type Reason Message
+ --------- -------- ----- ---- ------------- ---- ------ -------
+ 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-qrm3m
+ 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-3ntk0
+ 20s 20s 1 {replication-controller } Normal SuccessfulCreate Created pod: nginx-4ok8v
+```
+
+Here, three pods are created, but none is running yet, perhaps because the image is being pulled.
+A little later, the same command may show:
+
+```
+Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
+```
+
+To list all the pods that belong to the ReplicationController in a machine-readable form, you can use a command like this:
+
+```shell
+pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
+echo $pods
+```
+```
+nginx-3ntk0 nginx-4ok8v nginx-qrm3m
+```
+
+Here, the selector is the same as the selector for the ReplicationController (seen in the
+`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option
+specifies an expression that just gets the name from each pod in the returned list.
+
+
+## Writing a ReplicationController Spec
+
+As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields.
+For general information about working with config files, see [object management](/docs/concepts/overview/working-with-objects/object-management/).
+
+A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
+
+### Pod Template
+
+The `.spec.template` is the only required field of the `.spec`.
+
+The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`.
+
+In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate
+labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [pod selector](#pod-selector).
+
+Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified.
+
+For local container restarts, ReplicationControllers delegate to an agent on the node,
+for example the [Kubelet](/docs/admin/kubelet/) or Docker.
+
+### Labels on the ReplicationController
+
+The ReplicationController can itself have labels (`.metadata.labels`). Typically, you
+would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
+then it defaults to `.spec.template.metadata.labels`. However, they are allowed to be
+different, and the `.metadata.labels` do not affect the behavior of the ReplicationController.
+
+### Pod Selector
+
+The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors). A ReplicationController
+manages all the pods with labels that match the selector. It does not distinguish
+between pods that it created or deleted and pods that another person or process created or
+deleted. This allows the ReplicationController to be replaced without affecting the running pods.
+
+If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will
+be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to
+`.spec.template.metadata.labels`.
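+
+For example, a spec in which the selector and the template labels agree might look like
+this (a minimal sketch; the `app: nginx` label is illustrative):
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: nginx
+spec:
+  replicas: 3
+  selector:
+    app: nginx        # must equal .spec.template.metadata.labels
+  template:
+    metadata:
+      labels:
+        app: nginx    # if .spec.selector is omitted, it defaults to these labels
+    spec:
+      containers:
+      - name: nginx
+        image: nginx
+        ports:
+        - containerPort: 80
+```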
+
+Also you should not normally create any pods whose labels match this selector, either directly, with
+another ReplicationController, or with another controller such as Job. If you do so, the
+ReplicationController thinks that it created the other pods. Kubernetes does not stop you
+from doing this.
+
+If you do end up with multiple controllers that have overlapping selectors, you
+will have to manage the deletion yourself (see [below](#working-with-replicationcontrollers)).
+
+### Multiple Replicas
+
+You can specify how many pods should run concurrently by setting `.spec.replicas` to the number
+of pods you would like to have running concurrently. The number running at any time may be higher
+or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully
+shut down, and a replacement starts early.
+
+If you do not specify `.spec.replicas`, then it defaults to 1.
+
+## Working with ReplicationControllers
+
+### Deleting a ReplicationController and its Pods
+
+To delete a ReplicationController and all its pods, use [`kubectl
+delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl will scale the ReplicationController to zero and wait
+for it to delete each pod before deleting the ReplicationController itself. If this kubectl
+command is interrupted, it can be restarted.
+
+When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
+0, wait for pod deletions, then delete the ReplicationController).
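+
+Those manual steps can be sketched with kubectl as follows (the resource name and label
+are illustrative, taken from the nginx example above):
+
+```shell
+kubectl scale rc nginx --replicas=0      # scale the ReplicationController down to zero
+kubectl get pods --selector=app=nginx    # poll until the pods are gone
+kubectl delete rc nginx                  # then delete the ReplicationController itself
+```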
+
+### Deleting just a ReplicationController
+
+You can delete a ReplicationController without affecting any of its pods.
+
+Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
+
+When using the REST API or Go client library, simply delete the ReplicationController object.
+
+Once the original is deleted, you can create a new ReplicationController to replace it. As long
+as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
+However, it will not make any effort to make existing pods match a new, different pod template.
+To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).
+
+### Isolating pods from a ReplicationController
+
+Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
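+
+For example, assuming the pods carry the label `app=nginx` as in the example above, a pod
+can be isolated by overwriting that label (the pod name is illustrative):
+
+```shell
+kubectl label pod nginx-qrm3m app=debug --overwrite
+```
+
+The pod no longer matches the selector, so the ReplicationController starts a replacement,
+while the original pod keeps running and can be inspected at leisure.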
+
+## Common usage patterns
+
+### Rescheduling
+
+As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent).
+
+### Scaling
+
+The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
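+
+For example, scaling the nginx ReplicationController from the example above to five
+replicas (the name and count are illustrative):
+
+```shell
+kubectl scale replicationcontroller nginx --replicas=5
+```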
+
+### Rolling updates
+
+The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.
+
+As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
+
+Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
+
+The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.
+
+Rolling update is implemented in the client tool
+[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples.
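+
+For example, rolling the nginx ReplicationController over to a new image might look like
+this (the name and image tag are illustrative):
+
+```shell
+kubectl rolling-update nginx --image=nginx:1.9.1
+```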
+
+### Multiple release tracks
+
+In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.
+
+For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc.
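+
+The two tracks from this example might be expressed with selectors like the following
+(a sketch showing only the relevant fields; names are illustrative):
+
+```yaml
+# stable track: the bulk of the replicas
+metadata:
+  name: frontend-stable
+spec:
+  replicas: 9
+  selector:
+    tier: frontend
+    environment: prod
+    track: stable
+---
+# canary track: a single replica of the new version
+metadata:
+  name: frontend-canary
+spec:
+  replicas: 1
+  selector:
+    tier: frontend
+    environment: prod
+    track: canary
+```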
+
+### Using ReplicationControllers with Services
+
+Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic
+goes to the old version, and some goes to the new version.
+
+A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.
+
+## Writing programs for Replication
+
+Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.
+
+## Responsibilities of the ReplicationController
+
+The ReplicationController simply ensures that the desired number of pods match its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account; we may add more controls over the replacement policy; and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
+
+The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
+
+The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
+
+
+## API Object
+
+Replication controller is a top-level resource in the Kubernetes REST API. More details about the
+API object can be found at:
+[ReplicationController API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicationcontroller-v1-core).
+
+## Alternatives to ReplicationController
+
+### ReplicaSet
+
+[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
+It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
+Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.
+
+
+### Deployment (Recommended)
+
+[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods
+in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
+because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features.
+
+### Bare Pods
+
+Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
+
+### Job
+
+Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for pods that are expected to terminate on their own
+(that is, batch jobs).
+
+### DaemonSet
+
+Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicationController for pods that provide a
+machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
+to a machine lifetime: the pod needs to be running on the machine before other pods start, and is
+safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
+
+## For more information
+
+Read [Run Stateless Application Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).
+
+{{% /capture %}}
diff --git a/content/uk/docs/contribute/localization_uk.md b/content/uk/docs/contribute/localization_uk.md
new file mode 100644
index 0000000000000..67a9c1789b918
--- /dev/null
+++ b/content/uk/docs/contribute/localization_uk.md
@@ -0,0 +1,123 @@
+---
+title: Рекомендації з перекладу на українську мову
+content_template: templates/concept
+anchors:
+ - anchor: "#правила-перекладу"
+ title: Правила перекладу
+ - anchor: "#словник"
+ title: Словник
+---
+
+{{% capture overview %}}
+
+Дорогі друзі! Раді вітати вас у спільноті українських контриб'юторів проекту Kubernetes. Ця сторінка створена з метою полегшити вашу роботу при перекладі документації. Вона містить правила, якими ми керувалися під час перекладу, і базовий словник, який ми почали укладати. Перелічені у ньому терміни ви знайдете в українській версії документації Kubernetes. Будемо дуже вдячні, якщо ви допоможете нам доповнити цей словник і розширити правила перекладу.
+
+Сподіваємось, наші рекомендації стануть вам у пригоді.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Правила перекладу {#правила-перекладу}
+
+* У випадку, якщо у перекладі термін набуває неоднозначності і розуміння тексту ускладнюється, надайте у дужках англійський варіант, наприклад: кінцеві точки (endpoints). Якщо при перекладі термін втрачає своє значення, краще не перекладати його, наприклад: характеристики affinity.
+
+* Співзвучні слова передаємо транслітерацією зі збереженням написання (Service -> Сервіс).
+
+* Реалії Kubernetes пишемо з великої літери: Сервіс, Под, але вузол (node).
+
+* Для слів з великих літер, які не мають трансліт-аналогу, використовуємо англійські слова (Deployment, Volume, Namespace).
+
+* Складені слова вважаємо за власні назви і не перекладаємо (LabelSelector, kube-apiserver).
+
+* Частовживані і усталені за межами K8s слова перекладаємо українською і пишемо з маленької літери (label -> мітка).
+
+* Для перевірки закінчень слів у родовому відмінку однини (-а/-я, -у/-ю) використовуйте [онлайн словник](https://slovnyk.ua/). Якщо слова немає у словнику, визначте його відміну і далі відмінюйте за правилами. Більшість необхідних нам термінів є словами іншомовного походження, які у родовому відмінку однини приймають закінчення -а: Пода, Deployment'а. Докладніше [дивіться тут](https://pidruchniki.com/1948041951499/dokumentoznavstvo/vidminyuvannya_imennikiv).
+
+## Словник {#словник}
+
+English | Українська |
+--- | --- |
+addon | розширення |
+application | застосунок |
+backend | бекенд |
+build | збирати (процес) |
+build | збирання (результат) |
+cache | кеш |
+CLI | інтерфейс командного рядка |
+cloud | хмара; хмарний провайдер |
+containerized | контейнеризований |
+Continuous development | безперервна розробка |
+Continuous integration | безперервна інтеграція |
+Continuous deployment | безперервне розгортання |
+contribute | робити внесок (до проекту), допомагати (проекту) |
+contributor | контриб'ютор, учасник проекту |
+control plane | площина управління |
+controller | контролер |
+CPU | ЦПУ |
+dashboard | дашборд |
+data plane | площина даних |
+default settings | типові налаштування |
+default (by) | за умовчанням |
+Deployment | Deployment |
+deprecated | застарілий |
+desired state | бажаний стан |
+downtime | недоступність, простій |
+ecosystem | сімейство проектів (екосистема) |
+endpoint | кінцева точка |
+expose (a service) | відкрити доступ (до сервісу) |
+fail | відмовити |
+feature | компонент |
+framework | фреймворк |
+frontend | фронтенд |
+image | образ |
+Ingress | Ingress |
+instance | інстанс |
+issue | запит |
+kube-proxy | kube-proxy |
+kubelet | kubelet |
+Kubernetes features | функціональні можливості Kubernetes |
+label | мітка |
+lifecycle | життєвий цикл |
+logging | логування |
+maintenance | обслуговування |
+master | master |
+map | спроектувати, зіставити, встановити відповідність |
+monitor | моніторити |
+monitoring | моніторинг |
+Namespace | Namespace |
+network policy | мережева політика |
+node | вузол |
+orchestrate | оркеструвати |
+output | вивід |
+patch | патч |
+Pod | Под |
+production | прод |
+pull request | pull request |
+release | реліз |
+replica | репліка |
+rollback | відкатування |
+rolling update | послідовне оновлення |
+rollout (new updates) | викатка (оновлень) |
+run | запускати |
+scale | масштабувати |
+schedule | розподіляти (Поди по вузлах) |
+scheduler | scheduler |
+secret | секрет |
+selector | селектор |
+self-healing | самозцілення |
+self-restoring | самовідновлення |
+service | сервіс |
+service discovery | виявлення сервісу |
+source code | вихідний код |
+stateful app | застосунок зі станом |
+stateless app | застосунок без стану |
+task | завдання |
+terminated | зупинений |
+traffic | трафік |
+VM (virtual machine) | ВМ |
+Volume | Volume |
+workload | робоче навантаження |
+YAML | YAML |
+
+{{% /capture %}}
diff --git a/content/uk/docs/home/_index.md b/content/uk/docs/home/_index.md
new file mode 100644
index 0000000000000..d72f91478f315
--- /dev/null
+++ b/content/uk/docs/home/_index.md
@@ -0,0 +1,58 @@
+---
+approvers:
+- chenopis
+title: Документація Kubernetes
+noedit: true
+cid: docsHome
+layout: docsportal_home
+class: gridPage
+linkTitle: "Home"
+main_menu: true
+weight: 10
+hide_feedback: true
+menu:
+ main:
+ title: "Документація"
+ weight: 20
+ post: >
+
+    Дізнайтеся про основи роботи з Kubernetes, використовуючи схеми, навчальну та довідкову документацію. Ви можете навіть зробити свій внесок у документацію!
+overview: >
+ Kubernetes - рушій оркестрації контейнерів з відкритим вихідним кодом для автоматичного розгортання, масштабування і управління контейнеризованими застосунками. Цей проект розробляється під егідою Cloud Native Computing Foundation (CNCF).
+cards:
+- name: concepts
+ title: "Розуміння основ"
+ description: "Дізнайтеся про Kubernetes і його фундаментальні концепції."
+ button: "Дізнатися про концепції"
+ button_path: "/docs/concepts"
+- name: tutorials
+ title: "Спробуйте Kubernetes"
+ description: "Дізнайтеся із навчальних матеріалів, як розгортати застосунки в Kubernetes."
+ button: "Переглянути навчальні матеріали"
+ button_path: "/docs/tutorials"
+- name: setup
+ title: "Налаштування кластера"
+ description: "Розгорніть Kubernetes з урахуванням власних ресурсів і потреб."
+ button: "Налаштувати Kubernetes"
+ button_path: "/docs/setup"
+- name: tasks
+ title: "Дізнайтеся, як користуватись Kubernetes"
+ description: "Ознайомтеся з типовими задачами і способами їх виконання за допомогою короткого алгоритму дій."
+ button: "Переглянути задачі"
+ button_path: "/docs/tasks"
+- name: reference
+ title: Переглянути довідкову інформацію
+ description: Ознайомтеся з термінологією, синтаксисом командного рядка, типами ресурсів API і документацією з налаштування інструментів.
+ button: Переглянути довідкову інформацію
+ button_path: /docs/reference
+- name: contribute
+ title: Зробити внесок у документацію
+ description: Будь-хто може зробити свій внесок, незалежно від того, чи ви нещодавно долучилися до проекту, чи працюєте над ним вже довгий час.
+ button: Зробити внесок у документацію
+ button_path: /docs/contribute
+- name: download
+ title: Завантажити Kubernetes
+ description: Якщо ви встановлюєте Kubernetes чи оновлюєтесь до останньої версії, звіряйтеся з актуальною інформацією по релізу.
+- name: about
+ title: Про документацію
+ description: Цей вебсайт містить документацію по актуальній і чотирьох попередніх версіях Kubernetes.
+---
diff --git a/content/uk/docs/reference/glossary/applications.md b/content/uk/docs/reference/glossary/applications.md
new file mode 100644
index 0000000000000..c42c6ec34339c
--- /dev/null
+++ b/content/uk/docs/reference/glossary/applications.md
@@ -0,0 +1,16 @@
+---
+# title: Applications
+title: Застосунки
+id: applications
+date: 2019-05-12
+full_link:
+# short_description: >
+# The layer where various containerized applications run.
+short_description: >
+ Шар, в якому запущено контейнеризовані застосунки.
+aka:
+tags:
+- fundamental
+---
+
+Шар, в якому запущено контейнеризовані застосунки.
diff --git a/content/uk/docs/reference/glossary/cluster-infrastructure.md b/content/uk/docs/reference/glossary/cluster-infrastructure.md
new file mode 100644
index 0000000000000..557180912abc0
--- /dev/null
+++ b/content/uk/docs/reference/glossary/cluster-infrastructure.md
@@ -0,0 +1,17 @@
+---
+# title: Cluster Infrastructure
+title: Інфраструктура кластера
+id: cluster-infrastructure
+date: 2019-05-12
+full_link:
+# short_description: >
+# The infrastructure layer provides and maintains VMs, networking, security groups and others.
+short_description: >
+ Шар інфраструктури забезпечує і підтримує роботу ВМ, мережі, груп безпеки тощо.
+
+aka:
+tags:
+- operations
+---
+
+Шар інфраструктури забезпечує і підтримує роботу ВМ, мережі, груп безпеки тощо.
diff --git a/content/uk/docs/reference/glossary/cluster-operations.md b/content/uk/docs/reference/glossary/cluster-operations.md
new file mode 100644
index 0000000000000..e274bb4f7f444
--- /dev/null
+++ b/content/uk/docs/reference/glossary/cluster-operations.md
@@ -0,0 +1,17 @@
+---
+# title: Cluster Operations
+title: Операції з кластером
+id: cluster-operations
+date: 2019-05-12
+full_link:
+# short_description: >
+# Activities such as upgrading the clusters, implementing security, storage, ingress, networking, logging and monitoring, and other operations involved in managing a Kubernetes cluster.
+short_description: >
+ Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером.
+
+aka:
+tags:
+- operations
+---
+
+Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером.
diff --git a/content/uk/docs/reference/glossary/cluster.md b/content/uk/docs/reference/glossary/cluster.md
new file mode 100644
index 0000000000000..58fc3bd6fdb0c
--- /dev/null
+++ b/content/uk/docs/reference/glossary/cluster.md
@@ -0,0 +1,22 @@
+---
+# title: Cluster
+title: Кластер
+id: cluster
+date: 2019-06-15
+full_link:
+# short_description: >
+# A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
+short_description: >
+ Група робочих машин (їх називають вузлами), на яких запущені контейнеризовані застосунки. Кожен кластер має щонайменше один вузол.
+
+aka:
+tags:
+- fundamental
+- operation
+---
+
+Група робочих машин (їх називають вузлами), на яких запущені контейнеризовані застосунки. Кожен кластер має щонайменше один вузол.
+
+
+
+На робочих вузлах розміщуються Поди, які є складовими застосунку. Площина управління керує робочими вузлами і Подами кластера. У прод оточеннях площина управління зазвичай розповсюджується на багато комп'ютерів, а кластер складається з багатьох вузлів для забезпечення відмовостійкості і високої доступності.
diff --git a/content/uk/docs/reference/glossary/control-plane.md b/content/uk/docs/reference/glossary/control-plane.md
new file mode 100644
index 0000000000000..da9fd4c08a588
--- /dev/null
+++ b/content/uk/docs/reference/glossary/control-plane.md
@@ -0,0 +1,17 @@
+---
+# title: Control Plane
+title: Площина управління
+id: control-plane
+date: 2019-05-12
+full_link:
+# short_description: >
+# The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.
+short_description: >
+  Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів.
+
+aka:
+tags:
+- fundamental
+---
+
+Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів.
diff --git a/content/uk/docs/reference/glossary/data-plane.md b/content/uk/docs/reference/glossary/data-plane.md
new file mode 100644
index 0000000000000..263a544010700
--- /dev/null
+++ b/content/uk/docs/reference/glossary/data-plane.md
@@ -0,0 +1,17 @@
+---
+# title: Data Plane
+title: Площина даних
+id: data-plane
+date: 2019-05-12
+full_link:
+# short_description: >
+# The layer that provides capacity such as CPU, memory, network, and storage so that the containers can run and connect to a network.
+short_description: >
+ Шар, який надає контейнерам ресурси, такі як ЦПУ, пам'ять, мережа і сховище даних для того, щоб контейнери могли працювати і підключатися до мережі.
+
+aka:
+tags:
+- fundamental
+---
+
+Шар, який надає контейнерам ресурси, такі як ЦПУ, пам'ять, мережа і сховище даних для того, щоб контейнери могли працювати і підключатися до мережі.
diff --git a/content/uk/docs/reference/glossary/deployment.md b/content/uk/docs/reference/glossary/deployment.md
new file mode 100644
index 0000000000000..5e62f7f784649
--- /dev/null
+++ b/content/uk/docs/reference/glossary/deployment.md
@@ -0,0 +1,23 @@
+---
+title: Deployment
+id: deployment
+date: 2018-04-12
+full_link: /docs/concepts/workloads/controllers/deployment/
+# short_description: >
+# An API object that manages a replicated application.
+short_description: >
+ Об'єкт API, що керує реплікованим застосунком.
+
+aka:
+tags:
+- fundamental
+- core-object
+- workload
+---
+
+Об'єкт API, що керує реплікованим застосунком.
+
+
+
+
+Кожна репліка являє собою {{< glossary_tooltip text="Под" term_id="pod" >}}; Поди розподіляються між вузлами кластера.
diff --git a/content/uk/docs/reference/glossary/index.md b/content/uk/docs/reference/glossary/index.md
new file mode 100644
index 0000000000000..3cbc4533bacdb
--- /dev/null
+++ b/content/uk/docs/reference/glossary/index.md
@@ -0,0 +1,17 @@
+---
+approvers:
+- chenopis
+- abiogenesis-now
+# title: Standardized Glossary
+title: Глосарій
+layout: glossary
+noedit: true
+default_active_tag: fundamental
+weight: 5
+card:
+ name: reference
+ weight: 10
+# title: Glossary
+ title: Глосарій
+---
+
diff --git a/content/uk/docs/reference/glossary/kube-apiserver.md b/content/uk/docs/reference/glossary/kube-apiserver.md
new file mode 100644
index 0000000000000..82e3caa0bae63
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kube-apiserver.md
@@ -0,0 +1,29 @@
+---
+# title: API server
+title: API-сервер
+id: kube-apiserver
+date: 2018-04-12
+full_link: /docs/reference/generated/kube-apiserver/
+# short_description: >
+# Control plane component that serves the Kubernetes API.
+short_description: >
+ Компонент площини управління, що надає доступ до API Kubernetes.
+
+aka:
+- kube-apiserver
+tags:
+- architecture
+- fundamental
+---
+
+API-сервер є компонентом {{< glossary_tooltip text="площини управління" term_id="control-plane" >}} Kubernetes, через який можна отримати доступ до API Kubernetes. API-сервер є фронтендом площини управління Kubernetes.
+
+
+
+
+
+
+Основною реалізацією Kubernetes API-сервера є [kube-apiserver](/docs/reference/generated/kube-apiserver/). kube-apiserver підтримує горизонтальне масштабування, тобто масштабується за рахунок збільшення кількості інстансів. kube-apiserver можна запустити на декількох інстансах, збалансувавши між ними трафік.
diff --git a/content/uk/docs/reference/glossary/kube-controller-manager.md b/content/uk/docs/reference/glossary/kube-controller-manager.md
new file mode 100644
index 0000000000000..edd56dcc90ff6
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kube-controller-manager.md
@@ -0,0 +1,22 @@
+---
+title: kube-controller-manager
+id: kube-controller-manager
+date: 2018-04-12
+full_link: /docs/reference/command-line-tools-reference/kube-controller-manager/
+# short_description: >
+# Control Plane component that runs controller processes.
+short_description: >
+ Компонент площини управління, який запускає процеси контролера.
+
+aka:
+tags:
+- architecture
+- fundamental
+---
+
+Компонент площини управління, який запускає процеси {{< glossary_tooltip text="контролера" term_id="controller" >}}.
+
+
+
+
+За логікою, кожен {{< glossary_tooltip text="контролер" term_id="controller" >}} є окремим процесом. Однак для спрощення їх збирають в один бінарний файл і запускають як єдиний процес.
diff --git a/content/uk/docs/reference/glossary/kube-proxy.md b/content/uk/docs/reference/glossary/kube-proxy.md
new file mode 100644
index 0000000000000..5086226f8eb22
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kube-proxy.md
@@ -0,0 +1,33 @@
+---
+title: kube-proxy
+id: kube-proxy
+date: 2018-04-12
+full_link: /docs/reference/command-line-tools-reference/kube-proxy/
+# short_description: >
+# `kube-proxy` is a network proxy that runs on each node in the cluster.
+short_description: >
+ `kube-proxy` - це мережеве проксі, що запущене на кожному вузлі кластера.
+
+aka:
+tags:
+- fundamental
+- networking
+---
+
+[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) є мережевим проксі, що запущене на кожному вузлі кластера і реалізує частину концепції Kubernetes {{< glossary_tooltip text="Сервісу" term_id="service" >}}.
+
+
+
+
+kube-proxy відповідає за мережеві правила на вузлах. Ці правила обумовлюють підключення по мережі до ваших Подів всередині чи поза межами кластера.
+
+
+kube-proxy використовує шар фільтрації пакетів операційної системи, за наявності такого. В іншому випадку kube-proxy скеровує трафік самостійно.
diff --git a/content/uk/docs/reference/glossary/kube-scheduler.md b/content/uk/docs/reference/glossary/kube-scheduler.md
new file mode 100644
index 0000000000000..87f460222c62f
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kube-scheduler.md
@@ -0,0 +1,22 @@
+---
+title: kube-scheduler
+id: kube-scheduler
+date: 2018-04-12
+full_link: /docs/reference/generated/kube-scheduler/
+# short_description: >
+# Control Plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.
+short_description: >
+ Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть.
+
+aka:
+tags:
+- architecture
+---
+
+Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть.
+
+
+
+
+При виборі вузла враховуються наступні фактори: індивідуальна і колективна потреба у ресурсах, обмеження за апаратним/програмним забезпеченням і політиками, характеристики affinity і anti-affinity, локальність даних, сумісність робочих навантажень і граничні терміни виконання.
diff --git a/content/uk/docs/reference/glossary/kubelet.md b/content/uk/docs/reference/glossary/kubelet.md
new file mode 100644
index 0000000000000..c1178ddf45e99
--- /dev/null
+++ b/content/uk/docs/reference/glossary/kubelet.md
@@ -0,0 +1,23 @@
+---
+title: Kubelet
+id: kubelet
+date: 2018-04-12
+full_link: /docs/reference/generated/kubelet
+# short_description: >
+# An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
+short_description: >
+ Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах.
+
+aka:
+tags:
+- fundamental
+- core-object
+---
+
+Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах.
+
+
+
+
+kubelet використовує специфікації PodSpecs, які надаються за допомогою різних механізмів, і забезпечує працездатність і справність усіх контейнерів, що описані у PodSpecs. kubelet керує лише тими контейнерами, що були створені Kubernetes.
diff --git a/content/uk/docs/reference/glossary/pod.md b/content/uk/docs/reference/glossary/pod.md
new file mode 100644
index 0000000000000..b205c0bd1da73
--- /dev/null
+++ b/content/uk/docs/reference/glossary/pod.md
@@ -0,0 +1,23 @@
+---
+# title: Pod
+title: Под
+id: pod
+date: 2018-04-12
+full_link: /docs/concepts/workloads/pods/pod-overview/
+# short_description: >
+# The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster.
+short_description: >
+ Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу контейнерів, що запущені у вашому кластері.
+
+aka:
+tags:
+- core-object
+- fundamental
+---
+
+ Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу {{< glossary_tooltip text="контейнерів" term_id="container" >}}, що запущені у вашому кластері.
+
+
+
+
+Як правило, в одному Поді запускається один контейнер. У Поді також можуть бути запущені допоміжні контейнери, що забезпечують додаткову функціональність, наприклад, логування. Управління Подами зазвичай здійснює {{< glossary_tooltip term_id="deployment" >}}.
diff --git a/content/uk/docs/reference/glossary/selector.md b/content/uk/docs/reference/glossary/selector.md
new file mode 100644
index 0000000000000..77eb861f4edb2
--- /dev/null
+++ b/content/uk/docs/reference/glossary/selector.md
@@ -0,0 +1,22 @@
+---
+# title: Selector
+title: Селектор
+id: selector
+date: 2018-04-12
+full_link: /docs/concepts/overview/working-with-objects/labels/
+# short_description: >
+# Allows users to filter a list of resources based on labels.
+short_description: >
+ Дозволяє користувачам фільтрувати ресурси за мітками.
+
+aka:
+tags:
+- fundamental
+---
+
+Дозволяє користувачам фільтрувати ресурси за мітками.
+
+
+
+
+Селектори застосовуються при створенні запитів для фільтрації ресурсів за {{< glossary_tooltip text="мітками" term_id="label" >}}.
diff --git a/content/uk/docs/reference/glossary/service.md b/content/uk/docs/reference/glossary/service.md
new file mode 100755
index 0000000000000..91407b199a051
--- /dev/null
+++ b/content/uk/docs/reference/glossary/service.md
@@ -0,0 +1,24 @@
+---
+title: Сервіс
+id: service
+date: 2018-04-12
+full_link: /docs/concepts/services-networking/service/
+# short_description: >
+# A way to expose an application running on a set of Pods as a network service.
+short_description: >
+ Спосіб відкрити доступ до застосунку, що запущений на декількох Подах у вигляді мережевої служби.
+
+aka:
+tags:
+- fundamental
+- core-object
+---
+
+Це абстрактний спосіб відкрити доступ до застосунку, що працює як один (або декілька) {{< glossary_tooltip text="Подів" term_id="pod" >}} у вигляді мережевої служби.
+
+
+
+
+Переважно група Подів визначається як Сервіс за допомогою {{< glossary_tooltip text="селектора" term_id="selector" >}}. Додавання або вилучення Подів змінить групу Подів, визначених селектором. Сервіс забезпечує надходження мережевого трафіку до актуальної групи Подів для підтримки робочого навантаження.
diff --git a/content/uk/docs/setup/_index.md b/content/uk/docs/setup/_index.md
new file mode 100644
index 0000000000000..f7874f9fc422a
--- /dev/null
+++ b/content/uk/docs/setup/_index.md
@@ -0,0 +1,136 @@
+---
+reviewers:
+- brendandburns
+- erictune
+- mikedanese
+no_issue: true
+title: Початок роботи
+main_menu: true
+weight: 20
+content_template: templates/concept
+card:
+ name: setup
+ weight: 20
+ anchors:
+ - anchor: "#навчальне-оточення"
+ title: Навчальне оточення
+ - anchor: "#прод-оточення"
+ title: Прод оточення
+---
+
+{{% capture overview %}}
+
+
+У цьому розділі розглянуто різні варіанти налаштування і запуску Kubernetes.
+
+
+Різні рішення Kubernetes відповідають різним вимогам: легкість в експлуатації, безпека, система контролю, наявні ресурси та досвід, необхідний для управління кластером.
+
+
+Ви можете розгорнути Kubernetes кластер на робочому комп'ютері, у хмарі чи в локальному дата-центрі, або обрати керований Kubernetes кластер. Також можна створити індивідуальні рішення на базі різних провайдерів хмарних сервісів або на звичайних серверах.
+
+
+Простіше кажучи, ви можете створити Kubernetes кластер у навчальному і в прод оточеннях.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+
+
+## Навчальне оточення {#навчальне-оточення}
+
+
+Для вивчення Kubernetes використовуйте рішення на базі Docker: інструменти, підтримувані спільнотою Kubernetes, або інші інструменти з сімейства проектів для налаштування Kubernetes кластера на локальному комп'ютері.
+
+{{< table caption="Таблиця інструментів для локального розгортання Kubernetes, які підтримуються спільнотою або входять до сімейства проектів Kubernetes." >}}
+
+|Спільнота |Сімейство проектів |
+| ------------ | -------- |
+| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) |
+| [kind (Kubernetes IN Docker)](https://github.com/kubernetes-sigs/kind) | [Docker Desktop](https://www.docker.com/products/docker-desktop)|
+| | [Minishift](https://docs.okd.io/latest/minishift/)|
+| | [MicroK8s](https://microk8s.io/)|
+| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) |
+| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)|
+| | [k3s](https://k3s.io)|
+
+
+## Прод оточення {#прод-оточення}
+
+
+Обираючи рішення для проду, визначіться, якими з функціональних складових (або абстракцій) Kubernetes кластера ви хочете керувати самі, а управління якими - доручити провайдеру.
+
+
+У Kubernetes кластері можливі наступні абстракції: {{< glossary_tooltip text="застосунки" term_id="applications" >}}, {{< glossary_tooltip text="площина даних" term_id="data-plane" >}}, {{< glossary_tooltip text="площина управління" term_id="control-plane" >}}, {{< glossary_tooltip text="інфраструктура кластера" term_id="cluster-infrastructure" >}} та {{< glossary_tooltip text="операції з кластером" term_id="cluster-operations" >}}.
+
+
+На діаграмі нижче показані можливі абстракції Kubernetes кластера із зазначенням, які з них потребують самостійного управління, а які можуть бути керовані провайдером.
+
+![Рішення для прод оточення](/images/docs/KubernetesSolutions.svg)
+
+{{< table caption="Таблиця рішень для прод оточення містить перелік провайдерів і їх технологій." >}}
+
+
+|Провайдери | Керований сервіс | Хмара "під ключ" | Локальний дата-центр | Під замовлення (хмара) | Під замовлення (локальні ВМ)| Під замовлення (сервери без ОС) |
+| --------- | ------ | ------ | ------ | ------ | ------ | ----- |
+| [Agile Stacks](https://www.agilestacks.com/products/kubernetes)| | ✔ | ✔ | | |
+| [Alibaba Cloud](https://www.alibabacloud.com/product/kubernetes)| | ✔ | | | |
+| [Amazon](https://aws.amazon.com) | [Amazon EKS](https://aws.amazon.com/eks/) |[Amazon EC2](https://aws.amazon.com/ec2/) | | | |
+| [AppsCode](https://appscode.com/products/pharmer/) | ✔ | | | | |
+| [APPUiO](https://appuio.ch/) | ✔ | ✔ | ✔ | | | |
+| [Banzai Cloud Pipeline Kubernetes Engine (PKE)](https://banzaicloud.com/products/pke/) | | ✔ | | ✔ | ✔ | ✔ |
+| [CenturyLink Cloud](https://www.ctl.io/) | | ✔ | | | |
+| [Cisco Container Platform](https://cisco.com/go/containers) | | | ✔ | | |
+| [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) | | | | ✔ |✔ |
+| [CloudStack](https://cloudstack.apache.org/) | | | | | ✔|
+| [Canonical](https://ubuntu.com/kubernetes) | ✔ | ✔ | ✔ | ✔ |✔ | ✔
+| [Containership](https://containership.io) | ✔ |✔ | | | |
+| [D2iQ](https://d2iq.com/) | | [Kommander](https://d2iq.com/solutions/ksphere) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) |
+| [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | ✔
+| [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | ✔ | | | | |
+| [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |✔ | ✔ | | | ✔
+| [Gardener](https://gardener.cloud/) | ✔ | ✔ | ✔ | ✔ | ✔ | [Custom Extensions](https://github.com/gardener/gardener/blob/master/docs/extensions/overview.md) |
+| [Giant Swarm](https://www.giantswarm.io/) | ✔ | ✔ | ✔ | |
+| [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/)|[GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | | | | | |
+| [Hidora](https://hidora.com/) | ✔ | ✔| ✔ | | | | | | | |
+| [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | |
+| [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | |
+| [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | |
+| [KubeOne](https://kubeone.io/) | | ✔ | ✔ | ✔ | ✔ | ✔ |
+| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | |
+| [KubeSail](https://kubesail.com/) | ✔ | | | | |
+| [Kubespray](https://kubespray.io/#/) | | | |✔ | ✔ | ✔ |
+| [Kublr](https://kublr.com/) |✔ | ✔ |✔ |✔ |✔ |✔ |
+| [Microsoft Azure](https://azure.microsoft.com) | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) | | | | |
+| [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) | | | ✔ | | |
+| [NetApp Kubernetes Service (NKS)](https://cloud.netapp.com/kubernetes-service) | ✔ | ✔ | ✔ | | |
+| [Nirmata](https://www.nirmata.com/) | | ✔ | ✔ | | |
+| [Nutanix](https://www.nutanix.com/en) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | | | [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization) |
+| [OpenNebula](https://www.opennebula.org) |[OpenNebula Kubernetes](https://marketplace.opennebula.systems/docs/service/kubernetes.html) | | | | |
+| [OpenShift](https://www.openshift.com) |[OpenShift Dedicated](https://www.openshift.com/products/dedicated/) and [OpenShift Online](https://www.openshift.com/products/online/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) |[OpenShift Container Platform](https://www.openshift.com/products/container-platform/)
+| [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | ✔ | ✔ | | | |
+| [oVirt](https://www.ovirt.org/) | | | | | ✔ |
+| [Pivotal](https://pivotal.io/) | | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | | |
+| [Platform9](https://platform9.com/) | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | ✔ | ✔ | ✔
+| [Rancher](https://rancher.com/) | | [Rancher 2.x](https://rancher.com/docs/rancher/v2.x/en/) | | [Rancher Kubernetes Engine (RKE)](https://rancher.com/docs/rke/latest/en/) | | [k3s](https://k3s.io/)
+| [Supergiant](https://supergiant.io/) | |✔ | | | |
+| [SUSE](https://www.suse.com/) | | ✔ | | | |
+| [SysEleven](https://www.syseleven.io/) | ✔ | | | | |
+| [Tencent Cloud](https://intl.cloud.tencent.com/) | [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/product/tke) | ✔ | ✔ | | | ✔ |
+| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | |
+| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks)
+| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | |
+
+{{% /capture %}}
diff --git a/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md
new file mode 100644
index 0000000000000..90dbfdb914ed0
--- /dev/null
+++ b/content/uk/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -0,0 +1,293 @@
+---
+reviewers:
+- fgrzadkowski
+- jszczepkowski
+- directxman12
+title: Horizontal Pod Autoscaler
+feature:
+ title: Горизонтальне масштабування
+ description: >
+ Масштабуйте ваш застосунок за допомогою простої команди, інтерфейсу користувача чи автоматично, виходячи із навантаження на ЦПУ.
+
+content_template: templates/concept
+weight: 90
+---
+
+{{% capture overview %}}
+
+The Horizontal Pod Autoscaler automatically scales the number of pods
+in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with
+[custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md)
+support, on some other application-provided metrics). Note that Horizontal
+Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
+
+The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
+The resource determines the behavior of the controller.
+The controller periodically adjusts the number of replicas in a replication controller or deployment
+to match the observed average CPU utilization to the target specified by user.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## How does the Horizontal Pod Autoscaler work?
+
+![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.svg)
+
+The Horizontal Pod Autoscaler is implemented as a control loop, with a period controlled
+by the controller manager's `--horizontal-pod-autoscaler-sync-period` flag (with a default
+value of 15 seconds).
+
+During each period, the controller manager queries the resource utilization against the
+metrics specified in each HorizontalPodAutoscaler definition. The controller manager
+obtains the metrics from either the resource metrics API (for per-pod resource metrics),
+or the custom metrics API (for all other metrics).
+
+* For per-pod resource metrics (like CPU), the controller fetches the metrics
+ from the resource metrics API for each pod targeted by the HorizontalPodAutoscaler.
+ Then, if a target utilization value is set, the controller calculates the utilization
+ value as a percentage of the equivalent resource request on the containers in
+ each pod. If a target raw value is set, the raw metric values are used directly.
+ The controller then takes the mean of the utilization or the raw value (depending on the type
+ of target specified) across all targeted pods, and produces a ratio used to scale
+ the number of desired replicas.
+
+ Please note that if some of the pod's containers do not have the relevant resource request set,
+ CPU utilization for the pod will not be defined and the autoscaler will
+ not take any action for that metric. See the [algorithm
+ details](#algorithm-details) section below for more information about
+ how the autoscaling algorithm works.
+
+* For per-pod custom metrics, the controller functions similarly to per-pod resource metrics,
+ except that it works with raw values, not utilization values.
+
+* For object metrics and external metrics, a single metric is fetched, which describes
+ the object in question. This metric is compared to the target
+ value, to produce a ratio as above. In the `autoscaling/v2beta2` API
+ version, this value can optionally be divided by the number of pods before the
+ comparison is made.
+
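The per-pod resource metrics path above can be sketched in a few lines. This is an illustrative model only (function and variable names are made up, not the controller's code): utilization is computed per pod as a percentage of its resource request, then averaged across the targeted pods.

```python
def average_utilization(usages_m, requests_m):
    """Mean utilization (%) across pods, given per-pod CPU usage and
    CPU request values in millicores. Illustrative sketch only."""
    percents = [100.0 * use / req for use, req in zip(usages_m, requests_m)]
    return sum(percents) / len(percents)

# Three pods using 100m, 300m and 200m against a 200m request each:
print(average_utilization([100, 300, 200], [200, 200, 200]))  # 100.0
```

If a container in some pod has no request set, this percentage is undefined for that pod, which is why the controller then skips the metric, as noted above.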
+The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`,
+`custom.metrics.k8s.io`, and `external.metrics.k8s.io`). The `metrics.k8s.io` API is usually provided by
+metrics-server, which needs to be launched separately. See
+[metrics-server](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server)
+for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster.
+
+{{< note >}}
+{{< feature-state state="deprecated" for_k8s_version="1.11" >}}
+Fetching metrics from Heapster is deprecated as of Kubernetes 1.11.
+{{< /note >}}
+
+See [Support for metrics APIs](#support-for-metrics-apis) for more details.
+
+The autoscaler accesses corresponding scalable controllers (such as replication controllers, deployments, and replica sets)
+by using the scale sub-resource. Scale is an interface that allows you to dynamically set the number of replicas and examine
+each of their current states. More details on scale sub-resource can be found
+[here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
+
+### Algorithm Details
+
+From the most basic perspective, the Horizontal Pod Autoscaler controller
+operates on the ratio between desired metric value and current metric
+value:
+
+```
+desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
+```
+
+For example, if the current metric value is `200m`, and the desired value
+is `100m`, the number of replicas will be doubled, since `200.0 / 100.0 ==
+2.0`. If the current value is instead `50m`, we'll halve the number of
+replicas, since `50.0 / 100.0 == 0.5`. We'll skip scaling if the ratio is
+sufficiently close to 1.0 (within a globally-configurable tolerance, from
+the `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1).
+
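The formula and the tolerance check can be modeled directly (a simplified sketch, not the controller's implementation):

```python
import math

TOLERANCE = 0.1  # mirrors the --horizontal-pod-autoscaler-tolerance default

def desired_replicas(current_replicas, current_metric, desired_metric):
    """desiredReplicas = ceil[currentReplicas * (current / desired)],
    skipping the scale when the ratio is sufficiently close to 1.0."""
    ratio = current_metric / desired_metric
    if abs(ratio - 1.0) <= TOLERANCE:
        return current_replicas  # close enough to target: no scaling
    return math.ceil(current_replicas * ratio)

print(desired_replicas(4, 200, 100))  # 8 -- usage at twice the target
print(desired_replicas(4, 50, 100))   # 2 -- usage at half the target
print(desired_replicas(4, 105, 100))  # 4 -- within the 10% tolerance
```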
+When a `targetAverageValue` or `targetAverageUtilization` is specified,
+the `currentMetricValue` is computed by taking the average of the given
+metric across all Pods in the HorizontalPodAutoscaler's scale target.
+Before checking the tolerance and deciding on the final values, we take
+pod readiness and missing metrics into consideration, however.
+
+All Pods with a deletion timestamp set (i.e. Pods in the process of being
+shut down) and all failed Pods are discarded.
+
+If a particular Pod is missing metrics, it is set aside for later; Pods
+with missing metrics will be used to adjust the final scaling amount.
+
+When scaling on CPU, if any pod has yet to become ready (i.e. it's still
+initializing) *or* the most recent metric point for the pod was before it
+became ready, that pod is set aside as well.
+
+Due to technical constraints, the HorizontalPodAutoscaler controller
+cannot exactly determine the first time a pod becomes ready when
+determining whether to set aside certain CPU metrics. Instead, it
+considers a Pod "not yet ready" if it's unready and transitioned to
+unready within a short, configurable window of time since it started.
+This value is configured with the `--horizontal-pod-autoscaler-initial-readiness-delay` flag, and its default is 30
+seconds. Once a pod has become ready, it considers any transition to
+ready to be the first if it occurred within a longer, configurable time
+since it started. This value is configured with the `--horizontal-pod-autoscaler-cpu-initialization-period` flag, and its
+default is 5 minutes.
+
+The `currentMetricValue / desiredMetricValue` base scale ratio is then
+calculated using the remaining pods not set aside or discarded from above.
+
+If there were any missing metrics, we recompute the average more
+conservatively, assuming those pods were consuming 100% of the desired
+value in case of a scale down, and 0% in case of a scale up. This dampens
+the magnitude of any potential scale.
+
+Furthermore, if any not-yet-ready pods were present, and we would have
+scaled up without factoring in missing metrics or not-yet-ready pods, we
+conservatively assume the not-yet-ready pods are consuming 0% of the
+desired metric, further dampening the magnitude of a scale up.
+
+After factoring in the not-yet-ready pods and missing metrics, we
+recalculate the usage ratio. If the new ratio reverses the scale
+direction, or is within the tolerance, we skip scaling. Otherwise, we use
+the new ratio to scale.
+
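The conservative recomputation described above can be sketched as follows. The names and structure here are hypothetical; the real controller's logic is more involved, but the dampening effect is the same: missing-metric pods are assumed at 100% of the desired value on a scale down and 0% on a scale up, and not-yet-ready pods at 0% on a scale up.

```python
def conservative_usage_ratio(ready_values, desired, missing, not_ready):
    """Recompute the average usage ratio with pessimistic assumptions
    for pods whose metrics are missing or that are not yet ready."""
    base_ratio = (sum(ready_values) / len(ready_values)) / desired
    if base_ratio > 1.0:
        # Would scale up: count missing and not-yet-ready pods as 0%.
        total = sum(ready_values)
        count = len(ready_values) + missing + not_ready
    else:
        # Would scale down: count missing pods as 100% of desired.
        total = sum(ready_values) + missing * desired
        count = len(ready_values) + missing
    return (total / count) / desired

# Two ready pods at 200m against a 100m target would double the scale,
# but two pods with missing metrics pull the ratio back to 1.0 (skip):
print(conservative_usage_ratio([200, 200], 100, missing=2, not_ready=0))  # 1.0
```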
+Note that the *original* value for the average utilization is reported
+back via the HorizontalPodAutoscaler status, without factoring in the
+not-yet-ready pods or missing metrics, even when the new usage ratio is
+used.
+
+If multiple metrics are specified in a HorizontalPodAutoscaler, this
+calculation is done for each metric, and then the largest of the desired
+replica counts is chosen. If any of these metrics cannot be converted
+into a desired replica count (e.g. due to an error fetching the metrics
+from the metrics APIs) and a scale down is suggested by the metrics which
+can be fetched, scaling is skipped. This means that the HPA is still capable
+of scaling up if one or more metrics give a `desiredReplicas` greater than
+the current value.
+
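The largest-wins rule across multiple metrics, including the skip-on-error case, reduces to a simple comparison (illustrative sketch, not the controller code):

```python
def combine_proposals(current, proposals, any_fetch_errors):
    """Largest per-metric replica proposal wins; if some metrics could
    not be fetched and the fetchable ones would only scale down, keep
    the current replica count (scaling is skipped)."""
    best = max(proposals)
    if any_fetch_errors and best <= current:
        return current
    return best

print(combine_proposals(5, [6, 4, 8], any_fetch_errors=False))  # 8
print(combine_proposals(5, [4, 3], any_fetch_errors=True))      # 5 -- skip
```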
+Finally, just before HPA scales the target, the scale recommendation is recorded. The
+controller considers all recommendations within a configurable window choosing the
+highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes.
+This means that scaledowns will occur gradually, smoothing out the impact of rapidly
+fluctuating metric values.
+
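A rough model of the downscale stabilization window, assuming a queue of timestamped recommendations from which the highest in-window value is always chosen (names are illustrative):

```python
from collections import deque

class DownscaleStabilizer:
    """Remembers recent scale recommendations and returns the highest
    one seen within the stabilization window."""

    def __init__(self, window_seconds=300):  # mirrors the 5m default
        self.window = window_seconds
        self.recs = deque()  # (timestamp, replicas) pairs

    def recommend(self, now, replicas):
        self.recs.append((now, replicas))
        # Drop recommendations that have aged out of the window.
        while self.recs[0][0] < now - self.window:
            self.recs.popleft()
        return max(r for _, r in self.recs)

s = DownscaleStabilizer()
print(s.recommend(0, 10))    # 10
print(s.recommend(60, 4))    # 10 -- the earlier high recommendation wins
print(s.recommend(400, 4))   # 4  -- the 10 has aged out of the window
```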
+## API Object
+
+The Horizontal Pod Autoscaler is an API resource in the Kubernetes `autoscaling` API group.
+The current stable version, which only includes support for CPU autoscaling,
+can be found in the `autoscaling/v1` API version.
+
+The beta version, which includes support for scaling on memory and custom metrics,
+can be found in `autoscaling/v2beta2`. The new fields introduced in `autoscaling/v2beta2`
+are preserved as annotations when working with `autoscaling/v1`.
+
+More details about the API object can be found at
+[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
+
+## Support for Horizontal Pod Autoscaler in kubectl
+
+Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by `kubectl`.
+We can create a new autoscaler using `kubectl create` command.
+We can list autoscalers by `kubectl get hpa` and get detailed description by `kubectl describe hpa`.
+Finally, we can delete an autoscaler using `kubectl delete hpa`.
+
+In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
+For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
+will create an autoscaler for replica set *foo*, with target CPU utilization set to `80%`
+and the number of replicas between 2 and 5.
+The detailed documentation of `kubectl autoscale` can be found [here](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
+
+
+## Autoscaling during rolling update
+
+Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly,
+or by using the deployment object, which manages the underlying replica sets for you.
+Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
+it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.
+
+Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
+i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
+The reason this doesn't work is that when rolling update creates a new replication controller,
+the Horizontal Pod Autoscaler will not be bound to the new replication controller.
+
+## Support for cooldown/delay
+
+When managing the scale of a group of replicas using the Horizontal Pod Autoscaler,
+it is possible that the number of replicas keeps fluctuating frequently due to the
+dynamic nature of the metrics evaluated. This is sometimes referred to as *thrashing*.
+
+Starting from v1.6, a cluster operator can mitigate this problem by tuning
+the global HPA settings exposed as flags for the `kube-controller-manager` component:
+
+- `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a
+  duration that specifies how long the autoscaler has to wait before another
+  downscale operation can be performed after the current one has completed.
+  The default value is 5 minutes (`5m0s`).
+
+Starting from v1.12, a new algorithmic update removes the need for the
+upscale delay.
+
+{{< note >}}
+When tuning these parameter values, a cluster operator should be aware of the possible
+consequences. If the delay (cooldown) value is set too long, there could be complaints
+that the Horizontal Pod Autoscaler is not responsive to workload changes. However, if
+the delay value is set too short, the scale of the replica set may keep thrashing as
+before.
+{{< /note >}}
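+
+For example, a cluster operator could raise the stabilization window in the
+`kube-controller-manager` manifest (the 10-minute value is an illustrative assumption):
+
+```yaml
+# Fragment of a kube-controller-manager static Pod manifest
+spec:
+  containers:
+  - name: kube-controller-manager
+    command:
+    - kube-controller-manager
+    - --horizontal-pod-autoscaler-downscale-stabilization=10m0s
+```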
+
+## Support for multiple metrics
+
+Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta2` API
+version to specify multiple metrics for the Horizontal Pod Autoscaler to scale on. Then, the Horizontal Pod
+Autoscaler controller will evaluate each metric, and propose a new scale based on that metric. The largest of the
+proposed scales will be used as the new scale.
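+
+As a hedged sketch, a `metrics` list in `autoscaling/v2beta2` that combines CPU
+utilization with a per-pod custom metric might look like this (the metric name
+`packets-per-second` and its target are assumptions for illustration):
+
+```yaml
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 50
+  - type: Pods
+    pods:
+      metric:
+        name: packets-per-second
+      target:
+        type: AverageValue
+        averageValue: 1k
+```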
+
+## Support for custom metrics
+
+{{< note >}}
+Kubernetes 1.2 added alpha support for scaling based on application-specific metrics using special annotations.
+Support for these annotations was removed in Kubernetes 1.6 in favor of the new autoscaling API. While the old method for collecting
+custom metrics is still available, these metrics will not be available for use by the Horizontal Pod Autoscaler, and the former
+annotations for specifying which custom metrics to scale on are no longer honored by the Horizontal Pod Autoscaler controller.
+{{< /note >}}
+
+Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal Pod Autoscaler.
+You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API.
+Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics.
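+
+For instance, assuming an adapter exposes a `requests-per-second` metric on an Ingress
+named `main-route` (both names are illustrative), an `Object` metric entry could read:
+
+```yaml
+  - type: Object
+    object:
+      metric:
+        name: requests-per-second
+      describedObject:
+        apiVersion: networking.k8s.io/v1beta1
+        kind: Ingress
+        name: main-route
+      target:
+        type: Value
+        value: 2k
+```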
+
+See [Support for metrics APIs](#support-for-metrics-apis) for the requirements.
+
+## Support for metrics APIs
+
+By default, the HorizontalPodAutoscaler controller retrieves metrics from a series of APIs. In order for it to access these
+APIs, cluster administrators must ensure that:
+
+* The [API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) is enabled.
+
+* The corresponding APIs are registered:
+
+ * For resource metrics, this is the `metrics.k8s.io` API, generally provided by [metrics-server](https://github.com/kubernetes-incubator/metrics-server).
+ It can be launched as a cluster addon.
+
+  * For custom metrics, this is the `custom.metrics.k8s.io` API. It is provided by "adapter" API servers offered by metrics solution vendors.
+    Check with your metrics pipeline, or the [list of known solutions](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api).
+ If you would like to write your own, check out the [boilerplate](https://github.com/kubernetes-incubator/custom-metrics-apiserver) to get started.
+
+  * For external metrics, this is the `external.metrics.k8s.io` API. It may be provided by the custom metrics adapters mentioned above.
+
+* The `--horizontal-pod-autoscaler-use-rest-clients` flag is `true` or unset. Setting this to `false` switches to Heapster-based autoscaling, which is deprecated.
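+
+For external metrics, assuming a metrics adapter exposes a queue-length metric named
+`queue_messages_ready` (an illustrative name), an `External` entry in the `metrics`
+list could be sketched as:
+
+```yaml
+  - type: External
+    external:
+      metric:
+        name: queue_messages_ready
+        selector:
+          matchLabels:
+            queue: worker_tasks
+      target:
+        type: AverageValue
+        averageValue: "30"
+```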
+
+For more information on these different metrics paths and how they differ, please see the relevant design proposals for
+[the HPA V2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/hpa-v2.md),
+[custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md)
+and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md).
+
+For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)
+and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
+* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
+* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).
+
+{{% /capture %}}
diff --git a/content/uk/docs/templates/feature-state-alpha.txt b/content/uk/docs/templates/feature-state-alpha.txt
new file mode 100644
index 0000000000000..e061aa52be02b
--- /dev/null
+++ b/content/uk/docs/templates/feature-state-alpha.txt
@@ -0,0 +1,7 @@
+Наразі цей компонент у статусі *alpha*, що означає:
+
+* Назва версії містить слово alpha (напр. v1alpha1).
+* Увімкнення цього компонента може призвести до помилок у системі. За умовчанням цей компонент вимкнутий.
+* Підтримка цього компонентa може бути припинена у будь-який час без попередження.
+* API може стати несумісним у наступних релізах без попередження.
+* Рекомендований до використання лише у тестових кластерах через підвищений ризик виникнення помилок і відсутність довгострокової підтримки.
diff --git a/content/uk/docs/templates/feature-state-beta.txt b/content/uk/docs/templates/feature-state-beta.txt
new file mode 100644
index 0000000000000..3790be73f4718
--- /dev/null
+++ b/content/uk/docs/templates/feature-state-beta.txt
@@ -0,0 +1,22 @@
+
+Наразі цей компонент у статусі *beta*, що означає:
+
+
+* Назва версії містить слово beta (наприклад, v2beta3).
+
+* Код добре відтестований. Увімкнення цього компонента не загрожує роботі системи. Компонент увімкнутий за умовчанням.
+
+* Загальна підтримка цього компонента триватиме, однак деталі можуть змінитися.
+
+* У наступній beta- чи стабільній версії схема та/або семантика об'єктів може змінитися і стати несумісною. У такому випадку ми надамо інструкції для міграції на наступну версію. Це може призвести до видалення, редагування і перестворення об'єктів API. У процесі редагування вам, можливо, знадобиться продумати зміни в об'єкті. Це може призвести до недоступності застосунків, для роботи яких цей компонент є істотно важливим.
+
+* Використання компонента рекомендоване лише у некритичних для безперебійної діяльності випадках через ризик несумісних змін у подальших релізах. Це обмеження може бути пом'якшене у випадку декількох кластерів, які можна оновлювати окремо.
+
+* **Будь ласка, спробуйте beta-версії наших компонентів і поділіться з нами своєю думкою! Після того, як компонент вийде зі статусу beta, нам буде важче змінити його.**
diff --git a/content/uk/docs/templates/feature-state-deprecated.txt b/content/uk/docs/templates/feature-state-deprecated.txt
new file mode 100644
index 0000000000000..7c35b3fc2f04b
--- /dev/null
+++ b/content/uk/docs/templates/feature-state-deprecated.txt
@@ -0,0 +1,4 @@
+
+
+Цей компонент є *застарілим*. Дізнатися більше про цей статус ви можете зі статті [Політика Kubernetes щодо застарілих компонентів](/docs/reference/deprecation-policy/).
diff --git a/content/uk/docs/templates/feature-state-stable.txt b/content/uk/docs/templates/feature-state-stable.txt
new file mode 100644
index 0000000000000..a794f5ceb6134
--- /dev/null
+++ b/content/uk/docs/templates/feature-state-stable.txt
@@ -0,0 +1,11 @@
+
+
+Цей компонент є *стабільним*, що означає:
+
+
+* Назва версії має вигляд vX, де X є цілим числом.
+
+* Стабільні версії компонентів з'являтимуться у багатьох наступних версіях програмного забезпечення.
\ No newline at end of file
diff --git a/content/uk/docs/templates/index.md b/content/uk/docs/templates/index.md
new file mode 100644
index 0000000000000..0e0b890542ef8
--- /dev/null
+++ b/content/uk/docs/templates/index.md
@@ -0,0 +1,15 @@
+---
+headless: true
+
+resources:
+- src: "*alpha*"
+ title: "alpha"
+- src: "*beta*"
+ title: "beta"
+- src: "*deprecated*"
+# title: "deprecated"
+ title: "застарілий"
+- src: "*stable*"
+# title: "stable"
+ title: "стабільний"
+---
\ No newline at end of file
diff --git a/content/uk/docs/tutorials/_index.md b/content/uk/docs/tutorials/_index.md
new file mode 100644
index 0000000000000..ad03de23df642
--- /dev/null
+++ b/content/uk/docs/tutorials/_index.md
@@ -0,0 +1,90 @@
+---
+#title: Tutorials
+title: Навчальні матеріали
+main_menu: true
+weight: 60
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+
+У цьому розділі документації Kubernetes зібрані навчальні матеріали. Кожний матеріал показує, як досягти окремої мети, що більша за одне [завдання](/docs/tasks/). Зазвичай навчальний матеріал має декілька розділів, кожен з яких містить певну послідовність дій. До ознайомлення з навчальними матеріалами вам, можливо, знадобиться додати у закладки сторінку з [Глосарієм](/docs/reference/glossary/) для подальшого консультування.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+
+## Основи
+
+
+* [Основи Kubernetes](/docs/tutorials/kubernetes-basics/) - детальний навчальний матеріал з інтерактивними уроками, що допоможе вам зрозуміти Kubernetes і спробувати його базову функціональність.
+
+* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
+
+* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
+
+* [Привіт Minikube](/docs/tutorials/hello-minikube/)
+
+
+## Конфігурація
+
+* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
+
+## Застосунки без стану (Stateless Applications)
+
+* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
+
+* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
+
+## Застосунки зі станом (Stateful Applications)
+
+* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
+
+* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
+
+* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/)
+
+* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
+
+## CI/CD Pipeline
+
+* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview)
+
+* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2)
+
+* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3)
+
+* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4)
+
+## Кластери
+
+* [AppArmor](/docs/tutorials/clusters/apparmor/)
+
+## Сервіси
+
+* [Using Source IP](/docs/tutorials/services/source-ip/)
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+Якщо ви хочете написати навчальний матеріал, у статті
+[Використання шаблонів сторінок](/docs/home/contribute/page-templates/)
+ви знайдете інформацію про тип навчальної сторінки і шаблон.
+
+{{% /capture %}}
diff --git a/content/uk/docs/tutorials/hello-minikube.md b/content/uk/docs/tutorials/hello-minikube.md
new file mode 100644
index 0000000000000..356426112be9f
--- /dev/null
+++ b/content/uk/docs/tutorials/hello-minikube.md
@@ -0,0 +1,394 @@
+---
+#title: Hello Minikube
+title: Привіт Minikube
+content_template: templates/tutorial
+weight: 5
+menu:
+ main:
+ #title: "Get Started"
+ title: "Початок роботи"
+ weight: 10
+  #post: >
+  # Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.
+card:
+ #name: tutorials
+ name: навчальні матеріали
+ weight: 10
+---
+
+{{% capture overview %}}
+
+
+З цього навчального матеріалу ви дізнаєтесь, як запустити у Kubernetes простий Hello World застосунок на Node.js за допомогою [Minikube](/docs/setup/learning-environment/minikube) і Katacoda. Katacoda надає безплатне Kubernetes середовище, що доступне у вашому браузері.
+
+
+{{< note >}}
+Також ви можете навчатись за цим матеріалом, якщо встановили [Minikube локально](/docs/tasks/tools/install-minikube/).
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture objectives %}}
+
+
+* Розгорнути Hello World застосунок у Minikube.
+
+* Запустити застосунок.
+
+* Переглянути логи застосунку.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+
+У цьому навчальному матеріалі ми використовуємо образ контейнера, зібраний із наступних файлів:
+
+{{< codenew language="js" file="minikube/server.js" >}}
+
+{{< codenew language="conf" file="minikube/Dockerfile" >}}
+
+
+Більше інформації про команду `docker build` ви знайдете у [документації Docker](https://docs.docker.com/engine/reference/commandline/build/).
+
+{{% /capture %}}
+
+{{% capture lessoncontent %}}
+
+
+## Створення Minikube кластера
+
+
+1. Натисніть кнопку **Запуск термінала**
+
+ {{< kat-button >}}
+
+
+ {{< note >}}Якщо Minikube встановлений локально, виконайте команду `minikube start`.{{< /note >}}
+
+
+2. Відкрийте Kubernetes дашборд у браузері:
+
+ ```shell
+ minikube dashboard
+ ```
+
+
+3. Тільки для Katacoda: у верхній частині вікна термінала натисніть знак плюс, а потім -- **Select port to view on Host 1**.
+
+
+4. Тільки для Katacoda: введіть `30000`, а потім натисніть **Display Port**.
+
+
+## Створення Deployment
+
+
+[*Под*](/docs/concepts/workloads/pods/pod/) у Kubernetes -- це група з одного або декількох контейнерів, що об'єднані разом з метою адміністрування і роботи у мережі. У цьому навчальному матеріалі Под має лише один контейнер. Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) перевіряє стан Пода і перезапускає контейнер Пода, якщо контейнер перестає працювати. Створювати і масштабувати Поди рекомендується за допомогою Deployment'ів.
+
+
+1. За допомогою команди `kubectl create` створіть Deployment, який керуватиме Подом. Под запускає контейнер на основі наданого Docker образу.
+
+ ```shell
+ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
+ ```
+
+
+2. Перегляньте інформацію про запущений Deployment:
+
+ ```shell
+ kubectl get deployments
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ hello-node 1/1 1 1 1m
+ ```
+
+
+3. Перегляньте інформацію про запущені Поди:
+
+ ```shell
+ kubectl get pods
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m
+ ```
+
+
+4. Перегляньте події кластера:
+
+ ```shell
+ kubectl get events
+ ```
+
+
+5. Перегляньте конфігурацію `kubectl`:
+
+ ```shell
+ kubectl config view
+ ```
+
+
+ {{< note >}}Більше про команди `kubectl` ви можете дізнатися зі статті [Загальна інформація про kubectl](/docs/user-guide/kubectl-overview/).{{< /note >}}
+
+
+## Створення Сервісу
+
+
+За умовчанням, Под доступний лише за внутрішньою IP-адресою у межах Kubernetes кластера. Для того, щоб контейнер `hello-node` став доступний за межами віртуальної мережі Kubernetes, Под необхідно відкрити як Kubernetes [*Сервіс*](/docs/concepts/services-networking/service/).
+
+
+1. Відкрийте Под для публічного доступу з інтернету за допомогою команди `kubectl expose`:
+
+ ```shell
+ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
+ ```
+
+
+ Прапорець `--type=LoadBalancer` вказує, що ви хочете відкрити доступ до Сервісу за межами кластера.
+
+
+2. Перегляньте інформацію за Сервісом, який ви щойно створили:
+
+ ```shell
+ kubectl get services
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+    NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
+    hello-node   LoadBalancer   10.108.144.78   <pending>     8080:30369/TCP   21s
+    kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m
+ ```
+
+
+ Для хмарних провайдерів, що підтримують балансування навантаження, доступ до Сервісу надається через зовнішню IP-адресу. Для Minikube, тип `LoadBalancer` робить Сервіс доступним ззовні за допомогою команди `minikube service`.
+
+
+3. Виконайте наступну команду:
+
+ ```shell
+ minikube service hello-node
+ ```
+
+
+4. Тільки для Katacoda: натисніть знак плюс, а потім -- **Select port to view on Host 1**.
+
+
+5. Тільки для Katacoda: запишіть п'ятизначний номер порту, що відображається напроти `8080` у виводі сервісу. Номер цього порту генерується довільно і тому може бути іншим у вашому випадку. Введіть номер порту у призначене для цього текстове поле і натисніть Display Port. У нашому прикладі номер порту `30369`.
+
+
+ Це відкриє вікно браузера, в якому запущений ваш застосунок, і покаже повідомлення "Hello World".
+
+
+## Увімкнення розширень
+
+
+Minikube має ряд вбудованих {{< glossary_tooltip text="розширень" term_id="addons" >}}, які можна увімкнути, вимкнути і відкрити у локальному Kubernetes оточенні.
+
+
+1. Перегляньте перелік підтримуваних розширень:
+
+ ```shell
+ minikube addons list
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ addon-manager: enabled
+ dashboard: enabled
+ default-storageclass: enabled
+ efk: disabled
+ freshpod: disabled
+ gvisor: disabled
+ helm-tiller: disabled
+ ingress: disabled
+ ingress-dns: disabled
+ logviewer: disabled
+ metrics-server: disabled
+ nvidia-driver-installer: disabled
+ nvidia-gpu-device-plugin: disabled
+ registry: disabled
+ registry-creds: disabled
+ storage-provisioner: enabled
+ storage-provisioner-gluster: disabled
+ ```
+
+
+2. Увімкніть розширення, наприклад `metrics-server`:
+
+ ```shell
+ minikube addons enable metrics-server
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ metrics-server was successfully enabled
+ ```
+
+
+3. Перегляньте інформацію про Под і Сервіс, які ви щойно створили:
+
+ ```shell
+ kubectl get pod,svc -n kube-system
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m
+ pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m
+ pod/metrics-server-67fb648c5 1/1 Running 0 26s
+ pod/etcd-minikube 1/1 Running 0 34m
+ pod/influxdb-grafana-b29w8 2/2 Running 0 26s
+ pod/kube-addon-manager-minikube 1/1 Running 0 34m
+ pod/kube-apiserver-minikube 1/1 Running 0 34m
+ pod/kube-controller-manager-minikube 1/1 Running 0 34m
+ pod/kube-proxy-rnlps 1/1 Running 0 34m
+ pod/kube-scheduler-minikube 1/1 Running 0 34m
+ pod/storage-provisioner 1/1 Running 0 34m
+
+    NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
+    service/metrics-server        ClusterIP   10.96.241.45    <none>        80/TCP              26s
+    service/kube-dns              ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP       34m
+    service/monitoring-grafana    NodePort    10.99.24.54     <none>        80:30002/TCP        26s
+    service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>        8083/TCP,8086/TCP   26s
+ ```
+
+
+4. Вимкніть `metrics-server`:
+
+ ```shell
+ minikube addons disable metrics-server
+ ```
+
+
+ У виводі ви побачите подібну інформацію:
+
+ ```
+ metrics-server was successfully disabled
+ ```
+
+
+## Вивільнення ресурсів
+
+
+Тепер ви можете видалити ресурси, які створили у вашому кластері:
+
+```shell
+kubectl delete service hello-node
+kubectl delete deployment hello-node
+```
+
+
+За бажанням, зупиніть віртуальну машину (ВМ) з Minikube:
+
+```shell
+minikube stop
+```
+
+
+За бажанням, видаліть ВМ з Minikube:
+
+```shell
+minikube delete
+```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+* Дізнайтеся більше про [об'єкти Deployment](/docs/concepts/workloads/controllers/deployment/).
+
+* Дізнайтеся більше про [розгортання застосунків](/docs/user-guide/deploying-applications/).
+
+* Дізнайтеся більше про [об'єкти сервісу](/docs/concepts/services-networking/service/).
+
+{{% /capture %}}
diff --git a/content/uk/docs/tutorials/kubernetes-basics/_index.html b/content/uk/docs/tutorials/kubernetes-basics/_index.html
new file mode 100644
index 0000000000000..466b8b3437340
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/_index.html
@@ -0,0 +1,138 @@
+---
+title: Дізнатися про основи Kubernetes
+linkTitle: Основи Kubernetes
+weight: 10
+card:
+ name: навчальні матеріали
+ weight: 20
+ title: Знайомство з основами
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Основи Kubernetes
+
+
Цей навчальний матеріал ознайомить вас з основами системи оркестрації Kubernetes кластера. Кожен модуль містить загальну інформацію щодо основної функціональності і концепцій Kubernetes, а також інтерактивний онлайн-урок. Завдяки цим інтерактивним урокам ви зможете самостійно керувати простим кластером і розгорнутими в ньому контейнеризованими застосунками.
+
+
З інтерактивних уроків ви дізнаєтесь:
+
+
+
як розгорнути контейнеризований застосунок у кластері.
+
+
як масштабувати Deployment.
+
+
як розгорнути нову версію контейнеризованого застосунку.
+
+
як відлагодити контейнеризований застосунок.
+
+
+
Навчальні матеріали використовують Katacoda для запуску у вашому браузері віртуального термінала, в якому запущено Minikube - невеликий локально розгорнутий Kubernetes, що може працювати будь-де. Вам не потрібно встановлювати або налаштовувати жодне програмне забезпечення: кожен інтерактивний урок запускається просто у вашому браузері.
+
+
+
+
+
+
+
+
+
Чим Kubernetes може бути корисний для вас?
+
+
Від сучасних вебсервісів користувачі очікують доступності 24/7, а розробники - можливості розгортати нові версії цих застосунків по кілька разів на день. Контейнеризація, що допомагає упакувати програмне забезпечення, якнайкраще сприяє цим цілям. Вона дозволяє випускати і оновлювати застосунки легко, швидко та без простою. Із Kubernetes ви можете бути певні, що ваші контейнеризовані застосунки запущені там і тоді, де ви цього хочете, а також забезпечені усіма необхідними для роботи ресурсами та інструментами. Kubernetes - це висококласна платформа з відкритим вихідним кодом, в основі якої - накопичений досвід оркестрації контейнерів від Google, поєднаний із найкращими ідеями і практиками від спільноти.
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
new file mode 100644
index 0000000000000..1a4e179a69e77
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
@@ -0,0 +1,152 @@
+---
+title: Використання Minikube для створення кластера
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Цілі
+
+
+
Зрозуміти, що таке Kubernetes кластер.
+
+
Зрозуміти, що таке Minikube.
+
+
Запустити Kubernetes кластер за допомогою онлайн-термінала.
+
+
+
+
+
+
Kubernetes кластери
+
+
+ Kubernetes координує високодоступний кластер комп'ютерів, з'єднаних таким чином, щоб працювати як одне ціле. Абстракції Kubernetes дозволяють вам розгортати контейнеризовані застосунки в кластері без конкретної прив'язки до окремих машин. Для того, щоб скористатися цією новою моделлю розгортання, застосунки потрібно упакувати таким чином, щоб звільнити їх від прив'язки до окремих хостів, тобто контейнеризувати. Контейнеризовані застосунки більш гнучкі і доступні, ніж попередні моделі розгортання, що передбачали встановлення застосунків безпосередньо на призначені для цього машини у вигляді програмного забезпечення, яке глибоко інтегрувалося із хостом. Kubernetes дозволяє автоматизувати розподіл і запуск контейнерів застосунку у кластері, а це набагато ефективніше. Kubernetes - це платформа з відкритим вихідним кодом, готова для використання у проді.
+
+
+
Kubernetes кластер складається з двох типів ресурсів:
+
+
master, що координує роботу кластера
+
вузли (nodes) - робочі машини, на яких запущені застосунки
+
+
+
+
+
+
+
+
Зміст:
+
+
+
Kubernetes кластер
+
+
Minikube
+
+
+
+
+
+ Kubernetes - це довершена платформа з відкритим вихідним кодом, що оркеструє розміщення і запуск контейнерів застосунку всередині та між комп'ютерними кластерами.
+
+
+
+
+
+
+
+
+
Схема кластера
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Master відповідає за керування кластером. Master координує всі процеси у вашому кластері, такі як запуск застосунків, підтримка їх бажаного стану, масштабування застосунків і викатка оновлень.
+
+
+
Вузол (node) - це ВМ або фізичний комп'ютер, що виступає у ролі робочої машини в Kubernetes кластері. Кожен вузол має kubelet - агент для управління вузлом і обміну даними з Kubernetes master. Також на вузлі мають бути встановлені інструменти для виконання операцій з контейнерами, такі як Docker або rkt. Kubernetes кластер у проді повинен складатися як мінімум із трьох вузлів.
+
+
+
+
+
+
Master'и керують кластером, а вузли використовуються для запуску застосунків.
+
+
+
+
+
+
+
+
Коли ви розгортаєте застосунки у Kubernetes, ви кажете master-вузлу запустити контейнери застосунку. Master розподіляє контейнери для запуску на вузлах кластера. Для обміну даними з master вузли використовують Kubernetes API, який надається master-вузлом. Кінцеві користувачі також можуть взаємодіяти із кластером безпосередньо через Kubernetes API.
+
+
+
Kubernetes кластер можна розгорнути як на фізичних, так і на віртуальних серверах. Щоб розпочати розробку під Kubernetes, ви можете скористатися Minikube - спрощеною реалізацією Kubernetes. Minikube створює на вашому локальному комп'ютері ВМ, на якій розгортає простий кластер з одного вузла. Існують версії Minikube для операційних систем Linux, macOS та Windows. Minikube CLI надає основні операції для роботи з вашим кластером, такі як start, stop, status і delete. Однак у цьому уроці ви використовуватимете онлайн термінал із вже встановленим Minikube.
+
+
+
Тепер ви знаєте, що таке Kubernetes. Тож давайте перейдемо до інтерактивного уроку і створимо ваш перший кластер!
+
+
+
diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
new file mode 100644
index 0000000000000..ce9229ca852da
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
@@ -0,0 +1,151 @@
+---
+title: Використання kubectl для створення Deployment'а
+weight: 10
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Цілі
+
+
+
Дізнатися, що таке Deployment застосунків.
+
+
Розгорнути свій перший застосунок у Kubernetes за допомогою kubectl.
+
+
+
+
+
+
Процеси Kubernetes Deployment
+
+
+ Після того, як ви запустили Kubernetes кластер, ви можете розгортати в ньому контейнеризовані застосунки. Для цього вам необхідно створити Deployment конфігурацію. Вона інформує Kubernetes, як створювати і оновлювати Поди для вашого застосунку. Після того, як ви створили Deployment, Kubernetes master розподіляє ці Поди по окремих вузлах кластера.
+
+
+
+
Після створення Поди застосунку безперервно моніторяться контролером Kubernetes Deployment. Якщо вузол, на якому розміщено Под, зупинив роботу або був видалений, Deployment контролер переміщає цей Под на інший вузол кластера. Так працює механізм самозцілення, що підтримує робочий стан кластера у разі апаратного збою чи технічних робіт.
+
+
+
До появи оркестрації застосунки часто запускали за допомогою скриптів установлення. Однак скрипти не давали можливості відновити працездатний стан застосунку після апаратного збою. Завдяки створенню Подів та їхньому запуску на вузлах кластера, Kubernetes Deployment надає цілковито інший підхід до управління застосунками.
+
+
+
+
+
+
+
Зміст:
+
+
+
Deployment'и
+
Kubectl
+
+
+
+
+
+ Deployment відповідає за створення і оновлення Подів для вашого застосунку
+
+
+
+
+
+
+
+
+
Як розгорнути ваш перший застосунок у Kubernetes
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Ви можете створити Deployment і керувати ним за допомогою командного рядка Kubernetes - kubectl. kubectl взаємодіє з кластером через API Kubernetes. У цьому модулі ви вивчите найпоширеніші команди kubectl для створення Deployment'ів, які запускатимуть ваші застосунки у Kubernetes кластері.
+
+
+
Коли ви створюєте Deployment, вам необхідно задати образ контейнера для вашого застосунку і скільки реплік ви хочете запустити. Згодом цю інформацію можна змінити, оновивши Deployment. У навчальних модулях 5 і 6 йдеться про те, як масштабувати і оновлювати Deployment'и.
+
+
+
+
+
+
+
+
+
Для того, щоб розгортати застосунки в Kubernetes, їх потрібно упакувати в один із підтримуваних форматів контейнерів
+
+
+
+
+
+
+
+
+ Для створення Deployment'а ви використовуватимете застосунок, написаний на Node.js і упакований в Docker контейнер. (Якщо ви ще не пробували створити Node.js застосунок і розгорнути його у контейнері, радимо почати саме з цього; інструкції ви знайдете у навчальному матеріалі Привіт Minikube).
+
+
+
+
Тепер ви знаєте, що таке Deployment. Тож давайте перейдемо до інтерактивного уроку і розгорнемо ваш перший застосунок!
Коли ви створили Deployment у модулі 2, Kubernetes створив Под, щоб розмістити ваш застосунок. Под - це абстракція в Kubernetes, що являє собою групу з одного або декількох контейнерів застосунку (як Docker або rkt) і ресурси, спільні для цих контейнерів. До цих ресурсів належать:
+
+
+
Спільні сховища даних, або Volumes
+
+
Мережа, адже кожен Под у кластері має унікальну IP-адресу
+
+
Інформація з запуску кожного контейнера, така як версія образу контейнера або використання певних портів
+
+
+
Под моделює специфічний для даного застосунку "логічний хост" і може містити різні, але доволі щільно зв'язані контейнери. Наприклад, в одному Поді може бути контейнер з вашим Node.js застосунком та інший контейнер, що передає дані для публікації Node.js вебсерверу. Контейнери в межах Пода мають спільну IP-адресу і порти, завжди є сполученими, плануються для запуску разом і запускаються у спільному контексті на одному вузлі.
+
+
+
Под є неподільною одиницею платформи Kubernetes. Коли ви створюєте Deployment у Kubernetes, цей Deployment створює Поди вже з контейнерами всередині, на відміну від створення контейнерів окремо. Кожен Под прив'язаний до вузла, до якого його було розподілено, і лишається на ньому до припинення роботи (згідно з політикою перезапуску) або видалення. У разі відмови вузла ідентичні Поди розподіляються по інших доступних вузлах кластера.
+
+
+
+
+
Зміст:
+
+
Поди
+
Вузли
+
Основні команди kubectl
+
+
+
+
+
+ Под - це група з одного або декількох контейнерів (таких як Docker або rkt), що має спільне сховище даних (volumes), унікальну IP-адресу і містить інформацію як їх запустити.
+
+
+
+
+
+
+
+
+
Узагальнена схема Подів
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Nodes

A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the master. A Node can have multiple Pods, and the Kubernetes master automatically schedules the Pods across the Nodes in the cluster, taking into account the resources available on each Node.

Every Kubernetes Node runs at least:

kubelet, a process responsible for communication between the Kubernetes master and the worker Node; it manages the Pods and the containers running on the machine.
A container runtime (like Docker, rkt) responsible for pulling the container image from a registry, unpacking the container, and running the application.
Containers should only be scheduled together in a single Pod if they are tightly coupled and need to share resources such as disk.
Node overview
Troubleshooting with kubectl

In Module 2, you used the kubectl command-line interface. You'll continue to use it in Module 3 to get information about deployed applications and their environments. The following kubectl commands cover the most common operations:

kubectl get - list resources
kubectl describe - show detailed information about a resource
kubectl logs - print the logs from a container in a Pod
kubectl exec - execute a command on a container in a Pod

You can use these commands to see when and in what environment an application was deployed, and to check its current status and configuration.
Now that we know more about our cluster components and the command line, let's explore our application.

A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Multiple Pods can run on one Node.
diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html
new file mode 100644
index 0000000000000..0ddad3b8da039
--- /dev/null
+++ b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -0,0 +1,169 @@
+---
+#title: Using a Service to Expose Your App
+title: Використання Сервісу для відкриття доступу до застосунку за межами кластера
+weight: 10
+---
Objectives

Learn about a Service in Kubernetes
Understand how labels and LabelSelector relate to a Service
Expose an application outside a Kubernetes cluster using a Service
Overview of Kubernetes Services

Kubernetes Pods are mortal and have their own lifecycle. When a worker Node dies, we also lose the Pods running on it. A ReplicaSet can dynamically drive the cluster back to the desired state by creating new Pods, keeping your application running. As another example, consider an image-processing backend with three replicas. Those replicas are exchangeable; the frontend system should not care about backend replicas or about a Pod being lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using a YAML file (recommended) or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a LabelSelector (see below for why you might sometimes want a Service without a selector in the spec).

Although each Pod has a unique IP, those IPs are not exposed outside the cluster without a Service. Services allow traffic to reach your applications. Services can be exposed in different ways by specifying a type in the ServiceSpec:

ClusterIP (default) - exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
NodePort - exposes the Service on the same port of each selected Node in the cluster using NAT. Makes the Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
LoadBalancer - creates an external load balancer in the current cloud (if supported) and assigns a fixed external IP to the Service. Superset of NodePort.
ExternalName - exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record. No proxy is used. This type requires kube-dns v1.7 or higher.
Additionally, note that there are some use cases with Services that involve not defining a selector in the spec. A Service created without a selector will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another possibility why there may be no selector is that you are strictly using type: ExternalName.
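Since the page says Services are defined in YAML, here is a sketch of a Service of type NodePort that targets Pods by label; the name, label, and port values are illustrative assumptions, not values mandated by this tutorial:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp     # example name
spec:
  type: NodePort                # reachable from outside via <NodeIP>:<NodePort>
  selector:                     # LabelSelector: Pods carrying this label back the Service
    app: kubernetes-bootcamp
  ports:
  - port: 8080                  # port the Service exposes inside the cluster
    targetPort: 8080            # container port the traffic is forwarded to
```

Omitting the selector block entirely would produce a Service with no corresponding Endpoints object, matching the manual-mapping and ExternalName cases described above.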
Summary

Exposing Pods to external traffic
Load balancing traffic across Pods
Using labels

A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing, and service discovery for those Pods.
Services and Labels
A Service routes traffic across the set of Pods that make it up. Services are the abstraction that allows Pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components of an application) are handled by Kubernetes Services.

Services match a set of Pods using labels and selectors, grouping primitives that allow logical operations on objects in Kubernetes. Labels are key/value pairs attached to objects and can be used in a number of ways:

Designate objects for development, test, and production environments
Embed version tags
Classify objects using tags
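The three uses of labels above can be sketched as plain key/value pairs in an object's metadata; the keys and values here are illustrative conventions, not names required by Kubernetes:

```yaml
metadata:
  labels:
    app: kubernetes-bootcamp   # classify the object with a tag
    environment: production    # designate the dev/test/prod environment
    version: v1                # embed a version tag
```

A Service's selector then needs only the subset of these labels it cares about, e.g. `app: kubernetes-bootcamp`, to match the Pods carrying them.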
You can create a Service at the same time you create a Deployment by using --expose in kubectl.
Labels can be attached to objects at creation time or later on. They can be modified at any time. Let's expose our application now using a Service and apply some labels.

In the previous modules we created a Deployment and exposed it to external traffic via a Service. The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.
Scaling is accomplished by changing the number of replicas in a Deployment.
Summary:

Scaling a Deployment

You can specify the number of Pods right when creating a Deployment, using the --replicas parameter of the kubectl run command.
Scaling out a Deployment ensures that new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also supports autoscaling, but that is outside the scope of this tutorial. Scaling to zero is also possible; it will terminate all Pods of the specified Deployment.
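A sketch of what this change looks like in a Deployment manifest; the field names are standard, while the count of four is just an example of a new desired state:

```yaml
spec:
  replicas: 4   # desired state: four Pods instead of one
```

Raising or lowering this single value (or setting it to 0) is all that scaling a Deployment amounts to declaratively.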
Running an application on multiple Pods requires a way to distribute traffic among them. Services have an integrated load balancer that distributes network traffic to all Pods of an exposed Deployment. Services continuously monitor the running Pods using endpoints, to ensure that traffic is sent only to available Pods.
Scaling is accomplished by changing the number of replicas in a Deployment.
Once you have multiple instances of an application running, you will be able to do rolling updates without downtime. We'll cover that in the next module. Now, let's go to the online terminal and scale our application.

Perform a rolling update using kubectl.
Updating an application
Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is made possible with rolling updates. Rolling updates allow a Deployment to be updated with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources.

In the previous module we scaled our application to run multiple instances. This is a requirement for performing updates without affecting application availability. By default, the maximum number of Pods that can be unavailable during the update and the maximum number of new Pods that can be created are both one. Both options can be configured as either numbers or percentages (of Pods).
In Kubernetes, updates are versioned, and any Deployment update can be rolled back to a previous (stable) version.
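The two limits described above live in the Deployment's update strategy. A sketch of the relevant fragment, with values mirroring the one-at-a-time behavior this page describes (both fields also accept percentages):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod may be unavailable during the update
      maxSurge: 1         # at most one extra Pod above the desired count
```

With these settings, a four-replica Deployment is updated one Pod at a time: a new Pod comes up, an old one is terminated, and so on until all replicas run the new version.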
Summary:

Updating an app

A rolling update allows a Deployment to be updated with zero downtime by incrementally replacing Pod instances with new ones.
As with scaling, if a Deployment is exposed publicly, the Service will load-balance traffic only to available Pods during the update. An available Pod is an instance that is ready for use by the application's users.
Rolling updates allow you to:

Promote an application from one environment to another (via container image updates)
Roll back to previous versions
Perform continuous integration and continuous delivery of applications with zero downtime
If a Deployment is exposed publicly, the Service will load-balance traffic only to available Pods during the update.
In the interactive tutorial, we'll update our application to a new version and then roll it back to the previous one.