diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index c7040f7fa8133..25b0ff475351a 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -6,6 +6,8 @@ your pull request. The description should explain what will change, and why. + PLEASE title the FIRST commit appropriately, so that if you squash all + your commits into one, the combined commit message makes sense. For overall help on editing and submitting pull requests, visit: https://kubernetes.io/docs/contribute/start/#improve-existing-content diff --git a/OWNERS b/OWNERS index 29499e0b4fe6e..3dd8f337bae42 100644 --- a/OWNERS +++ b/OWNERS @@ -7,10 +7,10 @@ approvers: - sig-docs-en-owners # Defined in OWNERS_ALIASES emeritus_approvers: -- chenopis -- jaredbhatti +# chenopis, you're welcome to return when you're ready to resume PR wrangling +# jaredbhatti, you're welcome to return when you're ready to resume PR wrangling # stewart-yu, you're welcome to return when you're ready to resume PR wrangling - stewart-yu labels: -- sig/docs \ No newline at end of file +- sig/docs diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index 8a8da8978e5d9..0a56370edd81f 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -48,6 +48,7 @@ aliases: - makoscafee - onlydole - Rajakavitha1 + - savitharaghunathan - sftim - steveperry-53 - tengqm @@ -148,6 +149,7 @@ aliases: - gochist - ianychoi - seokho-son + - ysyukr - zacharysarah sig-docs-ko-reviews: # PR reviews for Korean content - ClaudiaJKang @@ -155,13 +157,13 @@ aliases: - ianychoi - seokho-son - ysyukr - sig-docs-maintainers: # Website maintainers + sig-docs-leads: # Website chairs and tech leads - jimangel - kbarnard10 - - pwittrock - - steveperry-53 + - kbhawkey + - onlydole + - sftim - zacharysarah - - zparnold sig-docs-zh-owners: # Admins for Chinese content - chenopis - chenrui333 @@ -181,6 +183,7 @@ aliases: - idealhack - markthink - SataQiu + - tanjunchen - tengqm - xiangpengzhao - xichengliudui @@ -221,3 +224,13 @@ aliases: - mfilocha - nvtkaszpir - kpucynski + sig-docs-uk-owners: # Admins for Ukrainian content + - anastyakulyk + - butuzov + - MaxymVlasov + sig-docs-uk-reviews: # PR reviews for Ukrainian content + - anastyakulyk + - butuzov + - idvoretskyi + - MaxymVlasov + - Potapy4 diff --git a/README-pl.md b/README-pl.md index c05631df91fe8..f8bffdb0b24df 100644 --- a/README-pl.md +++ b/README-pl.md @@ -22,14 +22,17 @@ Więcej informacji na temat współpracy przy tworzeniu dokumentacji znajdziesz * [Lokalizacja dokumentacji Kubernetes](https://kubernetes.io/docs/contribute/localization/) ## Różne wersje językowe `README.md` -| | | -|---|---| -|[README po francusku](README-fr.md)|[README po koreańsku](README-ko.md)| -|[README po niemiecku](README-de.md)|[README po portugalsku](README-pt.md)| -|[README w hindi](README-hi.md)|[README po hiszpańsku](README-es.md)| -|[README po indonezyjsku](README-id.md)|[README po chińsku](README-zh.md)| -|[README po japońsku](README-ja.md)|[README po polsku](README-pl.md)| -||| + +| | | +|----------------------------------------|----------------------------------------| +| [README po angielsku](README.md) | [README po francusku](README-fr.md) | +| [README po koreańsku](README-ko.md) | [README po niemiecku](README-de.md) | +| [README po portugalsku](README-pt.md) | [README w hindi](README-hi.md) | +| [README po hiszpańsku](README-es.md) | [README po indonezyjsku](README-id.md) | +| [README po chińsku](README-zh.md) | [README po japońsku](README-ja.md) | +| [README po wietnamsku](README-vi.md) | [README 
po rosyjsku](README-ru.md) | +| [README po włosku](README-it.md) | [README po ukraińsku](README-uk.md) | +| | | ## Jak uruchomić lokalną kopię strony przy pomocy Dockera? @@ -41,7 +44,7 @@ Zalecaną metodą uruchomienia serwisu internetowego Kubernetesa lokalnie jest u choco install make ``` -> Jeśli wolisz uruchomić serwis lokalnie bez Dockera, przeczytaj [jak uruchomić serwis lokalnie przy pomocy Hugo](#jak-uruchomić-serwis-lokalnie-przy-pomocy-hugo) poniżej. +> Jeśli wolisz uruchomić serwis lokalnie bez Dockera, przeczytaj [jak uruchomić serwis lokalnie przy pomocy Hugo](#jak-uruchomić-lokalną-kopię-strony-przy-pomocy-hugo) poniżej. Jeśli [zainstalowałeś i uruchomiłeś](https://www.docker.com/get-started) już Dockera, zbuduj obraz `kubernetes-hugo` lokalnie: diff --git a/README-ru.md b/README-ru.md index 5b110b903dfb0..357578acf8a46 100644 --- a/README-ru.md +++ b/README-ru.md @@ -22,10 +22,13 @@ ## Файл `README.md` на других языках | | | |-------------------------------|-------------------------------| -| [Французский](README-fr.md) | [Корейский](README-ko.md) | -| [Немецкий](README-de.md) | [Португальский](README-pt.md) | -| [Хинди](README-hi.md) | [Испанский](README-es.md) | -| [Индонезийский](README-id.md) | [Китайский](README-zh.md) | +| [Английский](README.md) | [Французский](README-fr.md) | +| [Корейский](README-ko.md) | [Немецкий](README-de.md) | +| [Португальский](README-pt.md) | [Хинди](README-hi.md) | +| [Испанский](README-es.md) | [Индонезийский](README-id.md) | +| [Китайский](README-zh.md) | [Японский](README-ja.md) | +| [Вьетнамский](README-vi.md) | [Итальянский](README-it.md) | +| [Польский]( README-pl.md) | [Украинский](README-uk.md) | | | | ## Запуск сайта локально с помощью Docker diff --git a/README-uk.md b/README-uk.md new file mode 100644 index 0000000000000..53d8b2bb46a85 --- /dev/null +++ b/README-uk.md @@ -0,0 +1,84 @@ +# Документація Kubernetes + +[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website) +[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) + +Вітаємо! В цьому репозиторії міститься все необхідне для роботи над [сайтом і документацією Kubernetes](https://kubernetes.io/). Ми щасливі, що ви хочете зробити свій внесок! + +## Внесок у документацію + +Ви можете створити копію цього репозиторія у своєму акаунті на GitHub, натиснувши на кнопку **Fork**, що розташована справа зверху. Ця копія називатиметься *fork* (відгалуження). Зробіть будь-які необхідні зміни у своєму відгалуженні. Коли ви будете готові надіслати їх нам, перейдіть до свого відгалуження і створіть новий pull request, щоб сповістити нас. + +Після того, як ви створили pull request, рецензент Kubernetes зобов’язується надати вам по ньому чіткий і конструктивний коментар. **Ваш обов’язок як творця pull request - відкоригувати його відповідно до зауважень рецензента Kubernetes.** Також, зауважте: може статися так, що ви отримаєте коментарі від декількох рецензентів Kubernetes або від іншого рецензента, ніж той, якого вам було призначено від початку. Крім того, за потреби один із ваших рецензентів може запросити технічну перевірку від одного з [технічних рецензентів Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers). Рецензенти намагатимуться відреагувати вчасно, проте час відповіді може відрізнятися в залежності від обставин. 
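+Наприклад, типовий робочий процес роботи з відгалуженням може виглядати так (ім'я акаунта та назва гілки наведені лише для прикладу):
+
+```bash
+# клонуйте своє відгалуження (замініть <github-username> на власний акаунт)
+git clone https://github.com/<github-username>/website.git
+cd website
+
+# створіть окрему гілку для своїх змін
+git checkout -b fix-typo
+
+# внесіть зміни, зробіть коміт і надішліть гілку у своє відгалуження
+git commit -am "Виправити одруківку в документації"
+git push origin fix-typo
+
+# після цього відкрийте pull request на GitHub
+```
+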
+ +Більше інформації про внесок у документацію Kubernetes ви знайдете у наступних джерелах: + +* [Внесок: з чого почати](https://kubernetes.io/docs/contribute/start/) +* [Візуалізація запропонованих змін до документації](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally) +* [Використання шаблонів сторінок](http://kubernetes.io/docs/contribute/style/page-templates/) +* [Керівництво зі стилю оформлення документації](http://kubernetes.io/docs/contribute/style/style-guide/) +* [Переклад документації Kubernetes іншими мовами](https://kubernetes.io/docs/contribute/localization/) + +## Файл `README.md` іншими мовами + +| | | +|-------------------------------|-------------------------------| +| [Англійська](README.md) | [Французька](README-fr.md) | +| [Корейська](README-ko.md) | [Німецька](README-de.md) | +| [Португальська](README-pt.md) | [Хінді](README-hi.md) | +| [Іспанська](README-es.md) | [Індонезійська](README-id.md) | +| [Китайська](README-zh.md) | [Японська](README-ja.md) | +| [В'єтнамська](README-vi.md) | [Російська](README-ru.md) | +| [Італійська](README-it.md) | [Польська](README-pl.md) | +| | | + +## Запуск сайту локально за допомогою Docker + +Для локального запуску сайту Kubernetes рекомендовано запустити спеціальний [Docker](https://docker.com)-образ, що містить генератор статичних сайтів [Hugo](https://gohugo.io). + +> Якщо ви працюєте під Windows, вам знадобиться ще декілька інструментів, які можна встановити за допомогою [Chocolatey](https://chocolatey.org). `choco install make` + +> Якщо ви вважаєте кращим запустити сайт локально без використання Docker, дивіться пункт нижче [Запуск сайту локально за допомогою Hugo](#запуск-сайту-локально-зa-допомогою-hugo). + +Якщо у вас вже [запущений](https://www.docker.com/get-started) Docker, зберіть локальний Docker-образ `kubernetes-hugo`: + +```bash +make docker-image +``` + +Після того, як образ зібрано, ви можете запустити сайт локально: + +```bash +make docker-serve +``` + +Відкрийте у своєму браузері http://localhost:1313, щоб побачити сайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує сайт відповідно до внесених змін і оновлює сторінку у браузері. + +## Запуск сайту локально зa допомогою Hugo + +Для інструкцій по установці Hugo дивіться [офіційну документацію](https://gohugo.io/getting-started/installing/). Обов’язково встановіть розширену версію Hugo, яка позначена змінною оточення `HUGO_VERSION` у файлі [`netlify.toml`](netlify.toml#L9). + +Після установки Hugo, запустіть сайт локально командою: + +```bash +make serve +``` + +Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити сайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує сайт відповідно до внесених змін і оновлює сторінку у браузері. + +## Спільнота, обговорення, внесок і підтримка + +Дізнайтеся, як долучитися до спільноти Kubernetes на [сторінці спільноти](http://kubernetes.io/community/). + +Для зв’язку із супроводжуючими проекту скористайтеся: + +- [Slack](https://kubernetes.slack.com/messages/sig-docs) +- [Поштова розсилка](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) + +### Кодекс поведінки + +Участь у спільноті Kubernetes визначається правилами [Кодексу поведінки спільноти Kubernetes](code-of-conduct.md). + +## Дякуємо! + +Долучення до спільноти - запорука успішного розвитку Kubernetes. Ми цінуємо ваш внесок у наш сайт і документацію! 
diff --git a/README.md b/README.md index 8be2ae4249ee0..3329b1a8c30f3 100644 --- a/README.md +++ b/README.md @@ -1,83 +1,65 @@ # The Kubernetes documentation -[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website) -[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) +[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) -Welcome! This repository houses all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute! +This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute! -## Contributing to the docs - -You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it. - -Once your pull request is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.** Also, note that you may end up having more than one Kubernetes reviewer provide you feedback or you may end up getting feedback from a Kubernetes reviewer that is different than the one initially assigned to provide you feedback. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion but response time can vary based on circumstances. - -For more information about contributing to the Kubernetes documentation, see: - -* [Start contributing](https://kubernetes.io/docs/contribute/start/) -* [Staging Your Documentation Changes](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally) -* [Using Page Templates](http://kubernetes.io/docs/contribute/style/page-templates/) -* [Documentation Style Guide](http://kubernetes.io/docs/contribute/style/style-guide/) -* [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/) - -## Localization `README.md`'s -| | | -|---|---| -|[French README](README-fr.md)|[Korean README](README-ko.md)| -|[German README](README-de.md)|[Portuguese README](README-pt.md)| -|[Hindi README](README-hi.md)|[Spanish README](README-es.md)| -|[Indonesian README](README-id.md)|[Chinese README](README-zh.md)| -|[Japanese README](README-ja.md)|[Vietnamese README](README-vi.md)| -|[Russian README](README-ru.md)|[Italian README](README-it.md)| -|[Polish README](README-pl.md)|| -||| - -## Running the website locally using Docker - -The recommended way to run the Kubernetes website locally is to run a specialized [Docker](https://docker.com) image that includes the [Hugo](https://gohugo.io) static website generator. 
- -> If you are running on Windows, you'll need a few more tools which you can install with [Chocolatey](https://chocolatey.org). `choco install make` +## Running the website locally using Hugo -> If you'd prefer to run the website locally without Docker, see [Running the website locally using Hugo](#running-the-website-locally-using-hugo) below. +See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file. -If you have Docker [up and running](https://www.docker.com/get-started), build the `kubernetes-hugo` Docker image locally: +To run the website locally when you have Hugo installed: ```bash -make docker-image +git clone https://github.com/kubernetes/website.git +cd website +hugo server --buildFuture ``` -Once the image has been built, you can run the website locally: +This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh. -```bash -make docker-serve -``` +## Get involved with SIG Docs -Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh. +Learn more about SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings). -## Running the website locally using Hugo +You can also reach the maintainers of this project at: -See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L9) file. +- [Slack](https://kubernetes.slack.com/messages/sig-docs) +- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) -To run the website locally when you have Hugo installed: +## Contributing to the docs -```bash -make serve -``` +You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it. -This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh. +Once your pull request is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.** -## Community, discussion, contribution, and support +Also, note that you may end up having more than one Kubernetes reviewer provide you feedback or you may end up getting feedback from a Kubernetes reviewer that is different than the one initially assigned to provide you feedback. -Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/). 
+Furthermore, in some cases, one of your reviewers might ask for a technical review from a Kubernetes tech reviewer when needed. Reviewers will do their best to provide feedback in a timely fashion but response time can vary based on circumstances. -You can reach the maintainers of this project at: +For more information about contributing to the Kubernetes documentation, see: -- [Slack](https://kubernetes.slack.com/messages/sig-docs) -- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) +* [Contribute to Kubernetes docs](https://kubernetes.io/docs/contribute/) +* [Using Page Templates](https://kubernetes.io/docs/contribute/style/page-templates/) +* [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/) +* [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/) + +## Localization `README.md`'s + +| Language | Language | +|---|---| +|[French](README-fr.md)|[Korean](README-ko.md)| +|[German](README-de.md)|[Portuguese](README-pt.md)| +|[Hindi](README-hi.md)|[Spanish](README-es.md)| +|[Indonesian](README-id.md)|[Chinese](README-zh.md)| +|[Japanese](README-ja.md)|[Vietnamese](README-vi.md)| +|[Russian](README-ru.md)|[Italian](README-it.md)| +|[Polish](README-pl.md)|[Ukrainian](README-uk.md)| -### Code of conduct +## Code of conduct -Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md). +Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). ## Thank you! -Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation! +Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation! \ No newline at end of file diff --git a/assets/sass/_desktop.sass b/assets/sass/_desktop.sass index 6eda82df7bcbf..1e6a49aef7503 100644 --- a/assets/sass/_desktop.sass +++ b/assets/sass/_desktop.sass @@ -2,7 +2,7 @@ $main-max-width: 1200px $vendor-strip-height: 44px $video-section-height: 550px -@media screen and (min-width: 1025px) +@media screen and (min-width: 1100px) #hamburger display: none diff --git a/config.toml b/config.toml index f8da1f89386d0..034b2e7c84d49 100644 --- a/config.toml +++ b/config.toml @@ -31,6 +31,7 @@ disableLanguages = ["hi", "it", "no"] [blackfriday] hrefTargetBlank = true fractions = false +smartDashes = false [frontmatter] date = ["date", ":filename", "publishDate", "lastmod"] @@ -298,3 +299,15 @@ contentDir = "content/pl" time_format_blog = "01.02.2006" # A list of language codes to look for untranslated content, ordered from left to right. language_alternatives = ["en"] + +[languages.uk] +title = "Kubernetes" +description = "Довершена система оркестрації контейнерів" +languageName = "Українська" +weight = 14 +contentDir = "content/uk" + +[languages.uk.params] +time_format_blog = "02.01.2006" +# A list of language codes to look for untranslated content, ordered from left to right. 
+language_alternatives = ["en"] diff --git a/content/de/docs/contribute/localization.md b/content/de/docs/contribute/localization.md new file mode 100644 index 0000000000000..79084cc0f270d --- /dev/null +++ b/content/de/docs/contribute/localization.md @@ -0,0 +1,289 @@ +--- +title: Lokalisierung der Kubernetes Dokumentation +content_template: templates/concept +weight: 50 +card: + name: mitarbeiten + weight: 50 + title: Übersetzen der Dokumentation +--- + +{{% capture overview %}} + +Diese Seite zeigt dir wie die Dokumentation für verschiedene Sprachen [lokalisiert](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/) wird. + +{{% /capture %}} + +{{% capture body %}} + +## Erste Schritte + +Da Mitwirkende nicht ihren eigenen Pull Request freigeben können, brauchst du mindestens zwei Mitwirkende um mit der Lokalisierung anfangen zu können. + +Alle Lokalisierungsteams müssen sich mit ihren eigenen Ressourcen selbst tragen. Die Kubernetes-Website ist gerne bereit, deine Arbeit zu beherbergen, aber es liegt an dir, sie zu übersetzen. + +### Finden deinen Zwei-Buchstaben-Sprachcode + +Rufe den [ISO 639-1 Standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) auf und finde deinen Zwei-Buchstaben-Ländercode zur Lokalisierung. Zum Beispiel ist der Zwei-Buchstaben-Code für Korea `ko`. + +### Duplizieren und klonen des Repositories + +Als erstes [erstells du dir deine eigenes Duplikat](/docs/contribute/new-content/new-content/#fork-the-repo) vom [kubernetes/website] Repository. + +Dann klonst du das Duplikat und `cd` hinein: + +```shell +git clone https://github.com//website +cd website +``` + +### Eröffne ein Pull Request + +Als nächstes [eröffnest du einen Pull Request](/docs/contribute/new-content/open-a-pr/#open-a-pr) (PR) um eine Lokalisierung zum `kubernetes/website` Repository hinzuzufügen. + +Der PR muss die [minimalen Inhaltsanforderungen](#mindestanforderungen) erfüllen bevor dieser genehmigt werden kann. + +Wie der PR für eine neue Lokalisierung aussieht kannst du dir an dem PR für die [Französische Dokumentation](https://github.com/kubernetes/website/pull/12548) ansehen. + +### Trete der Kubernetes GitHub Organisation bei + +Sobald du eine Lokalisierungs-PR eröffnet hast, kannst du Mitglied der Kubernetes GitHub Organisation werden. Jede Person im Team muss einen eigenen [Antrag auf Mitgliedschaft in der Organisation](https://github.com/kubernetes/org/issues/new/choose) im `kubernetes/org`-Repository erstellen. + +### Lokalisierungs-Team in GitHub hinzufügen + +Im nächsten Schritt musst du dein Kubernetes Lokalisierungs-Team in die [`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml) eintragen. + +Der PR des [Spanischen Lokalisierungs-Teams](https://github.com/kubernetes/org/pull/685) kann dir hierbei eine Hilfestellung sein. + +Mitglieder der `@kubernetes/sig-docs-**-owners` können nur PRs freigeben die innerhalb deines Lokalisierungs-Ordners Änderungen vorgenommen haben: `/content/**/`. + +Für jede Lokalisierung automatisiert das Team `@kubernetes/sig-docs-**-reviews` die Review-Zuweisung für neue PRs. + +Mitglieder von `@kubernetes/website-maintainers` können neue Entwicklungszweige schaffen, um die Übersetzungsarbeiten zu koordinieren. + +Mitglieder von `@kubernetes/website-milestone-maintainers` können den Befehl `/milestone` [Prow Kommando](https://prow.k8s.io/command-help) verwenden, um Themen oder PRs einen Meilenstein zuzuweisen. 
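+Zum Beispiel kann ein Mitglied von `@kubernetes/website-milestone-maintainers` einen PR über einen Kommentar wie den folgenden einem Meilenstein zuweisen (der Meilenstein-Name dient hier nur als Beispiel):
+
+```
+/milestone v1.18
+```
+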
+ +### Workflow konfigurieren + +Als nächstes fügst du ein GitHub-Label für deine Lokalisierung im `kubernetes/test-infra`-Repository hinzu. Mit einem Label kannst du Aufgaben filtern und Anfragen für deine spezifische Sprache abrufen. + +Schau dir den PR zum Hinzufügen der Labels für die [Italienischen Sprachen-Labels](https://github.com/kubernetes/test-infra/pull/11316 an. + + +### Finde eine Gemeinschaft + +Lasse die Kubernetes SIG Docs wissen, dass du an der Erstellung einer Lokalisierung interessiert bist! Trete dem [SIG Docs Slack-Kanal](https://kubernetes.slack.com/messages/C1J0BPD2M/) bei. Andere Lokalisierungsteams helfen dir gerne beim Einstieg und beantworten deine Fragen. + +Du kannst auch einen Slack-Kanal für deine Lokalisierung im `kubernetes/community`-Repository erstellen. Ein Beispiel für das Hinzufügen eines Slack-Kanals findest du im PR für [Kanäle für Indonesisch und Portugiesisch hinzufügen](https://github.com/kubernetes/community/pull/3605). + + +## Mindestanforderungen + +### Ändere die Website-Konfiguration + +Die Kubernetes-Website verwendet Hugo als Web-Framework. Die Hugo-Konfiguration der Website befindet sich in der Datei [`config.toml`](https://github.com/kubernetes/website/tree/master/config.toml). Um eine neue Lokalisierung zu unterstützen, musst du die Datei `config.toml` modifizieren. + +Dazu fügst du einen neuen Block für die neue Sprache unter den bereits existierenden `[languages]` Block in das `config.toml` ein, wie folgendes Beispiel zeigt: + +```toml +[languages.de] +title = "Kubernetes" +description = "Produktionsreife Container-Verwaltung" +languageName = "Deutsch" +contentDir = "content/de" +weight = 3 +``` + +Wenn du deinem Block einen Parameter `weight` zuweist, suche den Sprachblock mit dem höchsten Gewicht und addiere 1 zu diesem Wert. + +Weitere Informationen zu Hugos Multilingualen Support findest du unter "[Multilingual Mode](https://gohugo.io/content-management/multilingual/)" auf in der Hugo Dokumentation. + +### Neuen Lokalisierungsordner erstellen + +Füge eine sprachspezifisches Unterverzeichnis zum Ordner [`content`](https://github.com/kubernetes/website/tree/master/content) im Repository hinzu. Der Zwei-Buchstaben-Code für Deutsch ist zum Beispiel `de`: + +```shell +mkdir content/de +``` + +### Lokalisiere den Verhaltenscodex +Öffne einen PR gegen das [`cncf/foundation`](https://github.com/cncf/foundation/tree/master/code-of-conduct-languages) Repository, um den Verhaltenskodex in deiner Sprache hinzuzufügen. + +### Lokalisierungs README Datei hinzufügen + +Um andere Lokalisierungsmitwirkende anzuleiten, füge eine neue [`README-**.md`](https://help.github.com/articles/about-readmes/) auf der obersten Ebene von k/website hinzu, wobei `**` der aus zwei Buchstaben bestehende Sprachcode ist. Eine deutsche README-Datei wäre zum Beispiel `README-de.md`. + +Gebe den Lokalisierungsmitwirkende in der lokalisierten `README-**.md`-Datei Anleitung zum Mitwirken. Füge dieselben Informationen ein, die auch in `README.md` enthalten sind, sowie: + +- Eine Anlaufstelle für das Lokalisierungsprojekt +- Alle für die Lokalisierung spezifischen Informationen + +Nachdem du das lokalisierte README erstellt hast, füge der Datei einen Link aus der englischen Hauptdatei `README.md` hinzu und gebe die Kontaktinformationen auf Englisch an. Du kannst eine GitHub-ID, eine E-Mail-Adresse, [Slack-Kanal](https://slack.com/) oder eine andere Kontaktmethode angeben. Du musst auch einen Link zu deinem lokalisierten Verhaltenskodex der Gemeinschaft angeben. 
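+Ein frei erfundenes Beispiel dafür, wie ein solcher Verweis samt Kontaktangaben in der englischen `README.md` aussehen könnte (Benutzername und Slack-Kanal sind Platzhalter):
+
+```
+For the German localization see [README-de.md](README-de.md).
+Maintainer: @beispiel-benutzer (Slack: #kubernetes-docs-de)
+Code of conduct: <Link zum lokalisierten Verhaltenskodex>
+```
+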
+ +### Richte eine OWNERS Datei ein + +Um die Rollen der einzelnen an der Lokalisierung beteiligten Benutzer festzulegen, erstelle eine `OWNERS`-Datei innerhalb des sprachspezifischen Unterverzeichnisses mit: + +- **reviewers**: Eine Liste von kubernetes-Teams mit Gutachter-Rollen, in diesem Fall das `sig-docs-**-reviews` Team, das in [Lokalisierungsteam in GitHub hinzufügen](#lokalisierungs-team-in-github-hinzufügen) erstellt wurde. +- **approvers**: Eine Liste der Kubernetes-Teams mit der Rolle des Genehmigers, in diesem Fall das `sig-docs-**-owners` Team, das in [Lokalisierungsteam in GitHub hinzufügen](#lokalisierungs-team-in-github-hinzufügen) erstellt wurde. +- **labels**: Eine Liste von GitHub-Labels, die automatisch auf einen PR angewendet werden sollen, in diesem Fall das Sprachlabel, das unter [Workflow konfigurieren](#workflow-konfigurieren) erstellt wurde. + +Weitere Informationen über die Datei `OWNERS` findest du unter [go.k8s.io/owners](https://go.k8s.io/owners). + +Die Datei [Spanish OWNERS file](https://git.k8s.io/website/content/es/OWNERS), mit dem Sprachcode `es`, sieht wie folgt aus: + +```yaml +# See the OWNERS docs at https://go.k8s.io/owners + +# This is the localization project for Spanish. +# Teams and members are visible at https://github.com/orgs/kubernetes/teams. + +reviewers: +- sig-docs-es-reviews + +approvers: +- sig-docs-es-owners + +labels: +- language/es +``` +Nachdem du die sprachspezifische Datei `OWNERS` hinzugefügt hast, aktualisiere die root Datei [`OWNERS_ALIASES`](https://git.k8s.io/website/OWNERS_ALIASES) mit den neuen Kubernetes-Teams für die Lokalisierung, `sig-docs-**-owners` und `sig-docs-**-reviews`. + +Füge für jedes Team die Liste der unter [Ihr Lokalisierungsteam in GitHub hinzufügen](#lokalisierungs-team-in-github-hinzufügen) angeforderten GitHub-Benutzer in alphabetischer Reihenfolge hinzu. + +```diff +--- a/OWNERS_ALIASES ++++ b/OWNERS_ALIASES +@@ -48,6 +48,14 @@ aliases: + - stewart-yu + - xiangpengzhao + - zhangxiaoyu-zidif ++ sig-docs-es-owners: # Admins for Spanish content ++ - alexbrand ++ - raelga ++ sig-docs-es-reviews: # PR reviews for Spanish content ++ - alexbrand ++ - electrocucaracha ++ - glo-pena ++ - raelga + sig-docs-fr-owners: # Admins for French content + - perriea + - remyleone +``` + +## Inhalte übersetzen + +Die Lokalisierung *aller* Dokumentationen des Kubernetes ist eine enorme Aufgabe. Es ist in Ordnung, klein anzufangen und mit der Zeit zu erweitern. + +Alle Lokalisierungen müssen folgende Inhalte enthalten: + +Beschreibung | URLs +-----|----- +Startseite | [Alle Überschriften und Untertitel URLs](/docs/home/) +Einrichtung | [Alle Überschriften und Untertitel URLs](/docs/setup/) +Tutorials | [Kubernetes Grundlagen](/docs/tutorials/kubernetes-basics/), [Hello Minikube](/docs/tutorials/stateless-application/hello-minikube/) +Site strings | [Alle Website-Zeichenfolgen in einer neuen lokalisierten TOML-Datei](https://github.com/kubernetes/website/tree/master/i18n) + +Übersetzte Dokumente müssen sich in ihrem eigenen Unterverzeichnis `content/**/` befinden, aber ansonsten dem gleichen URL-Pfad wie die englische Quelle folgen. Um z.B. 
das Tutorial [Kubernetes Grundlagen](/docs/tutorials/kubernetes-basics/) für die Übersetzung ins Deutsche vorzubereiten, erstelle einen Unterordner unter dem Ordner `content/de/` und kopiere die englische Quelle: + +```shell +mkdir -p content/de/docs/tutorials +cp content/en/docs/tutorials/kubernetes-basics.md content/de/docs/tutorials/kubernetes-basics.md +``` + +Übersetzungswerkzeuge können den Übersetzungsprozess beschleunigen. Einige Redakteure bieten beispielsweise Plugins zur schnellen Übersetzung von Text an. + + +{{< caution >}} +Maschinelle Übersetzung allein reicht nicht aus. Die Lokalisierung erfordert eine umfassende menschliche Überprüfung, um Mindestqualitätsstandards zu erfüllen. +{{< /caution >}} + +Um die Genauigkeit in Grammatik und Bedeutung zu gewährleisten, sollten die Mitglieder deines Lokalisierungsteams alle maschinell erstellten Übersetzungen vor der Veröffentlichung sorgfältig überprüfen. + +### Quelldaten + +Lokalisierungen müssen auf den englischen Dateien der neuesten Version basieren, {{< latest-version >}}. + +Um die Quelldatei für das neueste Release führe folgende Schritte durch: + +1. Navigiere zum Repository der Website Kubernetes unter https://github.com/kubernetes/website. +2. Wähle den `release-1.X`-Zweig für die aktuellste Version. + +Die neueste Version ist {{< latest-version >}}, so dass der neueste Versionszweig [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}) ist. + +### Seitenverlinkung in der Internationalisierung + +Lokalisierungen müssen den Inhalt von [`i18n/de.toml`](https://github.com/kubernetes/website/blob/master/i18n/en.toml) in einer neuen sprachspezifischen Datei enthalten. Als Beispiel: `i18n/de.toml`. + +Füge eine neue Lokalisierungsdatei zu `i18n/` hinzu. Zum Beispiel mit Deutsch (`de`): + +```shell +cp i18n/en.toml i18n/de.toml +``` + +Übersetze dann den Wert jeder Zeichenfolge: + +```TOML +[docs_label_i_am] +other = "ICH BIN..." +``` +Durch die Lokalisierung von Website-Zeichenfolgen kannst du Website-weiten Text und Funktionen anpassen: z. B. den gesetzlichen Copyright-Text in der Fußzeile auf jeder Seite. + +### Sprachspezifischer Styleguide und Glossar + +Einige Sprachteams haben ihren eigenen sprachspezifischen Styleguide und ihr eigenes Glossar. Siehe zum Beispiel den [Leitfaden zur koreanischen Lokalisierung](/ko/docs/contribute/localization_ko/). + +## Branching Strategie + +Da Lokalisierungsprojekte in hohem Maße gemeinschaftliche Bemühungen sind, ermutigen wir Teams, in gemeinsamen Entwicklungszweigen zu arbeiten. + +In einem Entwicklungszweig zusammenzuarbeiten: + +1. Ein Teammitglied von [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) eröffnet einen Entwicklungszweig aus einem Quellzweig auf https://github.com/kubernetes/website. + + Deine Genehmiger sind dem `@kubernetes/website-maintainers`-Team beigetreten, als du [dein Lokalisierungsteam](#lokalisierungs-team-in-github-hinzufügen) zum Repository [`kubernetes/org`](https://github.com/kubernetes/org) hinzugefügt hast. + + Wir empfehlen das folgende Zweigbenennungsschema: + + `dev--.` + + Beispielsweise öffnet ein Genehmigender in einem deutschen Lokalisierungsteam den Entwicklungszweig `dev-1.12-de.1` direkt gegen das k/website-Repository, basierend auf dem Quellzweig für Kubernetes v1.12. + +2. Einzelne Mitwirkende öffnen Feature-Zweige, die auf dem Entwicklungszweig basieren. 
+ + Zum Beispiel öffnet ein deutscher Mitwirkende eine Pull-Anfrage mit Änderungen an `kubernetes:dev-1.12-de.1` von `Benutzername:lokaler-Zweig-Name`. + +3. Genehmiger Überprüfen und führen die Feature-Zweigen in den Entwicklungszweig zusammen. + +4. In regelmäßigen Abständen führt ein Genehmiger den Entwicklungszweig mit seinem Ursprungszweig zusammen, indem er eine neue Pull-Anfrage eröffnet und genehmigt. Achtet darauf, die Commits zusammenzuführen (squash), bevor die Pull-Anfrage genehmigt wird. + +Wiederhole die Schritte 1-4 nach Bedarf, bis die Lokalisierung abgeschlossen ist. Zum Beispiel würden nachfolgende deutsche Entwicklungszweige sein: `dev-1.12-de.2`, `dev-1.12-de.3`, usw. + +Die Teams müssen den lokalisierten Inhalt in demselben Versionszweig zusammenführen, aus dem der Inhalt stammt. Beispielsweise muss ein Entwicklungszweig, der von {{< release-branch >}} ausgeht, auf {{{{< release-branch >}}} basieren. + +Ein Genehmiger muss einen Entwicklungszweig aufrechterhalten, indem er seinen Quellzweig auf dem aktuellen Stand hält und Merge-Konflikte auflöst. Je länger ein Entwicklungszweig geöffnet bleibt, desto mehr Wartung erfordert er in der Regel. Ziehe in Betracht, regelmäßig Entwicklungszweige zusammenzuführen und neue zu eröffnen, anstatt einen extrem lang laufenden Entwicklungszweig zu unterhalten. + +Zu Beginn jedes Team-Meilensteins ist es hilfreich, ein Problem [Vergleich der Upstream-Änderungen](https://github.com/kubernetes/website/blob/master/scripts/upstream_changes.py) zwischen dem vorherigen Entwicklungszweig und dem aktuellen Entwicklungszweig zu öffnen. + + Während nur Genehmiger einen neuen Entwicklungszweig eröffnen und Pull-Anfragen zusammenführen können, kann jeder eine Pull-Anfrage für einen neuen Entwicklungszweig eröffnen. Es sind keine besonderen Genehmigungen erforderlich. + +Weitere Informationen über das Arbeiten von Forks oder direkt vom Repository aus findest du unter ["fork and clone the repo"](#duplizieren-und-klonen-des-repositories). + +## Am Upstream Mitwirken + +SIG Docs begrüßt Upstream Beiträge, also auf das englische Original, und Korrekturen an der englischen Quelle. + +## Unterstütze bereits bestehende Lokalisierungen + +Du kannst auch dazu beitragen, Inhalte zu einer bestehenden Lokalisierung hinzuzufügen oder zu verbessern. Trete dem [Slack-Kanal](https://kubernetes.slack.com/messages/C1J0BPD2M/) für die Lokalisierung bei und beginne mit der Eröffnung von PRs, um zu helfen. Bitte beschränke deine Pull-Anfragen auf eine einzige Lokalisierung, da Pull-Anfragen, die Inhalte in mehreren Lokalisierungen ändern, schwer zu überprüfen sein könnten. + +{{% /capture %}} + +{{% capture whatsnext %}} + +Sobald eine Lokalisierung die Anforderungen an den Arbeitsablauf und die Mindestausgabe erfüllt, wird SIG docs: + +- Die Sprachauswahl auf der Website aktivieren +- Die Verfügbarkeit der Lokalisierung über die Kanäle der [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF), einschließlich des [Kubernetes Blogs](https://kubernetes.io/blog/) veröffentlichen. + +{{% /capture %}} diff --git a/content/de/docs/tutorials/hello-minikube.md b/content/de/docs/tutorials/hello-minikube.md index 2e244c3f599a0..4d0bc7f0f631f 100644 --- a/content/de/docs/tutorials/hello-minikube.md +++ b/content/de/docs/tutorials/hello-minikube.md @@ -75,7 +75,7 @@ Deployments sind die empfohlene Methode zum Verwalten der Erstellung und Skalier Der Pod führt einen Container basierend auf dem bereitgestellten Docker-Image aus. 
```shell - kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node + kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 ``` 2. Anzeigen des Deployments: diff --git a/content/de/includes/user-guide-migration-notice.md b/content/de/includes/user-guide-migration-notice.md deleted file mode 100644 index 43460829ad995..0000000000000 --- a/content/de/includes/user-guide-migration-notice.md +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - -
-ACHTUNG
-
-Seit dem 14. März 2017 hat die Gruppe Kubernetes SIG-Docs-Maintainers mit der Migration des Benutzerhandbuch-Inhalts begonnen, wie bereits zuvor in der SIG Docs community angekündigt. Dies wurde in der kubernetes-sig-docs Gruppe und im kubernetes.slack.com #sig-docs Kanal bekanntgegeben.
-
-Die Benutzerhandbücher in diesem Abschnitt werden in den Themen Tutorials, Aufgaben und Konzepte umgestaltet. Alles, was verschoben wurde, wird an seinem vorherigen Standort mit einer Benachrichtigung sowie einem Link zu seinem neuen Standort versehen. Die Reorganisation implementiert ein neues Inhaltsverzeichnis und sollte die Auffindbarkeit und Lesbarkeit der Dokumentation für ein breiteres Publikum verbessern.
-
-Bei Fragen wenden Sie sich bitte an: kubernetes-sig-docs@googlegroups.com
diff --git a/content/en/blog/_posts/2019-03-21-a-guide-to-kubernetes-admission-controllers.md b/content/en/blog/_posts/2019-03-21-a-guide-to-kubernetes-admission-controllers.md index f3da77db5ba02..e5e5dfcc2877d 100644 --- a/content/en/blog/_posts/2019-03-21-a-guide-to-kubernetes-admission-controllers.md +++ b/content/en/blog/_posts/2019-03-21-a-guide-to-kubernetes-admission-controllers.md @@ -52,7 +52,7 @@ In this way, admission controllers and policy management help make sure that app To illustrate how admission controller webhooks can be leveraged to establish custom security policies, let’s consider an example that addresses one of the shortcomings of Kubernetes: a lot of its defaults are optimized for ease of use and reducing friction, sometimes at the expense of security. One of these settings is that containers are by default allowed to run as root (and, without further configuration and no `USER` directive in the Dockerfile, will also do so). Even though containers are isolated from the underlying host to a certain extent, running containers as root does increase the risk profile of your deployment— and should be avoided as one of many [security best practices](https://www.stackrox.com/post/2018/12/6-container-security-best-practices-you-should-be-following/). The [recently exposed runC vulnerability](https://www.stackrox.com/post/2019/02/the-runc-vulnerability-a-deep-dive-on-protecting-yourself/) ([CVE-2019-5736](https://nvd.nist.gov/vuln/detail/CVE-2019-5736)), for example, could be exploited only if the container ran as root. -You can use a custom mutating admission controller webhook to apply more secure defaults: unless explicitly requested, our webhook will ensure that pods run as a non-root user (we assign the user ID 1234 if no explicit assignment has been made). Note that this setup does not prevent you from deploying any workloads in your cluster, including those that legitimately require running as root. It only requires you to explicitly enable this risker mode of operation in the deployment configuration, while defaulting to non-root mode for all other workloads. +You can use a custom mutating admission controller webhook to apply more secure defaults: unless explicitly requested, our webhook will ensure that pods run as a non-root user (we assign the user ID 1234 if no explicit assignment has been made). Note that this setup does not prevent you from deploying any workloads in your cluster, including those that legitimately require running as root. It only requires you to explicitly enable this riskier mode of operation in the deployment configuration, while defaulting to non-root mode for all other workloads. The full code along with deployment instructions can be found in our accompanying [GitHub repository](https://github.com/stackrox/admission-controller-webhook-demo). Here, we will highlight a few of the more subtle aspects about how webhooks work. @@ -80,7 +80,7 @@ webhooks: resources: ["pods"] ``` -This configuration defines a `webhook webhook-server.webhook-demo.svc`, and instructs the Kubernetes API server to consult the service `webhook-server` in n`amespace webhook-demo` whenever a pod is created by making a HTTP POST request to the `/mutate` URL. For this configuration to work, several prerequisites have to be met. 
+This configuration defines a `webhook webhook-server.webhook-demo.svc`, and instructs the Kubernetes API server to consult the service `webhook-server` in `namespace webhook-demo` whenever a pod is created by making a HTTP POST request to the `/mutate` URL. For this configuration to work, several prerequisites have to be met. ## Webhook REST API diff --git a/content/en/blog/_posts/Kong-Ingress-Controller-and-Service-Mesh.md b/content/en/blog/_posts/2020-03-18-Kong-Ingress-Controller-and-Service-Mesh.md similarity index 100% rename from content/en/blog/_posts/Kong-Ingress-Controller-and-Service-Mesh.md rename to content/en/blog/_posts/2020-03-18-Kong-Ingress-Controller-and-Service-Mesh.md diff --git a/content/en/blog/_posts/2020-03-25-kubernetes-1.18-release-announcement.md b/content/en/blog/_posts/2020-03-25-kubernetes-1.18-release-announcement.md index 8ef59b1f96860..d4fb5dc7df1df 100644 --- a/content/en/blog/_posts/2020-03-25-kubernetes-1.18-release-announcement.md +++ b/content/en/blog/_posts/2020-03-25-kubernetes-1.18-release-announcement.md @@ -37,8 +37,7 @@ SIG-CLI was debating the need for a debug utility for quite some time already. W ### Introducing Windows CSI support alpha for Kubernetes -With the release of Kubernetes 1.18, an alpha version of CSI Proxy for Windows is getting released. CSI proxy enables non-privileged (pre-approved) containers to perform privileged storage operations on Windows. CSI drivers can now be supported in Windows by leveraging CSI proxy. - +The alpha version of CSI Proxy for Windows is being released with Kubernetes 1.18. CSI proxy enables CSI Drivers on Windows by allowing containers in Windows to perform privileged storage operations. ## Other Updates diff --git a/content/en/blog/_posts/2020-03-30-topology-manager-beta.md b/content/en/blog/_posts/2020-03-30-topology-manager-beta.md new file mode 100644 index 0000000000000..85c5cd81d54fa --- /dev/null +++ b/content/en/blog/_posts/2020-03-30-topology-manager-beta.md @@ -0,0 +1,503 @@ +--- +layout: blog +title: "Kubernetes Topology Manager Moves to Beta - Align Up!" +date: 2020-04-01 +slug: kubernetes-1-18-feature-topoloy-manager-beta +--- + +**Authors:** Kevin Klues (NVIDIA), Victor Pickard (Red Hat), Conor Nolan (Intel) + +This blog post describes the **TopologyManager**, a beta feature of Kubernetes in release 1.18. The **TopologyManager** feature enables NUMA alignment of CPUs and peripheral devices (such as SR-IOV VFs and GPUs), allowing your workload to run in an environment optimized for low-latency. + +Prior to the introduction of the **TopologyManager**, the CPU and Device Manager would make resource allocation decisions independent of each other. This could result in undesirable allocations on multi-socket systems, causing degraded performance on latency critical applications. With the introduction of the **TopologyManager**, we now have a way to avoid this. + +This blog post covers: + +1. A brief introduction to NUMA and why it is important +1. The policies available to end-users to ensure NUMA alignment of CPUs and devices +1. The internal details of how the **TopologyManager** works +1. Current limitations of the **TopologyManager** +1. Future directions of the **TopologyManager** + +## So, what is NUMA and why do I care? + +The term NUMA stands for Non-Uniform Memory Access. It is a technology available on multi-cpu systems that allows different CPUs to access different parts of memory at different speeds. 
Any memory directly connected to a CPU is considered "local" to that CPU and can be accessed very fast. Any memory not directly connected to a CPU is considered "non-local" and will have variable access times depending on how many interconnects must be passed through in order to reach it. On modern systems, the idea of having "local" vs. "non-local" memory can also be extended to peripheral devices such as NICs or GPUs. For high performance, CPUs and devices should be allocated such that they have access to the same local memory. + +All memory on a NUMA system is divided into a set of "NUMA nodes", with each node representing the local memory for a set of CPUs or devices. We talk about an individual CPU as being part of a NUMA node if its local memory is associated with that NUMA node. + +We talk about a peripheral device as being part of a NUMA node based on the shortest number of interconnects that must be passed through in order to reach it. + +For example, in Figure 1, CPUs 0-3 are said to be part of NUMA node 0, whereas CPUs 4-7 are part of NUMA node 1. Likewise GPU 0 and NIC 0 are said to be part of NUMA node 0 because they are attached to Socket 0, whose CPUs are all part of NUMA node 0. The same is true for GPU 1 and NIC 1 on NUMA node 1. + +

+ + +**Figure 1:** An example system with 2 NUMA nodes, 2 Sockets with 4 CPUs each, 2 GPUs, and 2 NICs. CPUs on Socket 0, GPU 0, and NIC 0 are all part of NUMA node 0. CPUs on Socket 1, GPU 1, and NIC 1 are all part of NUMA node 1. + + +Although the example above shows a 1-1 mapping of NUMA Node to Socket, this is not necessarily true in the general case. There may be multiple sockets on a single NUMA node, or individual CPUs of a single socket may be connected to different NUMA nodes. Moreover, emerging technologies such as Sub-NUMA Clustering ([available on recent intel CPUs](https://software.intel.com/en-us/articles/intel-xeon-processor-scalable-family-technical-overview)) allow single CPUs to be associated with multiple NUMA nodes so long as their memory access times to both nodes are the same (or have a negligible difference). + +The **TopologyManager** has been built to handle all of these scenarios. + +## Align Up! It's a TeaM Effort! + +As previously stated, the **TopologyManager** allows users to align their CPU and peripheral device allocations by NUMA node. There are several policies available for this: + +* **none:** this policy will not attempt to do any alignment of resources. It will act the same as if the **TopologyManager** were not present at all. This is the default policy. +* **best-effort:** with this policy, the **TopologyManager** will attempt to align allocations on NUMA nodes as best it can, but will always allow the pod to start even if some of the allocated resources are not aligned on the same NUMA node. +* **restricted:** this policy is the same as the **best-effort** policy, except it will fail pod admission if allocated resources cannot be aligned properly. Unlike with the **single-numa-node** policy, some allocations may come from multiple NUMA nodes if it is impossible to _ever_ satisfy the allocation request on a single NUMA node (e.g. 2 devices are requested and the only 2 devices on the system are on different NUMA nodes). +* **single-numa-node:** this policy is the most restrictive and will only allow a pod to be admitted if _all_ requested CPUs and devices can be allocated from exactly one NUMA node. + +It is important to note that the selected policy is applied to each container in a pod spec individually, rather than aligning resources across all containers together. + +Moreover, a single policy is applied to _all_ pods on a node via a global **kubelet** flag, rather than allowing users to select different policies on a pod-by-pod basis (or a container-by-container basis). We hope to relax this restriction in the future. + +The **kubelet** flag to set one of these policies can be seen below: + + +``` +--topology-manager-policy= + [none | best-effort | restricted | single-numa-node] +``` + + +Additionally, the **TopologyManager** is protected by a feature gate. This feature gate has been available since Kubernetes 1.16, but has only been enabled by default since 1.18. + +The feature gate can be enabled or disabled as follows (as described in more detail [here](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)): + + +``` +--feature-gates="...,TopologyManager=" +``` + + +In order to trigger alignment according to the selected policy, a user must request CPUs and peripheral devices in their pod spec, according to a certain set of requirements. + +For peripheral devices, this means requesting devices from the available resources provided by a device plugin (e.g. **intel.com/sriov**, **nvidia.com/gpu**, etc.). 
This will only work if the device plugin has been extended to integrate properly with the **TopologyManager**. Currently, the only plugins known to have this extension are the [Nvidia GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin/blob/5cb45d52afdf5798a40f8d0de049bce77f689865/nvidia.go#L74), and the [Intel SRIOV network device plugin](https://github.com/intel/sriov-network-device-plugin/blob/30e33f1ce2fc7b45721b6de8c8207e65dbf2d508/pkg/resources/pciNetDevice.go#L80). Details on how to extend a device plugin to integrate with the **TopologyManager** can be found [here](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager). + +For CPUs, this requires that the **CPUManager** has been configured with its **--static** policy enabled and that the pod is running in the Guaranteed QoS class (i.e. all CPU and memory **limits** are equal to their respective CPU and memory **requests**). CPUs must also be requested in whole number values (e.g. **1**, **2**, **1000m**, etc). Details on how to set the **CPUManager** policy can be found [here](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies). + +For example, assuming the **CPUManager** is running with its **--static** policy enabled and the device plugins for **gpu-vendor.com**, and **nic-vendor.com** have been extended to integrate with the **TopologyManager** properly, the pod spec below is sufficient to trigger the **TopologyManager** to run its selected policy: + +``` +spec: + containers: + - name: numa-aligned-container + image: alpine + resources: + limits: + cpu: 2 + memory: 200Mi + gpu-vendor.com/gpu: 1 + nic-vendor.com/nic: 1 +``` + +Following Figure 1 from the previous section, this would result in one of the following aligned allocations: + +``` +{cpu: {0, 1}, gpu: 0, nic: 0} +{cpu: {0, 2}, gpu: 0, nic: 0} +{cpu: {0, 3}, gpu: 0, nic: 0} +{cpu: {1, 2}, gpu: 0, nic: 0} +{cpu: {1, 3}, gpu: 0, nic: 0} +{cpu: {2, 3}, gpu: 0, nic: 0} + +{cpu: {4, 5}, gpu: 1, nic: 1} +{cpu: {4, 6}, gpu: 1, nic: 1} +{cpu: {4, 7}, gpu: 1, nic: 1} +{cpu: {5, 6}, gpu: 1, nic: 1} +{cpu: {5, 7}, gpu: 1, nic: 1} +{cpu: {6, 7}, gpu: 1, nic: 1} +``` + +And that’s it! Just follow this pattern to have the **TopologyManager** ensure NUMA alignment across containers that request topology-aware devices and exclusive CPUs. + +**NOTE:** if a pod is rejected by one of the **TopologyManager** policies, it will be placed in a **Terminated** state with a pod admission error and a reason of "**TopologyAffinityError**". Once a pod is in this state, the Kubernetes scheduler will not attempt to reschedule it. It is therefore recommended to use a [**Deployment**](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) with replicas to trigger a redeploy of the pod on such a failure. An [external control loop](https://kubernetes.io/docs/concepts/architecture/controller/) can also be implemented to trigger a redeployment of pods that have a **TopologyAffinityError**. + +## This is great, so how does it work under the hood? + +Pseudocode for the primary logic carried out by the **TopologyManager** can be seen below: + +``` +for container := range append(InitContainers, Containers...) 
{ + for provider := range HintProviders { + hints += provider.GetTopologyHints(container) + } + + bestHint := policy.Merge(hints) + + for provider := range HintProviders { + provider.Allocate(container, bestHint) + } +} +``` + +The following diagram summarizes the steps taken during this loop: + +

+ +The steps themselves are: + +1. Loop over all containers in a pod. +1. For each container, gather "**TopologyHints**" from a set of "**HintProviders**" for each topology-aware resource type requested by the container (e.g. **gpu-vendor.com/gpu**, **nic-vendor.com/nic**, **cpu**, etc.). +1. Using the selected policy, merge the gathered **TopologyHints** to find the "best" hint that aligns resource allocations across all resource types. +1. Loop back over the set of hint providers, instructing them to allocate the resources they control using the merged hint as a guide. +1. This loop runs at pod admission time and will fail to admit the pod if any of these steps fail or alignment cannot be satisfied according to the selected policy. Any resources allocated before the failure are cleaned up accordingly. + +The following sections go into more detail on the exact structure of **TopologyHints** and **HintProviders**, as well as some details on the merge strategies used by each policy. + +### TopologyHints + +A **TopologyHint** encodes a set of constraints from which a given resource request can be satisfied. At present, the only constraint we consider is NUMA alignment. It is defined as follows: + +``` +type TopologyHint struct { + NUMANodeAffinity bitmask.BitMask + Preferred bool +} +``` + +The **NUMANodeAffinity** field contains a bitmask of NUMA nodes where a resource request can be satisfied. For example, the possible masks on a system with 2 NUMA nodes include: + +``` +{00}, {01}, {10}, {11} +``` + +The **Preferred** field contains a boolean that encodes whether the given hint is "preferred" or not. With the **best-effort** policy, preferred hints will be given preference over non-preferred hints when generating a "best" hint. With the **restricted** and **single-numa-node** policies, non-preferred hints will be rejected. + +In general, **HintProviders** generate **TopologyHints** by looking at the set of currently available resources that can satisfy a resource request. More specifically, they generate one **TopologyHint** for every possible mask of NUMA nodes where that resource request can be satisfied. If a mask cannot satisfy the request, it is omitted. For example, a **HintProvider** might provide the following hints on a system with 2 NUMA nodes when being asked to allocate 2 resources. These hints encode that both resources could either come from a single NUMA node (either 0 or 1), or they could each come from different NUMA nodes (but we prefer for them to come from just one). + +``` +{01: True}, {10: True}, {11: False} +``` + +At present, all **HintProviders** set the **Preferred** field to **True** if and only if the **NUMANodeAffinity** encodes a _minimal_ set of NUMA nodes that can satisfy the resource request. Normally, this will only be **True** for **TopologyHints** with a single NUMA node set in their bitmask. However, it may also be **True** if the only way to _ever_ satisfy the resource request is to span multiple NUMA nodes (e.g. 2 devices are requested and the only 2 devices on the system are on different NUMA nodes): + +``` +{0011: True}, {0111: False}, {1011: False}, {1111: False} +``` + +**NOTE:** Setting of the **Preferred** field in this way is _not_ based on the set of currently available resources. It is based on the ability to physically allocate the number of requested resources on some minimal set of NUMA nodes. 
+ +In this way, it is possible for a **HintProvider** to return a list of hints with _all_ **Preferred** fields set to **False** if an actual preferred allocation cannot be satisfied until other containers release their resources. For example, consider the following scenario from the system in Figure 1: + +1. All but 2 CPUs are currently allocated to containers +1. The 2 remaining CPUs are on different NUMA nodes +1. A new container comes along asking for 2 CPUs + +In this case, the only generated hint would be **{11: False}** and not **{11: True}**. This happens because it _is_ possible to allocate 2 CPUs from the same NUMA node on this system (just not right now, given the current allocation state). The idea being that it is better to fail pod admission and retry the deployment when the minimal alignment can be satisfied than to allow a pod to be scheduled with sub-optimal alignment. + +### HintProviders + +A **HintProvider** is a component internal to the **kubelet** that coordinates aligned resource allocations with the **TopologyManager**. At present, the only **HintProviders** in Kubernetes are the **CPUManager** and the **DeviceManager**. We plan to add support for **HugePages** soon. + +As discussed previously, the **TopologyManager** both gathers **TopologyHints** from **HintProviders** as well as triggers aligned resource allocations on them using a merged "best" hint. As such, **HintProviders** implement the following interface: + +``` +type HintProvider interface { + GetTopologyHints(*v1.Pod, *v1.Container) map[string][]TopologyHint + Allocate(*v1.Pod, *v1.Container) error +} +``` + +Notice that the call to **GetTopologyHints()** returns a **map[string][]TopologyHint**. This allows a single **HintProvider** to provide hints for multiple resource types instead of just one. For example, the **DeviceManager** requires this in order to pass hints back for every resource type registered by its plugins. + +As **HintProviders** generate their hints, they only consider how alignment could be satisfied for _currently_ available resources on the system. Any resources already allocated to other containers are not considered. + +For example, consider the system in Figure 1, with the following two containers requesting resources from it: + + + + + + + + + + +
Container0 + Container1 +
+ +
+spec:
+    containers:
+    - name: numa-aligned-container0
+      image: alpine
+      resources:
+          limits:
+              cpu: 2
+              memory: 200Mi
+              gpu-vendor.com/gpu: 1
+              nic-vendor.com/nic: 1
+
+ +
+ +
+spec:
+    containers:
+    - name: numa-aligned-container1
+      image: alpine
+      resources:
+          limits:
+              cpu: 2
+              memory: 200Mi
+              gpu-vendor.com/gpu: 1
+              nic-vendor.com/nic: 1
+
+ +
+ +If **Container0** is the first container considered for allocation on the system, the following set of hints will be generated for the three topology-aware resource types in the spec. + +``` + cpu: {{01: True}, {10: True}, {11: False}} +gpu-vendor.com/gpu: {{01: True}, {10: True}} +nic-vendor.com/nic: {{01: True}, {10: True}} +``` + +With a resulting aligned allocation of: + +``` +{cpu: {0, 1}, gpu: 0, nic: 0} +``` + +

+ +

+ + +When considering **Container1** these resources are then presumed to be unavailable, and thus only the following set of hints will be generated: + +``` + cpu: {{01: True}, {10: True}, {11: False}} +gpu-vendor.com/gpu: {{10: True}} +nic-vendor.com/nic: {{10: True}} +``` + +With a resulting aligned allocation of: + + +``` +{cpu: {4, 5}, gpu: 1, nic: 1} +``` + +

+ +

+ + +**NOTE:** Unlike the pseudocode provided at the beginning of this section, the call to **Allocate()** does not actually take a parameter for the merged "best" hint directly. Instead, the **TopologyManager** implements the following **Store** interface that **HintProviders** can query to retrieve the hint generated for a particular container once it has been generated: + +``` +type Store interface { + GetAffinity(podUID string, containerName string) TopologyHint +} +``` + +Separating this out into its own API call allows one to access this hint outside of the pod admission loop. This is useful for debugging as well as for reporting generated hints in tools such as **kubectl**(not yet available). + +### Policy.Merge + +The merge strategy defined by a given policy dictates how it combines the set of **TopologyHints** generated by all **HintProviders** into a single **TopologyHint** that can be used to inform aligned resource allocations. + +The general merge strategy for all supported policies begins the same: + +1. Take the cross-product of **TopologyHints** generated for each resource type +1. For each entry in the cross-product, **bitwise-and** the NUMA affinities of each **TopologyHint** together. Set this as the NUMA affinity in a resulting "merged" hint. +1. If all of the hints in an entry have **Preferred** set to **True** , set **Preferred** to **True** in the resulting "merged" hint. +1. If even one of the hints in an entry has **Preferred** set to **False** , set **Preferred** to **False** in the resulting "merged" hint. Also set **Preferred** to **False** in the "merged" hint if its NUMA affinity contains all 0s. + +Following the example from the previous section with hints for **Container0** generated as: + +``` + cpu: {{01: True}, {10: True}, {11: False}} +gpu-vendor.com/gpu: {{01: True}, {10: True}} +nic-vendor.com/nic: {{01: True}, {10: True}} +``` + +The above algorithm results in the following set of cross-product entries and "merged" hints: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
cross-product entry +

+{cpu, gpu-vendor.com/gpu, nic-vendor.com/nic} +

+
"merged" hint +
+ {{01: True}, {01: True}, {01: True}} + {01: True} +
+ {{01: True}, {01: True}, {10: True}} + {00: False} +
+ {{01: True}, {10: True}, {01: True}} + {00: False} +
+ {{01: True}, {10: True}, {10: True}} + {00: False} +
+ +
+ {{10: True}, {01: True}, {01: True}} + {00: False} +
+ {{10: True}, {01: True}, {10: True}} + {00: False} +
+ {{10: True}, {10: True}, {01: True}} + {00: False} +
+ {{10: True}, {10: True}, {10: True}} + {10: True} +
+ +
+ {{11: False}, {01: True}, {01: True}} + {01: False} +
+ {{11: False}, {01: True}, {10: True}} + {00: False} +
+ {{11: False}, {10: True}, {01: True}} + {00: False} +
+ {{11: False}, {10: True}, {10: True}} + {10: False} +
+ + +Once this list of "merged" hints has been generated, it is the job of the specific **TopologyManager** policy in use to decide which one to consider as the "best" hint. + +In general, this involves: + +1. Sorting merged hints by their "narrowness". Narrowness is defined as the number of bits set in a hint’s NUMA affinity mask. The fewer bits set, the narrower the hint. For hints that have the same number of bits set in their NUMA affinity mask, the hint with the most low order bits set is considered narrower. +1. Sorting merged hints by their **Preferred** field. Hints that have **Preferred** set to **True** are considered more likely candidates than hints with **Preferred** set to **False**. +1. Selecting the narrowest hint with the best possible setting for **Preferred**. + +In the case of the **best-effort** policy this algorithm will always result in _some_ hint being selected as the "best" hint and the pod being admitted. This "best" hint is then made available to **HintProviders** so they can make their resource allocations based on it. + +However, in the case of the **restricted** and **single-numa-node** policies, any selected hint with **Preferred** set to **False** will be rejected immediately, causing pod admission to fail and no resources to be allocated. Moreover, the **single-numa-node** will also reject a selected hint that has more than one NUMA node set in its affinity mask. + +In the example above, the pod would be admitted by all policies with a hint of **{01: True}**. + +## Upcoming enhancements + +While the 1.18 release and promotion to Beta brings along some great enhancements and fixes, there are still a number of limitations, described [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#known-limitations). We are already underway working to address these limitations and more. + +This section walks through the set of enhancements we plan to implement for the **TopologyManager** in the near future. This list is not exhaustive, but it gives a good idea of the direction we are moving in. It is ordered by the timeframe in which we expect to see each enhancement completed. + +If you would like to get involved in helping with any of these enhancements, please [join the weekly Kubernetes SIG-node meetings](https://github.com/kubernetes/community/tree/master/sig-node) to learn more and become part of the community effort! + +### Supporting device-specific constraints + +Currently, NUMA affinity is the only constraint considered by the **TopologyManager** for resource alignment. Moreover, the only scalable extensions that can be made to a **TopologyHint** involve _node-level_ constraints, such as PCIe bus alignment across device types. It would be intractable to try and add any _device-specific_ constraints to this struct (e.g. the internal NVLINK topology among a set of GPU devices). + +As such, we propose an extension to the device plugin interface that will allow a plugin to state its topology-aware allocation preferences, without having to expose any device-specific topology information to the kubelet. In this way, the **TopologyManager** can be restricted to only deal with common node-level topology constraints, while still having a way of incorporating device-specific topology constraints into its allocation decisions. + +Details of this proposal can be found [here](https://github.com/kubernetes/enhancements/pull/1121), and should be available as soon as Kubernetes 1.19. 
+ +### NUMA alignment for hugepages + +As stated previously, the only two **HintProviders** currently available to the **TopologyManager** are the **CPUManager** and the **DeviceManager**. However, work is currently underway to add support for hugepages as well. With the completion of this work, the **TopologyManager** will finally be able to allocate memory, hugepages, CPUs and PCI devices all on the same NUMA node. + +A [KEP](https://github.com/kubernetes/enhancements/blob/253f1e5bdd121872d2d0f7020a5ac0365b229e30/keps/sig-node/20200203-memory-manager.md) for this work is currently under review, and a prototype is underway to get this feature implemented very soon. + + +### Scheduler awareness + +Currently, the **TopologyManager** acts as a Pod Admission controller. It is not directly involved in the scheduling decision of where a pod will be placed. Rather, when the kubernetes scheduler (or whatever scheduler is running in the deployment), places a pod on a node to run, the **TopologyManager** will decide if the pod should be "admitted" or "rejected". If the pod is rejected due to lack of available NUMA aligned resources, things can get a little interesting. This kubernetes [issue](https://github.com/kubernetes/kubernetes/issues/84869) highlights and discusses this situation well. + +So how do we go about addressing this limitation? We have the [Kubernetes Scheduling Framework](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md) to the rescue! This framework provides a new set of plugin APIs that integrate with the existing Kubernetes Scheduler and allow scheduling features, such as NUMA alignment, to be implemented without having to resort to other, perhaps less appealing alternatives, including writing your own scheduler, or even worse, creating a fork to add your own scheduler secret sauce. + +The details of how to implement these extensions for integration with the **TopologyManager** have not yet been worked out. We still need to answer questions like: + +* Will we require duplicated logic to determine device affinity in the **TopologyManager** and the scheduler? +* Do we need a new API to get **TopologyHints** from the **TopologyManager** to the scheduler plugin? + +Work on this feature should begin in the next couple of months, so stay tuned! + + +### Per-pod alignment policy + +As stated previously, a single policy is applied to _all_ pods on a node via a global **kubelet** flag, rather than allowing users to select different policies on a pod-by-pod basis (or a container-by-container basis). + +While we agree that this would be a great feature to have, there are quite a few hurdles that need to be overcome before it is achievable. The biggest hurdle being that this enhancement will require an API change to be able to express the desired alignment policy in either the Pod spec or its associated **[RuntimeClass](https://kubernetes.io/docs/concepts/containers/runtime-class/)**. + +We are only now starting to have serious discussions around this feature, and it is still a few releases away, at the best, from being available. + +## Conclusion + +With the promotion of the **TopologyManager** to Beta in 1.18, we encourage everyone to give it a try and look forward to any feedback you may have. Many fixes and enhancements have been worked on in the past several releases, greatly improving the functionality and reliability of the **TopologyManager** and its **HintProviders**. 
While there are still a number of limitations, we have a set of enhancements planned to address them, and look forward to providing you with a number of new features in upcoming releases. + +If you have ideas for additional enhancements or a desire for certain features, don’t hesitate to let us know. The team is always open to suggestions to enhance and improve the **TopologyManager**. + +We hope you have found this blog informative and useful! Let us know if you have any questions or comments. And, happy deploying…..Align Up! + + + \ No newline at end of file diff --git a/content/en/blog/_posts/2020-04-01-server-side-apply-beta2.md b/content/en/blog/_posts/2020-04-01-server-side-apply-beta2.md new file mode 100644 index 0000000000000..3aa81683e55db --- /dev/null +++ b/content/en/blog/_posts/2020-04-01-server-side-apply-beta2.md @@ -0,0 +1,51 @@ +--- +layout: blog +title: Kubernetes 1.18 Feature Server-side Apply Beta 2 +date: 2020-04-01 +slug: Kubernetes-1.18-Feature-Server-side-Apply-Beta-2 +--- + +**Authors:** Antoine Pelisse (Google) + +## What is Server-side Apply? +Server-side Apply is an important effort to migrate “kubectl apply” to the apiserver. It was started in 2018 by the Apply working group. + +The use of kubectl to declaratively apply resources has exposed the following challenges: + +- One needs to use the kubectl go code, or they have to shell out to kubectl. + +- Strategic merge-patch, the patch format used by kubectl, grew organically and was challenging to fix while maintaining compatibility with various api-server versions. + +- Some features are hard to implement directly on the client, for example, unions. + + +Server-side Apply is a new merging algorithm, as well as tracking of field ownership, running on the Kubernetes api-server. Server-side Apply enables new features like conflict detection, so the system knows when two actors are trying to edit the same field. + +## How does it work, what’s managedFields? +Server-side Apply works by keeping track of which actor of the system has changed each field of an object. It does so by diffing all updates to objects, and recording all the fields that have changed as well the time of the operation. All this information is stored in the managedFields in the metadata of objects. Since objects can have many fields, this field can be quite large. + +When someone applies, we can then use the information stored within managedFields to report relevant conflicts and help the merge algorithm to do the right thing. + +## Wasn’t it already Beta before 1.18? +Yes, Server-side Apply has been Beta since 1.16, but it didn’t track the owner for fields associated with objects that had not been applied. This means that most objects didn’t have the managedFields metadata stored, and conflicts for these objects cannot be resolved. With Kubernetes 1.18, all new objects will have the managedFields attached to them and provide accurate information on conflicts. + +## How do I use it? +The most common way to use this is through kubectl: `kubectl apply --server-side`. This is likely to show conflicts with other actors, including client-side apply. When that happens, conflicts can be forced by using the `--force-conflicts` flag, which will grab the ownership for the fields that have changed. + +## Current limitations +We have two important limitations right now, especially with sub-resources. The first is that if you apply with a status, the status is going to be ignored. 
We are still going to try and acquire the fields, which may lead to invalid conflicts. The other is that we do not update the managedFields on some sub-resources, including scale, so you may not see information about a horizontal pod autoscaler changing the number of replicas. + +## What’s next? +We are working hard to improve the experience of using server-side apply with kubectl, and we are trying to make it the default. As part of that, we want to improve the migration from client-side to server-side. + +## Can I help? +Of course! The working-group apply is available on slack #wg-apply, through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-wg-apply) and we also meet every other Tuesday at 9.30 PT on Zoom. We have lots of exciting features to build and can use all sorts of help. + +We would also like to use the opportunity to thank the hard work of all the contributors involved in making this new beta possible: + +* Daniel Smith +* Jenny Buckley +* Joe Betz +* Julian Modesto +* Kevin Wiesmüller +* Maria Ntalla diff --git a/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md b/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md new file mode 100644 index 0000000000000..6fdb291c5b139 --- /dev/null +++ b/content/en/blog/_posts/2020-04-02-Improvements-to-the-Ingress-API-in-Kubernetes-1.18.md @@ -0,0 +1,87 @@ +--- +layout: blog +title: Improvements to the Ingress API in Kubernetes 1.18 +date: 2020-04-02 +slug: Improvements-to-the-Ingress-API-in-Kubernetes-1.18 +--- + +**Authors:** Rob Scott (Google), Christopher M Luciano (IBM) + +The Ingress API in Kubernetes has enabled a large number of controllers to provide simple and powerful ways to manage inbound network traffic to Kubernetes workloads. In Kubernetes 1.18, we've made 3 significant additions to this API: + +* A new `pathType` field that can specify how Ingress paths should be matched. +* A new `IngressClass` resource that can specify how Ingresses should be implemented by controllers. +* Support for wildcards in hostnames. + +## Better Path Matching With Path Types +The new concept of a path type allows you to specify how a path should be matched. There are three supported types: + +* __ImplementationSpecific (default):__ With this path type, matching is up to the controller implementing the `IngressClass`. Implementations can treat this as a separate `pathType` or treat it identically to the `Prefix` or `Exact` path types. +* __Exact:__ Matches the URL path exactly and with case sensitivity. +* __Prefix:__ Matches based on a URL path prefix split by `/`. Matching is case sensitive and done on a path element by element basis. + +## Extended Configuration With Ingress Classes +The Ingress resource was designed with simplicity in mind, providing a simple set of fields that would be applicable in all use cases. Over time, as use cases evolved, implementations began to rely on a long list of custom annotations for further configuration. The new `IngressClass` resource provides a way to replace some of those annotations. + +Each `IngressClass` specifies which controller should implement Ingresses of the class and can reference a custom resource with additional parameters. 
+```yaml +apiVersion: "networking.k8s.io/v1beta1" +kind: "IngressClass" +metadata: + name: "external-lb" +spec: + controller: "example.com/ingress-controller" + parameters: + apiGroup: "k8s.example.com/v1alpha" + kind: "IngressParameters" + name: "external-lb" +``` + +### Specifying the Class of an Ingress +A new `ingressClassName` field has been added to the Ingress spec that is used to reference the `IngressClass` that should be used to implement this Ingress. + +### Deprecating the Ingress Class Annotation +Before the `IngressClass` resource was added in Kubernetes 1.18, a similar concept of Ingress class was often specified with a `kubernetes.io/ingress.class` annotation on the Ingress. Although this annotation was never formally defined, it was widely supported by Ingress controllers, and should now be considered formally deprecated. + +### Setting a Default IngressClass +It’s possible to mark a specific `IngressClass` as default in a cluster. Setting the +`ingressclass.kubernetes.io/is-default-class` annotation to true on an +IngressClass resource will ensure that new Ingresses without an `ingressClassName` specified will be assigned this default `IngressClass`. + +## Support for Hostname Wildcards +Many Ingress providers have supported wildcard hostname matching like `*.foo.com` matching `app1.foo.com`, but until now the spec assumed an exact FQDN match of the host. Hosts can now be precise matches (for example “`foo.bar.com`”) or a wildcard (for example “`*.foo.com`”). Precise matches require that the http host header matches the Host setting. Wildcard matches require the http host header is equal to the suffix of the wildcard rule. + +| Host | Host header | Match? | +| ----------- |-------------------| --------------------------------------------------| +| `*.foo.com` | `bar.foo.com` | Matches based on shared suffix | +| `*.foo.com` | `baz.bar.foo.com` | No match, wildcard only covers a single DNS label | +| `*.foo.com` | `foo.com` | No match, wildcard only covers a single DNS label | + +### Putting it All Together +These new Ingress features allow for much more configurability. Here’s an example of an Ingress that makes use of pathType, `ingressClassName`, and a hostname wildcard: + +```yaml +apiVersion: "networking.k8s.io/v1beta1" +kind: "Ingress" +metadata: + name: "example-ingress" +spec: + ingressClassName: "external-lb" + rules: + - host: "*.example.com" + http: + paths: + - path: "/example" + pathType: "Prefix" + backend: + serviceName: "example-service" + servicePort: 80 +``` + +### Ingress Controller Support +Since these features are new in Kubernetes 1.18, each Ingress controller implementation will need some time to develop support for these new features. Check the documentation for your preferred Ingress controllers to see when they will support this new functionality. + +## The Future of Ingress +The Ingress API is on pace to graduate from beta to a stable API in Kubernetes 1.19. It will continue to provide a simple way to manage inbound network traffic for Kubernetes workloads. This API has intentionally been kept simple and lightweight, but there has been a desire for greater configurability for more advanced use cases. + +Work is currently underway on a new highly configurable set of APIs that will provide an alternative to Ingress in the future. These APIs are being referred to as the new “Service APIs”. They are not intended to replace any existing APIs, but instead provide a more configurable alternative for complex use cases. 
For more information, check out the [Service APIs repo on GitHub](http://github.com/kubernetes-sigs/service-apis). diff --git a/content/en/blog/_posts/2020-04-03-Introducing-Windows-CSI-support-alpha-for-Kubernetes.md b/content/en/blog/_posts/2020-04-03-Introducing-Windows-CSI-support-alpha-for-Kubernetes.md new file mode 100644 index 0000000000000..619ca3c63423c --- /dev/null +++ b/content/en/blog/_posts/2020-04-03-Introducing-Windows-CSI-support-alpha-for-Kubernetes.md @@ -0,0 +1,84 @@ +--- +layout: blog +title: "Introducing Windows CSI support alpha for Kubernetes" +date: 2020-04-03 +slug: kubernetes-1-18-feature-windows-csi-support-alpha +--- +**Authors:** Authors: Deep Debroy [Docker], Jing Xu [Google], Krishnakumar R (KK) [Microsoft] + +The alpha version of [CSI Proxy][csi-proxy] for Windows is being released with Kubernetes 1.18. CSI proxy enables CSI Drivers on Windows by allowing containers in Windows to perform privileged storage operations. + +## Background + +Container Storage Interface (CSI) for Kubernetes went GA in the Kubernetes 1.13 release. CSI has become the standard for exposing block and file storage to containerized workloads on Container Orchestration systems (COs) like Kubernetes. It enables third-party storage providers to write and deploy plugins without the need to alter the core Kubernetes codebase. All new storage features will utilize CSI, therefore it is important to get CSI drivers to work on Windows. + +A CSI driver in Kubernetes has two main components: a controller plugin and a node plugin. The controller plugin generally does not need direct access to the host and can perform all its operations through the Kubernetes API and external control plane services (e.g. cloud storage service). The node plugin, however, requires direct access to the host for making block devices and/or file systems available to the Kubernetes kubelet. This was previously not possible for containers on Windows. With the release of [CSIProxy][csi-proxy], CSI drivers can now perform storage operations on the node. This inturn enables containerized CSI Drivers to run on Windows. + +## CSI support for Windows clusters + +CSI drivers (e.g. AzureDisk, GCE PD, etc.) are recommended to be deployed as containers. CSI driver’s node plugin typically runs on every worker node in the cluster (as a DaemonSet). Node plugin containers need to run with elevated privileges to perform storage related operations. However, Windows currently does not support privileged containers. To solve this problem, [CSIProxy][csi-proxy] makes it so that node plugins can now be deployed as unprivileged pods and then use the proxy to perform privileged storage operations on the node. + +## Node plugin interactions with CSIProxy + +The design of the CSI proxy is captured in this [KEP][kep]. The following diagram depicts the interactions with the CSI node plugin and CSI proxy. + +

+ +

+ +The CSI proxy runs as a process directly on the host on every windows node - very similar to kubelet. The CSI code in kubelet interacts with the [node driver registrar][nd-reg] component and the CSI node plugin. The node driver registrar is a community maintained CSI project which handles the registration of vendor specific node plugins. The kubelet initiates CSI gRPC calls like NodeStageVolume/NodePublishVolume on the node plugin as described in the figure. Node plugins interface with the CSIProxy process to perform local host OS storage related operations such as creation/enumeration of volumes, mounting/unmounting, etc. + +## CSI proxy architecture and implementation + +

+ +

+
+In the alpha release, CSIProxy supports the following API groups:
+1. Filesystem
+1. Disk
+1. Volume
+1. SMB
+
+CSI proxy exposes each API group via a Windows named pipe. The communication is performed using gRPC over these pipes. The client library from the CSI proxy project uses these pipes to interact with the CSI proxy APIs. For example, the filesystem APIs are exposed via a pipe like `\\.\pipe\csi-proxy-filesystem-v1alpha1`, volume APIs under `\\.\pipe\csi-proxy-volume-v1alpha1`, and so on.
+
+From each API group service, the calls are routed to the host API layer. The host API layer calls into the host Windows OS through either PowerShell or Go standard library calls. For example, when the filesystem API [Rmdir][rmdir] is called, the API group service decodes the gRPC structure [RmdirRequest][rmdir-req], finds the directory to be removed, and calls into the host API layer. This results in a call to [os.Remove][os-rem], a Go standard library call, which performs the remove operation.
+
+## Control flow details
+
+The following figure uses the CSI call NodeStageVolume as an example to explain the interaction between the kubelet, the CSI node plugin, and CSI proxy when provisioning a fresh volume. After the node plugin receives a CSI RPC call, it makes a few calls to CSI proxy accordingly. As a result of the NodeStageVolume call, the required disk is first identified using one of the Disk API calls: ListDiskLocations (in the AzureDisk driver) or GetDiskNumberByName (in the GCE PD driver). If the disk is not partitioned, PartitionDisk (Disk API group) is called. Subsequently, Volume API calls such as ListVolumesOnDisk, FormatVolume, and MountVolume perform the rest of the required operations. Similar operations are performed for NodeUnstageVolume, NodePublishVolume, NodeUnpublishVolume, etc.
+

+ +

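As a rough illustration of the sequence above, the following sketch shows how a node plugin's NodeStageVolume handling might chain these calls together. The `diskClient` and `volumeClient` interfaces are simplified stand-ins defined only for this example; the real csi-proxy client library exposes gRPC request/response structs with different signatures.

```go
package csisketch

import "fmt"

// Simplified stand-ins for the csi-proxy client, named after the API
// calls described above. Illustrative only.
type diskClient interface {
	GetDiskNumberByName(name string) (int, error)
	PartitionDisk(diskNumber int) error
}

type volumeClient interface {
	ListVolumesOnDisk(diskNumber int) ([]string, error)
	FormatVolume(volumeID string) error
	MountVolume(volumeID, path string) error
}

// nodeStageVolume sketches the flow a Windows CSI node plugin might
// follow when the kubelet issues NodeStageVolume for a fresh disk.
func nodeStageVolume(disks diskClient, volumes volumeClient, diskName, stagingPath string) error {
	// 1. Identify the required disk.
	diskNumber, err := disks.GetDiskNumberByName(diskName)
	if err != nil {
		return err
	}

	// 2. Partition the disk if needed (a real plugin would check
	//    whether it is already partitioned first).
	if err := disks.PartitionDisk(diskNumber); err != nil {
		return err
	}

	// 3. Find the volume on the disk, format it, and mount it at the
	//    staging path supplied by the kubelet.
	vols, err := volumes.ListVolumesOnDisk(diskNumber)
	if err != nil {
		return err
	}
	if len(vols) == 0 {
		return fmt.Errorf("no volumes found on disk %d", diskNumber)
	}
	if err := volumes.FormatVolume(vols[0]); err != nil {
		return err
	}
	return volumes.MountVolume(vols[0], stagingPath)
}
```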
+ +## Current support + +CSI proxy is now available as alpha. You can find more details on the [CSIProxy][csi-proxy] GitHub repository. There are currently two cloud providers that provide alpha support for CSI drivers on Windows: Azure and GCE. + +## Future plans + +One key area of focus in beta is going to be Windows based build and CI/CD setup to improve the stability and quality of the code base. Another area is using Go based calls directly instead of Powershell commandlets to improve performance. Enhancing debuggability and adding more tests are other areas which the team will be looking into. + +## How to get involved? + +This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together. Those interested in getting involved with the design and development of CSI Proxy, or any part of the Kubernetes Storage system, may join the [Kubernetes Storage Special Interest Group][sig] (SIG). We’re rapidly growing and always welcome new contributors. + +For those interested in more details, the [CSIProxy][csi-proxy] GitHub repository is a good place to start. In addition, the [#csi-windows][slack] channel on kubernetes slack is available for discussions specific to the CSI on Windows. + +## Acknowledgments + +We would like to thank Michelle Au for guiding us throughout this journey to alpha. We would like to thank Jean Rougé for contributions during the initial CSI proxy effort. We would like to thank Saad Ali for all the guidance with respect to the project and review/feedback on a draft of this blog. We would like to thank Patrick Lang and Mark Rossetti for helping us with Windows specific questions and details. Special thanks to Andy Zhang for reviews and guidance with respect to Azuredisk and Azurefile work. A big thank you to Paul Burt and Karen Chu for the review and suggestions on improving this blog post. + +Last but not the least, we would like to thank the broader Kubernetes community who contributed at every step of the project. + + + +[csi-proxy]: https://github.com/kubernetes-csi/csi-proxy +[kep]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-windows/20190714-windows-csi-support.md +[nd-reg]: https://kubernetes-csi.github.io/docs/node-driver-registrar.html +[rmdir]: https://github.com/kubernetes-csi/csi-proxy/blob/master/client/api/filesystem/v1alpha1/api.proto +[rmdir-req]: https://github.com/kubernetes-csi/csi-proxy/blob/master/client/api/filesystem/v1alpha1/api.pb.go +[os-rem]: https://github.com/kubernetes-csi/csi-proxy/blob/master/internal/os/filesystem/api.go +[sig]: https://github.com/kubernetes/community/tree/master/sig-storage +[slack]: https://kubernetes.slack.com/messages/csi-windows \ No newline at end of file diff --git a/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md b/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md new file mode 100644 index 0000000000000..c4c983c5436db --- /dev/null +++ b/content/en/blog/_posts/2020-04-06-API-Priority-and-Fairness-Alpha.md @@ -0,0 +1,68 @@ +--- +layout: blog +title: "API Priority and Fairness Alpha" +date: 2020-04-06 +slug: kubernetes-1-18-feature-api-priority-and-fairness-alpha +--- + +**Authors:** Min Kim (Ant Financial), Mike Spreitzer (IBM), Daniel Smith (Google) + +This blog describes “API Priority And Fairness”, a new alpha feature in Kubernetes 1.18. API Priority And Fairness permits cluster administrators to divide the concurrency of the control plane into different weighted priority levels. 
Every request arriving at a kube-apiserver will be categorized into one of the priority levels and get its fair share of the control plane’s throughput. + +## What problem does this solve? +Today the apiserver has a simple mechanism for protecting itself against CPU and memory overloads: max-in-flight limits for mutating and for readonly requests. Apart from the distinction between mutating and readonly, no other distinctions are made among requests; consequently, there can be undesirable scenarios where one subset of the requests crowds out other requests. + +In short, it is far too easy for Kubernetes workloads to accidentally DoS the apiservers, causing other important traffic--like system controllers or leader elections---to fail intermittently. In the worst cases, a few broken nodes or controllers can push a busy cluster over the edge, turning a local problem into a control plane outage. + +## How do we solve the problem? +The new feature “API Priority and Fairness” is about generalizing the existing max-in-flight request handler in each apiserver, to make the behavior more intelligent and configurable. The overall approach is as follows. + +1. Each request is matched by a _Flow Schema_. The Flow Schema states the Priority Level for requests that match it, and assigns a “flow identifier” to these requests. Flow identifiers are how the system determines whether requests are from the same source or not. +2. Priority Levels may be configured to behave in several ways. Each Priority Level gets its own isolated concurrency pool. Priority levels also introduce the concept of queuing requests that cannot be serviced immediately. +3. To prevent any one user or namespace from monopolizing a Priority Level, they may be configured to have multiple queues. [“Shuffle Sharding”](https://aws.amazon.com/builders-library/workload-isolation-using-shuffle-sharding/#What_is_shuffle_sharding.3F) is used to assign each flow of requests to a subset of the queues. +4. Finally, when there is capacity to service a request, a [“Fair Queuing”](https://en.wikipedia.org/wiki/Fair_queuing) algorithm is used to select the next request. Within each priority level the queues compete with even fairness. + +Early results have been very promising! Take a look at this [analysis](https://github.com/kubernetes/kubernetes/pull/88177#issuecomment-588945806). + +## How do I try this out? +You are required to prepare the following things in order to try out the feature: + +* Download and install a kubectl greater than v1.18.0 version +* Enabling the new API groups with the command line flag `--runtime-config="flowcontrol.apiserver.k8s.io/v1alpha1=true"` on the kube-apiservers +* Switch on the feature gate with the command line flag `--feature-gates=APIPriorityAndFairness=true` on the kube-apiservers + +After successfully starting your kube-apiservers, you will see a few default FlowSchema and PriorityLevelConfiguration resources in the cluster. These default configurations are designed for a general protection and traffic management for your cluster. +You can examine and customize the default configuration by running the usual tools, e.g.: + +* `kubectl get flowschemas` +* `kubectl get prioritylevelconfigurations` + + +## How does this work under the hood? +Upon arrival at the handler, a request is assigned to exactly one priority level and exactly one flow within that priority level. 
Hence understanding how FlowSchema and PriorityLevelConfiguration works will be helping you manage the request traffic going through your kube-apiservers. + +* FlowSchema: FlowSchema will identify a PriorityLevelConfiguration object and the way to compute the request’s “flow identifier”. Currently we support matching requests according to: the identity making the request, the verb, and the target object. The identity can match in terms of: a username, a user group name, or a ServiceAccount. And as for the target objects, we can match by apiGroup, resource[/subresource], and namespace. + * The flow identifier is used for shuffle sharding, so it’s important that requests have the same flow identifier if they are from the same source! We like to consider scenarios with “elephants” (which send many/heavy requests) vs “mice” (which send few/light requests): it is important to make sure the elephant’s requests all get the same flow identifier, otherwise they will look like many different mice to the system! + * See the API Documentation [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io)! + +* PriorityLevelConfiguration: Defines a priority level. + * For apiserver self requests, and any reentrant traffic (e.g., admission webhooks which themselves make API requests), a Priority Level can be marked “exempt”, which means that no queueing or limiting of any sort is done. This is to prevent priority inversions. + * Each non-exempt Priority Level is configured with a number of "concurrency shares" and gets an isolated pool of concurrency to use. Requests of that Priority Level run in that pool when it is not full, never anywhere else. Each apiserver is configured with a total concurrency limit (taken to be the sum of the old limits on mutating and readonly requests), and this is then divided among the Priority Levels in proportion to their concurrency shares. + * A non-exempt Priority Level may select a number of queues and a "hand size" to use for the shuffle sharding. Shuffle sharding maps flows to queues in a way that is better than consistent hashing. A given flow has access to a small collection of queues, and for each incoming request the shortest queue is chosen. When a Priority Level has queues, it also sets a limit on queue length. There is also a limit placed on how long a request can wait in its queue; this is a fixed fraction of the apiserver's request timeout. A request that cannot be executed and cannot be queued (any longer) is rejected. + * Alternatively, a non-exempt Priority Level may select immediate rejection instead of waiting in a queue. + * See the [API documentation](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io) for this feature. + +## What’s missing? When will there be a beta? +We’re already planning a few enhancements based on alpha and there will be more as users send feedback to our community. Here’s a list of them: + +* Traffic management for WATCH and EXEC requests +* Adjusting and improving the default set of FlowSchema/PriorityLevelConfiguration +* Enhancing observability on how this feature works +* Join the discussion [here](https://github.com/kubernetes/enhancements/pull/1632) + +Possibly treat LIST requests differently depending on an estimate of how big their result will be. + +## How can I get involved? +As always! 
Reach us on slack [#sig-api-machinery](https://kubernetes.slack.com/messages/sig-api-machinery), or through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery). We have lots of exciting features to build and can use all sorts of help. + +Many thanks to the contributors that have gotten this feature this far: Aaron Prindle, Daniel Smith, Jonathan Tomer, Mike Spreitzer, Min Kim, Bruce Ma, Yu Liao, Mengyi Zhou! diff --git a/content/en/blog/_posts/2020-04-21-Cluster-API-v1alpha3-Delivers-New-Features-and-an-Improved-User-Experience.md b/content/en/blog/_posts/2020-04-21-Cluster-API-v1alpha3-Delivers-New-Features-and-an-Improved-User-Experience.md new file mode 100644 index 0000000000000..7b0f935145c9c --- /dev/null +++ b/content/en/blog/_posts/2020-04-21-Cluster-API-v1alpha3-Delivers-New-Features-and-an-Improved-User-Experience.md @@ -0,0 +1,222 @@ +--- +layout: blog +title: "Cluster API v1alpha3 Delivers New Features and an Improved User Experience" +date: 2020-04-21 +slug: cluster-api-v1alpha3-delivers-new-features-and-an-improved-user-experience +--- + +**Author:** Daniel Lipovetsky (D2IQ) + +Cluster API Logo: Turtles All The Way Down + +The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster. + +Following the v1alpha2 release in October 2019, many members of the Cluster API community met in San Francisco, California, to plan the next release. The project had just gone through a major transformation, delivering a new architecture that promised to make the project easier for users to adopt, and faster for the community to build. Over the course of those two days, we found our common goals: To implement the features critical to managing production clusters, to make its user experience more intuitive, and to make it a joy to develop. + +The v1alpha3 release of Cluster API brings significant features for anyone running Kubernetes in production and at scale. Among the highlights: + +* [Declarative Control Plane Management](#declarative-control-plane-management) +* [Support for Distributing Control Plane Nodes Across Failure Domains To Reduce Risk](#distributing-control-plane-nodes-to-reduce-risk) +* [Automated Replacement of Unhealthy Nodes](#automated-replacement-of-unhealthy-nodes) +* [Support for Infrastructure-Managed Node Groups](#infrastructure-managed-node-groups) + +For anyone who wants to understand the API, or prizes a simple, but powerful, command-line interface, the new release brings: + +* [Redesigned clusterctl, a command-line tool (and go library) for installing and managing the lifecycle of Cluster API](#clusterctl) +* [Extensive and up-to-date documentation in The Cluster API Book](#the-cluster-api-book) + +Finally, for anyone extending the Cluster API for their custom infrastructure or software needs: + +* [New End-to-End (e2e) Test Framework](#end-to-end-test-framework) +* [Documentation for integrating Cluster API into your cluster lifecycle stack](#provider-implementer-s-guide) + +All this was possible thanks to the hard work of many contributors. 
+ +## Declarative Control Plane Management +_Special thanks to [Jason DeTiberus](https://github.com/detiber/), [Naadir Jeewa](https://github.com/randomvariable), and [Chuck Ha](https://github.com/chuckha)_ + +The Kubeadm-based Control Plane (KCP) provides a declarative API to deploy and scale the Kubernetes control plane, including etcd. This is the feature many Cluster API users have been waiting for! Until now, to deploy and scale up the control plane, users had to create specially-crafted Machine resources. To scale down the control plane, they had to manually remove members from the etcd cluster. KCP automates deployment, scaling, and upgrades. + +> **What is the Kubernetes Control Plane?** +> The Kubernetes control plane is, at its core, kube-apiserver and etcd. If either of these are unavailable, no API requests can be handled. This impacts not only core Kubernetes APIs, but APIs implemented with CRDs. Other components, like kube-scheduler and kube-controller-manager, are also important, but do not have the same impact on availability. +> +> The control plane was important in the beginning because it scheduled workloads. However, some workloads could continue to run during a control plane outage. Today, workloads depend on operators, service meshes, and API gateways, which all use the control plane as a platform. Therefore, the control plane's availability is more important than ever. +> +> Managing the control plane is one of the most complex parts of cluster operation. Because the typical control plane includes etcd, it is stateful, and operations must be done in the correct sequence. Control plane replicas can and do fail, and maintaining control plane availability means being able to replace failed nodes. +> +> The control plane can suffer a complete outage (e.g. permanent loss of quorum in etcd), and recovery (along with regular backups) is sometimes the only feasible option. +> +> For more details, read about [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) in the Kubernetes documentation. + +Here's an example of a 3-replica control plane for the Cluster API Docker Infrastructure, which the project maintains for testing and development. For brevity, other required resources, like Cluster, and Infrastructure Template, referenced by its name and namespace, are not shown. + +```yaml +apiVersion: controlplane.cluster.x-k8s.io/v1alpha3 +kind: KubeadmControlPlane +metadata: + name: example +spec: + infrastructureTemplate: + apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3 + kind: DockerMachineTemplate + name: example + namespace: default + kubeadmConfigSpec: + clusterConfiguration: + replicas: 3 + version: 1.16.3 +``` + +Deploy this control plane with kubectl: +```shell +kubectl apply -f example-docker-control-plane.yaml +``` + +Scale the control plane the same way you scale other Kubernetes resources: +```shell +kubectl scale kubeadmcontrolplane example --replicas=5 +``` +``` +kubeadmcontrolplane.controlplane.cluster.x-k8s.io/example scaled +``` + +Upgrade the control plane to a newer patch of the Kubernetes release: +```shell +kubectl patch kubeadmcontrolplane example --type=json -p '[{"op": "replace", "path": "/spec/version", "value": "1.16.4"}]' +``` + +> **Number of Control Plane Replicas** +> By default, KCP is configured to manage etcd, and requires an odd number of replicas. If KCP is configured to not manage etcd, an odd number is recommended, but not required. An odd number of replicas ensures optimal etcd configuration. 
To learn why your etcd cluster should have an odd number of members, see the [etcd FAQ](https://etcd.io/docs/v3.4.0/faq/#why-an-odd-number-of-cluster-members). + +Because it is a core Cluster API component, KCP can be used with any v1alpha3-compatible Infrastructure Provider that provides a fixed control plane endpoint, i.e., a load balancer or virtual IP. This endpoint enables requests to reach multiple control plane replicas. + +> **What is an Infrastructure Provider?** +> A source of computational resources (e.g. machines, networking, etc.). The community maintains providers for AWS, Azure, Google Cloud, and VMWare. For details, see the [list of providers](https://cluster-api.sigs.k8s.io/reference/providers.html) in the Cluster API Book. + +## Distributing Control Plane Nodes To Reduce Risk +_Special thanks to [Vince Prignano](https://github.com/vincepri/), and [Chuck Ha](https://github.com/chuckha)_ + +Cluster API users can now deploy nodes in different failure domains, reducing the risk of a cluster failing due to a domain outage. This is especially important for the control plane: If nodes in one domain fail, the cluster can continue to operate as long as the control plane is available to nodes in other domains. + +> **What is a Failure Domain?** +> A failure domain is a way to group the resources that would be made unavailable by some failure. For example, in many public clouds, an "availability zone" is the default failure domain. A zone corresponds to a data center. So, if a specific data center is brought down by a power outage or natural disaster, all resources in that zone become unavailable. If you run Kubernetes on your own hardware, your failure domain might be a rack, a network switch, or power distribution unit. + +The Kubeadm-based ControlPlane distributes nodes across failure domains. To minimize the chance of losing multiple nodes in the event of a domain outage, it tries to distribute them evenly: it deploys a new node in the failure domain with the fewest existing nodes, and it removes an existing node in the failure domain with the most existing nodes. + +MachineDeployments and MachineSets do not distribute nodes across failure domains. To deploy your worker nodes across multiple failure domains, create a MachineDeployment or MachineSet for each failure domain. + +The Failure Domain API works on any infrastructure. That's because every Infrastructure Provider maps failure domains in its own way. The API is optional, so if your infrastructure is not complex enough to need failure domains, you do not need to support it. This example is for the Cluster API Docker Infrastructure Provider. Note that two of the domains are marked as suitable for control plane nodes, while a third is not. The Kubeadm-based ControlPlane will only deploy nodes to domains marked suitable. + +```yaml +apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3 +kind: DockerCluster +metadata: + name: example +spec: + controlPlaneEndpoint: + host: 172.17.0.4 + port: 6443 + failureDomains: + domain-one: + controlPlane: true + domain-two: + controlPlane: true + domain-three: + controlPlane: false +``` + +The [AWS Infrastructure Provider](https://github.com/kubernetes-sigs/cluster-api-provider-aws) (CAPA), maintained by the Cluster API project, maps failure domains to AWS Availability Zones. Using CAPA, you can deploy a cluster across multiple Availability Zones. First, define subnets for multiple Availability Zones. The CAPA controller will define a failure domain for each Availability Zone. 
Deploy the control plane with the KubeadmControlPlane: it will distribute replicas across the failure domains. Finally, create a separate MachineDeployment for each failure domain. + +## Automated Replacement of Unhealthy Nodes +_Special thanks to [Alberto García Lamela](https://github.com/enxebre), and [Joel Speed](http://github.com/joelspeed)_ + +There are many reasons why a node might be unhealthy. The kubelet process may stop. The container runtime might have a bug. The kernel might have a memory leak. The disk may run out of space. CPU, disk, or memory hardware may fail. A power outage may happen. Failures like these are especially common in larger clusters. + +Kubernetes is designed to tolerate them, and to help your applications tolerate them as well. Nevertheless, only a finite number of nodes can be unhealthy before the cluster runs out of resources, and Pods are evicted or not scheduled in the first place. Unhealthy nodes should be repaired or replaced at the earliest opportunity. + +The Cluster API now includes a MachineHealthCheck resource, and a controller that monitors node health. When it detects an unhealthy node, it removes it. (Another Cluster API controller detects the node has been removed and replaces it.) You can configure the controller to suit your needs. You can configure how long to wait before removing the node. You can also set a threshold for the number of unhealthy nodes. When the threshold is reached, no more nodes are removed. The wait can be used to tolerate short-lived outages, and the threshold to prevent too many nodes from being replaced at the same time. + +The controller will remove only nodes managed by a Cluster API MachineSet. The controller does not remove control plane nodes, whether managed by the Kubeadm-based Control Plane, or by the user, as in v1alpha2. For more, see [Limits and Caveats of a MachineHealthCheck](https://cluster-api.sigs.k8s.io/tasks/healthcheck.html#limitations-and-caveats-of-a-machinehealthcheck). + +Here is an example of a MachineHealthCheck. For more details, see [Configure a MachineHealthCheck](https://cluster-api.sigs.k8s.io/tasks/healthcheck.html) in the Cluster API book. + +```yaml +apiVersion: cluster.x-k8s.io/v1alpha3 +kind: MachineHealthCheck +metadata: + name: example-node-unhealthy-5m +spec: + clusterName: example + maxUnhealthy: 33% + nodeStartupTimeout: 10m + selector: + matchLabels: + nodepool: nodepool-0 + unhealthyConditions: + - type: Ready + status: Unknown + timeout: 300s + - type: Ready + status: "False" + timeout: 300s +``` + +## Infrastructure-Managed Node Groups +_Special thanks to [Juan-Lee Pang](https://github.com/juan-lee) and [Cecile Robert-Michon](https://github.com/CecileRobertMichon)_ + +If you run large clusters, you need to create and destroy hundreds of nodes, sometimes in minutes. Although public clouds make it possible to work with large numbers of nodes, having to make a separate API request to create or delete every node may scale poorly. For example, API requests may have to be delayed to stay within rate limits. + +Some public clouds offer APIs to manage groups of nodes as one single entity. For example, AWS has AutoScaling Groups, Azure has Virtual Machine Scale Sets, and GCP has Managed Instance Groups. With this release of Cluster API, Infrastructure Providers can add support for these APIs, and users can deploy groups of Cluster API Machines by using the MachinePool Resource. 
For more information, see the [proposal](https://github.com/kubernetes-sigs/cluster-api/blob/bf51a2502f9007b531f6a9a2c1a4eae1586fb8ca/docs/proposals/20190919-machinepool-api.md) in the Cluster API repository. + +> **Experimental Feature** +> The MachinePool API is an experimental feature that is not enabled by default. Users are encouraged to try it and report on how well it meets their needs. + +## The Cluster API User Experience, Reimagined + +### clusterctl +_Special thanks to [Fabrizio Pandini](https://github.com/fabriziopandini)_ + +If you are new to Cluster API, your first experience will probably be with the project's command-line tool, clusterctl. And with the new Cluster API release, it has been re-designed to be more pleasing to use than before. The tool is all you need to deploy your first [workload cluster](https://cluster-api.sigs.k8s.io/reference/glossary.html?highlight=pool#workload-cluster) in just a few steps. + +First, use `clusterctl init` to [fetch the configuration](https://cluster-api.sigs.k8s.io/clusterctl/commands/init.html) for your Infrastructure and Bootstrap Providers and deploy all of the components that make up the Cluster API. Second, use `clusterctl config cluster` to [create the workload cluster manifest](https://cluster-api.sigs.k8s.io/clusterctl/commands/config-cluster.html). This manifest is just a collection of Kubernetes objects. To create the workload cluster, just `kubectl apply` the manifest. Don't be surprised if this workflow looks familiar: Deploying a workload cluster with Cluster API is just like deploying an application workload with Kubernetes! + +Clusterctl also helps with the "day 2" operations. Use `clusterctl move` to [migrate Cluster API custom resources](https://cluster-api.sigs.k8s.io/clusterctl/commands/move.html), such as Clusters, and Machines, from one [Management Cluster](https://cluster-api.sigs.k8s.io/reference/glossary.html#management-cluster) to another. This step--also known as a [pivot](https://cluster-api.sigs.k8s.io/reference/glossary.html#pivot)--is necessary to create a workload cluster that manages itself with Cluster API. Finally, use `clusterctl upgrade` to [upgrade all of the installed components](https://cluster-api.sigs.k8s.io/clusterctl/commands/upgrade.html) when a new Cluster API release becomes available. + +One more thing! Clusterctl is not only a command-line tool. It is also a Go library! Think of the library as an integration point for projects that build on top of Cluster API. All of clusterctl's command-line functionality is available in the library, making it easy to integrate into your stack. To get started with the library, please read its [documentation](https://pkg.go.dev/sigs.k8s.io/cluster-api@v0.3.1/cmd/clusterctl/client?tab=doc). + +### The Cluster API Book +_Thanks to many contributors!_ + +The [project's documentation](https://cluster-api.sigs.k8s.io/) is extensive. New users should get some background on the [architecture](https://cluster-api.sigs.k8s.io/user/concepts.html), and then create a cluster of their own with the [Quick Start](https://cluster-api.sigs.k8s.io/user/quick-start.html). The clusterctl tool has its own [reference](https://cluster-api.sigs.k8s.io/clusterctl/overview.html). The [Developer Guide](https://cluster-api.sigs.k8s.io/developer/guide.html) has plenty of information for anyone interested in contributing to the project. + +Above and beyond the content itself, the project's documentation site is a pleasure to use. 
It is searchable, has an outline, and even supports different color themes. If you think the site looks a lot like the documentation for a different community project, [Kubebuilder](https://book.kubebuilder.io/), that is no coincidence! Many thanks to the Kubebuilder authors for creating a great example of documentation. And many thanks to the [mdBook](https://github.com/rust-lang/mdBook) authors for creating a great tool for building documentation. + +## Integrate & Customize + +### End-to-End Test Framework +_Special thanks to [Chuck Ha](https://github.com/chuckha)_ + +The Cluster API project is designed to be extensible. For example, anyone can develop their own Infrastructure and Bootstrap Providers. However, it's important that Providers work in a uniform way. And, because the project is still evolving, it takes work to make sure that Providers are up-to-date with new releases of the core. + +The End-to-End Test Framework provides a set of standard tests for developers to verify that their Providers integrate correctly with the current release of Cluster API, and to help identify any regressions that happen after a new release of Cluster API or of the Provider. + +For more details on the Framework, see [Testing](https://cluster-api.sigs.k8s.io/developer/testing.html?highlight=e2e#running-the-end-to-end-tests) in the Cluster API Book, and the [README](https://github.com/kubernetes-sigs/cluster-api/tree/master/test/framework) in the repository. + +### Provider Implementer's Guide +_Thanks to many contributors!_ + +The community maintains [Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers.html) for many popular infrastructures. However, if you want to build your own Infrastructure or Bootstrap Provider, the [Provider Implementer's Guide](https://cluster-api.sigs.k8s.io/developer/providers/implementers-guide/overview.html) explains the entire process, from creating a git repository, to creating CustomResourceDefinitions for your Providers, to designing, implementing, and testing the controllers. + +> **Under Active Development** +> The Provider Implementer's Guide is actively under development, and may not yet reflect all of the changes in the v1alpha3 release. + +## Join Us! + +The Cluster API project is very active and covers many areas of interest. If you are an infrastructure expert, you can contribute to one of the Infrastructure Providers. If you like building controllers, you will find opportunities to innovate. If you're curious about testing distributed systems, you can help develop the project's end-to-end test framework. Whatever your interests and background, you can make a real impact on the project. + +Come introduce yourself to the community at our weekly meeting, where we dedicate a block of time for a Q&A session. You can also find maintainers and users on the Kubernetes Slack, and in the Kubernetes forum. Please check out the links below. We look forward to seeing you!
+ +* Chat with us on the Kubernetes [Slack](http://slack.k8s.io/): [#cluster-api](https://kubernetes.slack.com/archives/C8TSNPY4T) +* Join the [sig-cluster-lifecycle](https://groups.google.com/forum/) Google Group to receive calendar invites and gain access to documents +* Join our [Zoom meeting](https://zoom.us/j/861487554), every Wednesday at 10:00 Pacific Time +* Post to the [Cluster API community forum](https://discuss.kubernetes.io/c/contributors/cluster-api) diff --git a/content/en/blog/_posts/2020-04-21-contributor-communication-upstream-marketing.md b/content/en/blog/_posts/2020-04-21-contributor-communication-upstream-marketing.md new file mode 100644 index 0000000000000..3867c492bc94a --- /dev/null +++ b/content/en/blog/_posts/2020-04-21-contributor-communication-upstream-marketing.md @@ -0,0 +1,127 @@ +--- +layout: blog +title: How Kubernetes contributors are building a better communication process +date: 2020-04-21 +slug: contributor-communication +--- + +**Author:** Paris Pittman + +> "Perhaps we just need to use a different word. We may need to use community development or project advocacy as a word in the open source realm as opposed to marketing, and perhaps then people will realize that they need to do it." +> ~ [*Nithya Ruff*](https://www.linkedin.com/in/nithyaruff/) (from [*TODO Group*](https://todogroup.org/guides/marketing-open-source-projects/)) + +A common way to participate in the Kubernetes contributor community is +to be everywhere. + +We have an active [Slack](https://slack.k8s.io), many mailing lists, Twitter account(s), and +dozens of community-driven podcasts and newsletters that highlight all +end-user, contributor, and ecosystem topics. On top of that, we also have [repositories of amazing documentation], tons of [meetings] that drive the project forward, and [recorded code deep dives]. All of this information is incredibly valuable, +but it can be too much. + +Keeping up with thousands of contributors can be a challenge for anyone, +but this task of consuming information straight from the firehose is +particularly challenging for new community members. It's no secret that +the project is vast for contributors and users alike. + +To paint a picture with numbers: + +- 43,000 contributors +- 6,579 members in the #kubernetes-dev Slack channel +- 52 mailing lists (kubernetes-dev@ has thousands of members; sig-networking@ has 1000 alone) +- 40 [community groups](https://github.com/kubernetes/community/blob/master/governance.md#community-groups) +- 30 [upstream meetings](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America%2FLos_Angeles) *this* week alone + +All of these numbers are only growing in scale, and with that comes the need to simplify how contributors get the right information front and center. + +## How we got here + +Kubernetes (K8s for short) communication grew out of a need for people +to connect in our growing community. With the best of intentions, the +community spun up channels for people to connect. This energy was part +of what helped Kubernetes grow so fast, and it also left us sprawling far and wide. As adoption grew, [contributors knew there was a need for standardization](https://github.com/kubernetes/community/issues/2466). + +This new attention to how the community communicates led to the discovery +of a complex web of options.
There were so many options, and it was a +challenge for anyone to be sure they were in the right place to receive +the right information. We started taking immediate action, combining communication streams and thinking about how best to reach and serve our community. We also asked for feedback from all our +contributors directly via [**annual surveys**](https://github.com/kubernetes/community/tree/master/sig-contributor-experience/surveys) +to see where folks were actually reading the news that influences their +experiences here in our community. + +![Kubernetes channel access](https://user-images.githubusercontent.com/1744971/79478603-3a3a1980-7fd1-11ea-8b7a-d36aac7a097b.png) + +With over 43,000 contributors, our contributor base is larger than the headcount of many enterprise companies. You can imagine what it takes to get important messages across, make sure they land, and see folks take action. + +## Contributing to better communication + +Think about how your company/employer solves this kind of communication challenge. Many have done so +by building internal marketing and communication focus areas within their +marketing departments. So that's what we are doing. This approach has also been +applied [at Fedora](https://fedoraproject.org/wiki/Marketing) and, at a smaller +scale, as roles on our very [own release](https://github.com/kubernetes/sig-release/tree/master/release-team) and [contributor summit](https://github.com/kubernetes/community/blob/d0fd6c16f7ee754b08082cc15658eb8db7afeaf8/events/events-team/marketing/README.md) planning +teams. + +We have hit the accelerator on an **upstream marketing group** under SIG +Contributor Experience, and we want to tackle this challenge head on. +We've learned in other contributor areas that creating roles for +contributors is super helpful for onboarding, breaking down work, and +ownership. [Here's our team charting the course](https://github.com/kubernetes/community/tree/master/communication/marketing-team). + +Journey your way through our other documents, like our [charter], if you are +interested in our mission and scope. + +Many of you close to the ecosystem might be scratching your heads - isn't +this what the CNCF does? + +Yes and no. The CNCF has 40+ other projects that need to be marketed to +countless different types of community members in distinct +ways, and they aren't responsible for the day-to-day operations of their +projects. They absolutely do partner with us to highlight what we need +and when we need it, and they do a fantastic job of it (one example is +the [*@kubernetesio Twitter account*](https://twitter.com/kubernetesio) and its 200,000 +followers). + +Where this group differs is in its scope: we are entirely +focused on elevating the hard work being done throughout the Kubernetes +community by its contributors. + +## What to expect from us + +You can expect to see us on the Kubernetes [communication channels](https://github.com/kubernetes/community/tree/master/communication) supporting you by: + + - Finding ways to add a human touch to potentially overwhelming + quantities of info through storytelling and other methods - we want to + highlight the work you are doing and provide useful information! + - Keeping you in the know about the comings and goings of contributor + community events, activities, mentoring initiatives, KEPs, and more. + - Creating a presence on Twitter specifically for contributors via + @k8scontributors that is all about being a contributor in all its + forms. + +What does this look like in the wild?
Our [first post](https://kubernetes.io/blog/2020/03/19/join-sig-scalability/) in a series about our 36 community groups landed recently. Did you see it? +More articles like this, and additional story themes, will flow from +[our storytellers](https://github.com/kubernetes/community/tree/master/communication/marketing-team#purpose). + +We will deliver this with an [ethos](https://github.com/kubernetes/community/blob/master/communication/marketing-team/CHARTER.md#ethosvision) aligned with the Kubernetes project as a whole, and we're +committed to using the same tools as all the other SIGs to do so. Check out our [project board](https://github.com/orgs/kubernetes/projects/39) to view our roadmap of upcoming work. + +## Join us and be part of the story + +This initiative is in an early phase, and we still have important roles to fill to make it successful. + +If you are interested in open sourcing marketing functions – it's a fun +ride – join us! Specific immediate roles include storytellers for +blog posts and a designer. We also have plenty of work in progress on our project board. +Add a comment to any open issue to let us know you're interested in getting involved. + +Also, if you're reading this, you're exactly the type of person we are +here to support. We want to hear your feedback, your ideas for improvement, +and how we can work together. + +Reach out through one of the contact methods listed on [our README](https://github.com/kubernetes/community/tree/master/communication/marketing-team#contact-us). We would love to hear from you. + + +[repositories of amazing documentation]: http://github.com/kubernetes/community +[meetings]: https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America%2FLos_Angeles +[recorded code deep dives]: https://www.youtube.com/watch?v=yqB_le-N6EE +[charter]: https://github.com/kubernetes/community/blob/master/communication/marketing-team/CHARTER.md diff --git a/content/en/blog/_posts/2020-04-22-two-phased-canary-rollout-with-gloo.md b/content/en/blog/_posts/2020-04-22-two-phased-canary-rollout-with-gloo.md new file mode 100644 index 0000000000000..bd9c8e2382837 --- /dev/null +++ b/content/en/blog/_posts/2020-04-22-two-phased-canary-rollout-with-gloo.md @@ -0,0 +1,766 @@ +--- +title: "Two-phased Canary Rollout with Open Source Gloo" +date: 2020-04-22 +slug: two-phased-canary-rollout-with-gloo +url: /blog/2020/04/Two-phased-Canary-Rollout-With-Gloo +--- + +**Author:** Rick Ducott | [GitHub](https://github.com/rickducott/) | [Twitter](https://twitter.com/ducott) + +Every day, my colleagues and I are talking to platform owners, architects, and engineers who are using [Gloo](https://github.com/solo-io/gloo) as an API gateway +to expose their applications to end users. These applications may span legacy monoliths, microservices, managed cloud services, and Kubernetes +clusters. Fortunately, Gloo makes it easy to set up routes to manage, secure, and observe application traffic while +supporting a flexible deployment architecture to meet the varying production needs of our users. + +Beyond the initial setup, platform owners frequently ask us to help design the operational workflows within their organization: +How do we bring a new application online? How do we upgrade an application? How do we divide responsibilities across our +platform, ops, and development teams?
+ +In this post, we're going to use Gloo to design a two-phased canary rollout workflow for application upgrades: + +- In the first phase, we'll do canary testing by shifting a small subset of traffic to the new version. This allows you to safely perform smoke and correctness tests. +- In the second phase, we'll progressively shift traffic to the new version, allowing us to monitor the new version under load, and eventually, decommission the old version. + +To keep it simple, we're going to focus on designing the workflow using [open source Gloo](https://github.com/solo-io/gloo), and we're going to deploy the gateway and +application to Kubernetes. At the end, we'll talk about a few extensions and advanced topics that could be interesting to explore in a follow up. + +## Initial setup + +To start, we need a Kubernetes cluster. This example doesn't take advantage of any cloud specific +features, and can be run against a local test cluster such as [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/). +This post assumes a basic understanding of Kubernetes and how to interact with it using `kubectl`. + +We'll install the latest [open source Gloo](https://github.com/solo-io/gloo) to the `gloo-system` namespace and deploy +version `v1` of an example application to the `echo` namespace. We'll expose this application outside the cluster +by creating a route in Gloo, to end up with a picture like this: + +![Setup](/images/blog/2020-04-22-two-phased-canary-rollout-with-gloo/setup.png) + +### Deploying Gloo + +We'll install gloo with the `glooctl` command line tool, which we can download and add to the `PATH` with the following +commands: + +``` +curl -sL https://run.solo.io/gloo/install | sh +export PATH=$HOME/.gloo/bin:$PATH +``` + +Now, you should be able to run `glooctl version` to see that it is installed correctly: + +``` +➜ glooctl version +Client: {"version":"1.3.15"} +Server: version undefined, could not find any version of gloo running +``` + +Now we can install the gateway to our cluster with a simple command: + +``` +glooctl install gateway +``` + +The console should indicate the install finishes successfully: + +``` +Creating namespace gloo-system... Done. +Starting Gloo installation... + +Gloo was successfully installed! + +``` + +Before long, we can see all the Gloo pods running in the `gloo-system` namespace: + +``` +➜ kubectl get pod -n gloo-system +NAME READY STATUS RESTARTS AGE +discovery-58f8856bd7-4fftg 1/1 Running 0 13s +gateway-66f86bc8b4-n5crc 1/1 Running 0 13s +gateway-proxy-5ff99b8679-tbp65 1/1 Running 0 13s +gloo-66b8dc8868-z5c6r 1/1 Running 0 13s +``` + +### Deploying the application + +Our `echo` application is a simple container (thanks to our friends at HashiCorp) that will +respond with the application version, to help demonstrate our canary workflows as we start testing and +shifting traffic to a `v2` version of the application. + +Kubernetes gives us a lot of flexibility in terms of modeling this application. We'll adopt the following +conventions: + +- We'll include the version in the deployment name so we can run two versions of the application + side-by-side and manage their lifecycle differently. +- We'll label pods with an app label (`app: echo`) and a version label (`version: v1`) to help with our canary rollout. +- We'll deploy a single Kubernetes `Service` for the application to set up networking. Instead of updating + this or using multiple services to manage routing to different versions, we'll manage the rollout with Gloo configuration. 
+ +The following is our `v1` echo application: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: echo-v1 +spec: + replicas: 1 + selector: + matchLabels: + app: echo + version: v1 + template: + metadata: + labels: + app: echo + version: v1 + spec: + containers: + # Shout out to our friends at Hashi for this useful test server + - image: hashicorp/http-echo + args: + - "-text=version:v1" + - -listen=:8080 + imagePullPolicy: Always + name: echo-v1 + ports: + - containerPort: 8080 +``` + +And here is the `echo` Kubernetes `Service` object: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: echo +spec: + ports: + - port: 80 + targetPort: 8080 + protocol: TCP + selector: + app: echo +``` + +For convenience, we've published this yaml in a repo so we can deploy it with the following command: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/1-setup/echo.yaml +``` + +We should see the following output: + +``` +namespace/echo created +deployment.apps/echo-v1 created +service/echo created +``` + +And we should be able to see all the resources healthy in the `echo` namespace: + +``` +➜ kubectl get all -n echo +NAME READY STATUS RESTARTS AGE +pod/echo-v1-66dbfffb79-287s5 1/1 Running 0 6s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/echo ClusterIP 10.55.252.216 80/TCP 6s + +NAME READY UP-TO-DATE AVAILABLE AGE +deployment.apps/echo-v1 1/1 1 1 7s + +NAME DESIRED CURRENT READY AGE +replicaset.apps/echo-v1-66dbfffb79 1 1 1 7s +``` + +### Exposing outside the cluster with Gloo + +We can now expose this service outside the cluster with Gloo. First, we'll model the application as a Gloo +[Upstream](https://docs.solo.io/gloo/latest/introduction/architecture/concepts/#upstreams), which is Gloo's abstraction +for a traffic destination: + +```yaml +apiVersion: gloo.solo.io/v1 +kind: Upstream +metadata: + name: echo + namespace: gloo-system +spec: + kube: + selector: + app: echo + serviceName: echo + serviceNamespace: echo + servicePort: 8080 + subsetSpec: + selectors: + - keys: + - version +``` + +Here, we're setting up subsets based on the `version` label. We don't have to use this in our routes, but later +we'll start to use it to support our canary workflow. 
+ +We can now create a route to this upstream in Gloo by defining a +[Virtual Service](https://docs.solo.io/gloo/latest/introduction/architecture/concepts/#virtual-services): + +```yaml +apiVersion: gateway.solo.io/v1 +kind: VirtualService +metadata: + name: echo + namespace: gloo-system +spec: + virtualHost: + domains: + - "*" + routes: + - matchers: + - prefix: / + routeAction: + single: + upstream: + name: echo + namespace: gloo-system +``` + +We can apply these resources with the following commands: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/1-setup/upstream.yaml +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/1-setup/vs.yaml +``` + +Once we apply these two resources, we can start to send traffic to the application through Gloo: + +``` +➜ curl $(glooctl proxy url)/ +version:v1 +``` + +Our setup is complete, and our cluster now looks like this: + +![Setup](/images/blog/2020-04-22-two-phased-canary-rollout-with-gloo/setup.png) + +## Two-Phased Rollout Strategy + +Now we have a new version `v2` of the echo application that we wish to roll out. We know that when the +rollout is complete, we are going to end up with this picture: + +![End State](/images/blog/2020-04-22-two-phased-canary-rollout-with-gloo/end-state.png) + +However, to get there, we may want to perform a few rounds of testing to ensure the new version of the application +meets certain correctness and/or performance acceptance criteria. In this post, we'll introduce a two-phased approach to +canary rollout with Gloo, that could be used to satisfy the vast majority of acceptance tests. + +In the first phase, we'll perform smoke and correctness tests by routing a small segment of the traffic to the new version +of the application. In this demo, we'll use a header `stage: canary` to trigger routing to the new service, though in +practice it may be desirable to make this decision based on another part of the request, such as a claim in a verified JWT. + +In the second phase, we've already established correctness, so we are ready to shift all of the traffic over to the new +version of the application. We'll configure weighted destinations, and shift the traffic while monitoring certain business +metrics to ensure the service quality remains at acceptable levels. Once 100% of the traffic is shifted to the new version, +the old version can be decommissioned. + +In practice, it may be desirable to only use one of the phases for testing, in which case the other phase can be +skipped. + +## Phase 1: Initial canary rollout of v2 + +In this phase, we'll deploy `v2`, and then use a header `stage: canary` to start routing a small amount of specific +traffic to the new version. We'll use this header to perform some basic smoke testing and make sure `v2` is working the +way we'd expect: + +![Subset Routing](/images/blog/2020-04-22-two-phased-canary-rollout-with-gloo/subset-routing.png) + +### Setting up subset routing + +Before deploying our `v2` service, we'll update our virtual service to only route to pods that have the subset label +`version: v1`, using a Gloo feature called [subset routing](https://docs.solo.io/gloo/latest/guides/traffic_management/destination_types/subsets/). 
+ +```yaml +apiVersion: gateway.solo.io/v1 +kind: VirtualService +metadata: + name: echo + namespace: gloo-system +spec: + virtualHost: + domains: + - "*" + routes: + - matchers: + - prefix: / + routeAction: + single: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v1 +``` + +We can apply them to the cluster with the following commands: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/2-initial-subset-routing-to-v2/vs-1.yaml +``` + +The application should continue to function as before: + +``` +➜ curl $(glooctl proxy url)/ +version:v1 +``` + +### Deploying echo v2 + +Now we can safely deploy `v2` of the echo application: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: echo-v2 +spec: + replicas: 1 + selector: + matchLabels: + app: echo + version: v2 + template: + metadata: + labels: + app: echo + version: v2 + spec: + containers: + - image: hashicorp/http-echo + args: + - "-text=version:v2" + - -listen=:8080 + imagePullPolicy: Always + name: echo-v2 + ports: + - containerPort: 8080 +``` + +We can deploy with the following command: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/2-initial-subset-routing-to-v2/echo-v2.yaml +``` + +Since our gateway is configured to route specifically to the `v1` subset, this should have no effect. However, it does enable +`v2` to be routable from the gateway if the `v2` subset is configured for a route. + +Make sure `v2` is running before moving on: + +```bash +➜ kubectl get pod -n echo +NAME READY STATUS RESTARTS AGE +echo-v1-66dbfffb79-2qw86 1/1 Running 0 5m25s +echo-v2-86584fbbdb-slp44 1/1 Running 0 93s +``` + +The application should continue to function as before: + +``` +➜ curl $(glooctl proxy url)/ +version:v1 +``` + +### Adding a route to v2 for canary testing + +We'll route to the `v2` subset when the `stage: canary` header is supplied on the request. If the header isn't +provided, we'll continue to route to the `v1` subset as before. + +```yaml +apiVersion: gateway.solo.io/v1 +kind: VirtualService +metadata: + name: echo + namespace: gloo-system +spec: + virtualHost: + domains: + - "*" + routes: + - matchers: + - headers: + - name: stage + value: canary + prefix: / + routeAction: + single: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v2 + - matchers: + - prefix: / + routeAction: + single: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v1 +``` + +We can deploy with the following command: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/2-initial-subset-routing-to-v2/vs-2.yaml +``` + +### Canary testing + +Now that we have this route, we can do some testing. First let's ensure that the existing route is working as expected: + +``` +➜ curl $(glooctl proxy url)/ +version:v1 +``` + +And now we can start to canary test our new application version: + +``` +➜ curl $(glooctl proxy url)/ -H "stage: canary" +version:v2 +``` + +### Advanced use cases for subset routing + +We may decide that this approach, using user-provided request headers, is too open. Instead, we may +want to restrict canary testing to a known, authorized user. 
+ +A common implementation of this that we've seen is for the canary route to require a valid JWT that contains +a specific claim to indicate the subject is authorized for canary testing. Enterprise Gloo has out of the box +support for verifying JWTs, updating the request headers based on the JWT claims, and recomputing the +routing destination based on the updated headers. We'll save that for a future post covering more advanced use +cases in canary testing. + +## Phase 2: Shifting all traffic to v2 and decommissioning v1 + +At this point, we've deployed `v2`, and created a route for canary testing. If we are satisfied with the +results of the testing, we can move on to phase 2 and start shifting the load from `v1` to `v2`. We'll use +[weighted destinations](https://docs.solo.io/gloo/latest/guides/traffic_management/destination_types/multi_destination/) +in Gloo to manage the load during the migration. + +### Setting up the weighted destinations + +We can change the Gloo route to route to both of these destinations, with weights to decide how much of the traffic should +go to the `v1` versus the `v2` subset. To start, we're going to set it up so 100% of the traffic continues to get routed to the +`v1` subset, unless the `stage: canary` header was provided as before. + +```yaml +apiVersion: gateway.solo.io/v1 +kind: VirtualService +metadata: + name: echo + namespace: gloo-system +spec: + virtualHost: + domains: + - "*" + routes: + # We'll keep our route from before if we want to continue testing with this header + - matchers: + - headers: + - name: stage + value: canary + prefix: / + routeAction: + single: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v2 + # Now we'll route the rest of the traffic to the upstream, load balanced across the two subsets. + - matchers: + - prefix: / + routeAction: + multi: + destinations: + - destination: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v1 + weight: 100 + - destination: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v2 + weight: 0 +``` + +We can apply this virtual service update to the cluster with the following commands: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/3-progressive-traffic-shift-to-v2/vs-1.yaml +``` + +Now the cluster looks like this, for any request that doesn't have the `stage: canary` header: + +![Initialize Traffic Shift](/images/blog/2020-04-22-two-phased-canary-rollout-with-gloo/init-traffic-shift.png) + +With the initial weights, we should see the gateway continue to serve `v1` for all traffic. 
+ +```bash +➜ curl $(glooctl proxy url)/ +version:v1 +``` + +### Commence rollout + +To simulate a load test, let's shift half the traffic to `v2`: + +![Load Test](/images/blog/2020-04-22-two-phased-canary-rollout-with-gloo/load-test.png) + +This can be expressed on our virtual service by adjusting the weights: + +```yaml +apiVersion: gateway.solo.io/v1 +kind: VirtualService +metadata: + name: echo + namespace: gloo-system +spec: + virtualHost: + domains: + - "*" + routes: + - matchers: + - headers: + - name: stage + value: canary + prefix: / + routeAction: + single: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v2 + - matchers: + - prefix: / + routeAction: + multi: + destinations: + - destination: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v1 + # Update the weight so 50% of the traffic hits v1 + weight: 50 + - destination: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v2 + # And 50% is routed to v2 + weight: 50 +``` + +We can apply this to the cluster with the following command: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/3-progressive-traffic-shift-to-v2/vs-2.yaml +``` + +Now when we send traffic to the gateway, we should see half of the requests return `version:v1` and the +other half return `version:v2`. + +``` +➜ curl $(glooctl proxy url)/ +version:v1 +➜ curl $(glooctl proxy url)/ +version:v2 +➜ curl $(glooctl proxy url)/ +version:v1 +``` + +In practice, during this process it's likely you'll be monitoring some performance and business metrics +to ensure the traffic shift isn't resulting in a decline in the overall quality of service. We can even +leverage operators like [Flagger](https://github.com/weaveworks/flagger) to help automate this Gloo +workflow. Gloo Enterprise integrates with your metrics backend and provides out of the box and dynamic, +upstream-based dashboards that can be used to monitor the health of the rollout. +We will save these topics for a future post on advanced canary testing use cases with Gloo. + +### Finishing the rollout + +We will continue adjusting weights until eventually, all of the traffic is now being routed to `v2`: + +![Final Shift](/images/blog/2020-04-22-two-phased-canary-rollout-with-gloo/final-shift.png) + +Our virtual service will look like this: + +```yaml +apiVersion: gateway.solo.io/v1 +kind: VirtualService +metadata: + name: echo + namespace: gloo-system +spec: + virtualHost: + domains: + - "*" + routes: + - matchers: + - headers: + - name: stage + value: canary + prefix: / + routeAction: + single: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v2 + - matchers: + - prefix: / + routeAction: + multi: + destinations: + - destination: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v1 + # No traffic will be sent to v1 anymore + weight: 0 + - destination: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v2 + # Now all the traffic will be routed to v2 + weight: 100 +``` + +We can apply that to the cluster with the following command: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/3-progressive-traffic-shift-to-v2/vs-3.yaml +``` + +Now when we send traffic to the gateway, we should see all of the requests return `version:v2`. 
+ +``` +➜ curl $(glooctl proxy url)/ +version:v2 +➜ curl $(glooctl proxy url)/ +version:v2 +➜ curl $(glooctl proxy url)/ +version:v2 +``` + +### Decommissioning v1 + +At this point, we have deployed the new version of our application, conducted correctness tests using subset routing, +conducted load and performance tests by progressively shifting traffic to the new version, and finished +the rollout. The only remaining task is to clean up our `v1` resources. + +First, we'll clean up our routes. We'll leave the subset specified on the route so we are all setup for future upgrades. + +```yaml +apiVersion: gateway.solo.io/v1 +kind: VirtualService +metadata: + name: echo + namespace: gloo-system +spec: + virtualHost: + domains: + - "*" + routes: + - matchers: + - prefix: / + routeAction: + single: + upstream: + name: echo + namespace: gloo-system + subset: + values: + version: v2 +``` + +We can apply this update with the following command: + +``` +kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-ref-arch/blog-30-mar-20/platform/prog-delivery/two-phased-with-os-gloo/4-decommissioning-v1/vs.yaml +``` + +And we can delete the `v1` deployment, which is no longer serving any traffic. + +``` +kubectl delete deploy -n echo echo-v1 +``` + +Now our cluster looks like this: + +![End State](/images/blog/2020-04-22-two-phased-canary-rollout-with-gloo/end-state.png) + +And requests to the gateway return this: + +``` +➜ curl $(glooctl proxy url)/ +version:v2 +``` + +We have now completed our two-phased canary rollout of an application update using Gloo! + +## Other Advanced Topics + +Over the course of this post, we collected a few topics that could be a good starting point for advanced exploration: + +- Using the **JWT** filter to verify JWTs, extract claims onto headers, and route to canary versions depending on a claim value. +- Looking at **Prometheus metrics** and **Grafana dashboards** created by Gloo to monitor the health of the rollout. +- Automating the rollout by integrating **Flagger** with **Gloo**. + +A few other topics that warrant further exploration: + +- Supporting **self-service** upgrades by giving teams ownership over their upstream and route configuration +- Utilizing Gloo's **delegation** feature and Kubernetes **RBAC** to decentralize the configuration management safely +- Fully automating the continuous delivery process by applying **GitOps** principles and using tools like **Flux** to push config to the cluster +- Supporting **hybrid** or **non-Kubernetes** application use-cases by setting up Gloo with a different deployment pattern +- Utilizing **traffic shadowing** to begin testing the new version with realistic data before shifting production traffic to it + +## Get Involved in the Gloo Community + +Gloo has a large and growing community of open source users, in addition to an enterprise customer base. To learn more about +Gloo: + +- Check out the [repo](https://github.com/solo-io/gloo), where you can see the code and file issues +- Check out the [docs](https://docs.solo.io/gloo/latest), which have an extensive collection of guides and examples +- Join the [slack channel](http://slack.solo.io/) and start chatting with the Solo engineering team and user community + +If you'd like to get in touch with me (feedback is always appreciated!), you can find me on the +[Solo slack](http://slack.solo.io/) or email me at **rick.ducott@solo.io**. 
diff --git a/content/en/community/_index.html b/content/en/community/_index.html index 4a686b04093d0..0f6fb9bc04db8 100644 --- a/content/en/community/_index.html +++ b/content/en/community/_index.html @@ -6,8 +6,8 @@
- Kubernetes Conference Gallery - Kubernetes Conference Gallery + Kubernetes Conference Gallery + Kubernetes Conference Gallery
diff --git a/content/en/docs/concepts/architecture/cloud-controller.md b/content/en/docs/concepts/architecture/cloud-controller.md index 61dc26da7afe4..1891a2d10be74 100644 --- a/content/en/docs/concepts/architecture/cloud-controller.md +++ b/content/en/docs/concepts/architecture/cloud-controller.md @@ -1,114 +1,92 @@ --- -title: Concepts Underlying the Cloud Controller Manager +title: Cloud Controller Manager content_template: templates/concept weight: 40 --- {{% capture overview %}} -The cloud controller manager (CCM) concept (not to be confused with the binary) was originally created to allow cloud specific vendor code and the Kubernetes core to evolve independent of one another. The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and scheduler. It can also be started as a Kubernetes addon, in which case it runs on top of Kubernetes. +{{< feature-state state="beta" for_k8s_version="v1.11" >}} -The cloud controller manager's design is based on a plugin mechanism that allows new cloud providers to integrate with Kubernetes easily by using plugins. There are plans in place for on-boarding new cloud providers on Kubernetes and for migrating cloud providers from the old model to the new CCM model. +Cloud infrastructure technologies let you run Kubernetes on public, private, and hybrid clouds. +Kubernetes believes in automated, API-driven infrastructure without tight coupling between +components. -This document discusses the concepts behind the cloud controller manager and gives details about its associated functions. +{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="The cloud-controller-manager is">}} -Here's the architecture of a Kubernetes cluster without the cloud controller manager: - -![Pre CCM Kube Arch](/images/docs/pre-ccm-arch.png) +The cloud-controller-manager is structured using a plugin +mechanism that allows different cloud providers to integrate their platforms with Kubernetes. {{% /capture %}} - {{% capture body %}} ## Design -In the preceding diagram, Kubernetes and the cloud provider are integrated through several different components: - -* Kubelet -* Kubernetes controller manager -* Kubernetes API server - -The CCM consolidates all of the cloud-dependent logic from the preceding three components to create a single point of integration with the cloud. The new architecture with the CCM looks like this: - -![CCM Kube Arch](/images/docs/post-ccm-arch.png) - -## Components of the CCM - -The CCM breaks away some of the functionality of Kubernetes controller manager (KCM) and runs it as a separate process. Specifically, it breaks away those controllers in the KCM that are cloud dependent. The KCM has the following cloud dependent controller loops: +![Kubernetes components](/images/docs/components-of-kubernetes.png) - * Node controller - * Volume controller - * Route controller - * Service controller +The cloud controller manager runs in the control plane as a replicated set of processes +(usually, these are containers in Pods). Each cloud-controller-manager implements +multiple {{< glossary_tooltip text="controllers" term_id="controller" >}} in a single +process. -In version 1.9, the CCM runs the following controllers from the preceding list: - -* Node controller -* Route controller -* Service controller {{< note >}} -Volume controller was deliberately chosen to not be a part of CCM. 
Due to the complexity involved and due to the existing efforts to abstract away vendor specific volume logic, it was decided that volume controller will not be moved to CCM. +You can also run the cloud controller manager as a Kubernetes +{{< glossary_tooltip text="addon" term_id="addons" >}} rather than as part +of the control plane. {{< /note >}} -The original plan to support volumes using CCM was to use [Flex](/docs/concepts/storage/volumes/#flexVolume) volumes to support pluggable volumes. However, a competing effort known as [CSI](/docs/concepts/storage/volumes/#csi) is being planned to replace Flex. - -Considering these dynamics, we decided to have an intermediate stop gap measure until CSI becomes ready. - -## Functions of the CCM - -The CCM inherits its functions from components of Kubernetes that are dependent on a cloud provider. This section is structured based on those components. - -### 1. Kubernetes controller manager - -The majority of the CCM's functions are derived from the KCM. As mentioned in the previous section, the CCM runs the following control loops: - -* Node controller -* Route controller -* Service controller +## Cloud controller manager functions {#functions-of-the-ccm} -#### Node controller +The controllers inside the cloud controller manager include: -The Node controller is responsible for initializing a node by obtaining information about the nodes running in the cluster from the cloud provider. The node controller performs the following functions: +### Node controller -1. Initialize a node with cloud specific zone/region labels. -2. Initialize a node with cloud specific instance details, for example, type and size. -3. Obtain the node's network addresses and hostname. -4. In case a node becomes unresponsive, check the cloud to see if the node has been deleted from the cloud. -If the node has been deleted from the cloud, delete the Kubernetes Node object. +The node controller is responsible for creating {{< glossary_tooltip text="Node" term_id="node" >}} objects +when new servers are created in your cloud infrastructure. The node controller obtains information about the +hosts running inside your tenancy with the cloud provider. The node controller performs the following functions: -#### Route controller +1. Initialize a Node object for each server that the controller discovers through the cloud provider API. +2. Annotating and labelling the Node object with cloud-specific information, such as the region the node + is deployed into and the resources (CPU, memory, etc) that it has available. +3. Obtain the node's hostname and network addresses. +4. Verifying the node's health. In case a node becomes unresponsive, this controller checks with + your cloud provider's API to see if the server has been deactivated / deleted / terminated. + If the node has been deleted from the cloud, the controller deletes the Node object from your Kubernetes + cluster. -The Route controller is responsible for configuring routes in the cloud appropriately so that containers on different nodes in the Kubernetes cluster can communicate with each other. The route controller is only applicable for Google Compute Engine clusters. +Some cloud provider implementations split this into a node controller and a separate node +lifecycle controller. -#### Service Controller - -The Service controller is responsible for listening to service create, update, and delete events. 
Based on the current state of the services in Kubernetes, it configures cloud load balancers (such as ELB , Google LB, or Oracle Cloud Infrastructure LB) to reflect the state of the services in Kubernetes. Additionally, it ensures that service backends for cloud load balancers are up to date. - -### 2. Kubelet - -The Node controller contains the cloud-dependent functionality of the kubelet. Prior to the introduction of the CCM, the kubelet was responsible for initializing a node with cloud-specific details such as IP addresses, region/zone labels and instance type information. The introduction of the CCM has moved this initialization operation from the kubelet into the CCM. - -In this new model, the kubelet initializes a node without cloud-specific information. However, it adds a taint to the newly created node that makes the node unschedulable until the CCM initializes the node with cloud-specific information. It then removes this taint. +### Route controller -## Plugin mechanism +The route controller is responsible for configuring routes in the cloud +appropriately so that containers on different nodes in your Kubernetes +cluster can communicate with each other. -The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the CloudProvider Interface defined [here](https://github.com/kubernetes/cloud-provider/blob/9b77dc1c384685cb732b3025ed5689dd597a5971/cloud.go#L42-L62). +Depending on the cloud provider, the route controller might also allocate blocks +of IP addresses for the Pod network. -The implementation of the four shared controllers highlighted above, and some scaffolding along with the shared cloudprovider interface, will stay in the Kubernetes core. Implementations specific to cloud providers will be built outside of the core and implement interfaces defined in the core. +### Service controller -For more information about developing plugins, see [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/). +{{< glossary_tooltip text="Services" term_id="service" >}} integrate with cloud +infrastructure components such as managed load balancers, IP addresses, network +packet filtering, and target health checking. The service controller interacts with your +cloud provider's APIs to set up load balancers and other infrastructure components +when you declare a Service resource that requires them. ## Authorization -This section breaks down the access required on various API objects by the CCM to perform its operations. +This section breaks down the access that the cloud controller managers requires +on various API objects, in order to perform its operations. -### Node Controller +### Node controller {#authorization-node-controller} -The Node controller only works with Node objects. It requires full access to get, list, create, update, patch, watch, and delete Node objects. +The Node controller only works with Node objects. It requires full access +to read and modify Node objects. -v1/Node: +`v1/Node`: - Get - List @@ -118,23 +96,24 @@ v1/Node: - Watch - Delete -### Route controller +### Route controller {#authorization-route-controller} -The route controller listens to Node object creation and configures routes appropriately. It requires get access to Node objects. +The route controller listens to Node object creation and configures +routes appropriately. It requires Get access to Node objects. 
-v1/Node: +`v1/Node`: - Get -### Service controller +### Service controller {#authorization-service-controller} -The service controller listens to Service object create, update and delete events and then configures endpoints for those Services appropriately. +The service controller listens to Service object Create, Update and Delete events and then configures Endpoints for those Services appropriately. -To access Services, it requires list, and watch access. To update Services, it requires patch and update access. +To access Services, it requires List, and Watch access. To update Services, it requires Patch and Update access. -To set up endpoints for the Services, it requires access to create, list, get, watch, and update. +To set up Endpoints resources for the Services, it requires access to Create, List, Get, Watch, and Update. -v1/Service: +`v1/Service`: - List - Get @@ -142,21 +121,22 @@ v1/Service: - Patch - Update -### Others +### Others {#authorization-miscellaneous} -The implementation of the core of CCM requires access to create events, and to ensure secure operation, it requires access to create ServiceAccounts. +The implementation of the core of the cloud controller manager requires access to create Event objects, and to ensure secure operation, it requires access to create ServiceAccounts. -v1/Event: +`v1/Event`: - Create - Patch - Update -v1/ServiceAccount: +`v1/ServiceAccount`: - Create -The RBAC ClusterRole for the CCM looks like this: +The {{< glossary_tooltip term_id="rbac" text="RBAC" >}} ClusterRole for the cloud +controller manager looks like: ```yaml apiVersion: rbac.authorization.k8s.io/v1 @@ -220,26 +200,16 @@ rules: - update ``` -## Vendor Implementations - -The following cloud providers have implemented CCMs: - -* [Alibaba Cloud](https://github.com/kubernetes/cloud-provider-alibaba-cloud) -* [AWS](https://github.com/kubernetes/cloud-provider-aws) -* [Azure](https://github.com/kubernetes/cloud-provider-azure) -* [BaiduCloud](https://github.com/baidu/cloud-provider-baiducloud) -* [DigitalOcean](https://github.com/digitalocean/digitalocean-cloud-controller-manager) -* [GCP](https://github.com/kubernetes/cloud-provider-gcp) -* [Hetzner](https://github.com/hetznercloud/hcloud-cloud-controller-manager) -* [Linode](https://github.com/linode/linode-cloud-controller-manager) -* [OpenStack](https://github.com/kubernetes/cloud-provider-openstack) -* [Oracle](https://github.com/oracle/oci-cloud-controller-manager) -* [TencentCloud](https://github.com/TencentCloud/tencentcloud-cloud-controller-manager) +{{% /capture %}} +{{% capture whatsnext %}} +[Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager) +has instructions on running and managing the cloud controller manager. +Want to know how to implement your own cloud controller manager, or extend an existing project? -## Cluster Administration +The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the `CloudProvider` interface defined in [`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.17/cloud.go#L42-L62) from [kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider). -Complete instructions for configuring and running the CCM are provided -[here](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager). 
+The implementation of the shared controllers highlighted in this document (Node, Route, and Service), and some scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core. Implementations specific to cloud providers are outside the core of Kubernetes and implement the `CloudProvider` interface. +For more information about developing plugins, see [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/). {{% /capture %}} diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md new file mode 100644 index 0000000000000..940b8faacc54d --- /dev/null +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -0,0 +1,70 @@ +--- +reviewers: +- dchen1107 +- roberthbailey +- liggitt +title: Control Plane-Node Communication +content_template: templates/concept +weight: 20 +aliases: +- master-node-communication +--- + +{{% capture overview %}} + +This document catalogs the communication paths between the control plane (really the apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider). + +{{% /capture %}} + +{{% capture body %}} + +## Node to Control Plane +All communication paths from the nodes to the control plane terminate at the apiserver (none of the other master components are designed to expose remote services). In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. +One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed. + +Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. For example, on a default GKE deployment, the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. + +Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated. +The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver. + +The control plane components also communicate with the cluster apiserver over the secure port. + +As a result, the default operating mode for connections from the nodes and pods running on the nodes to the control plane is secured by default and can run over untrusted and/or public networks. + +## Control Plane to node +There are two primary communication paths from the control plane (apiserver) to the nodes. The first is from the apiserver to the kubelet process which runs on each node in the cluster. 
The second is from the apiserver to any node, pod, or service through the apiserver's proxy functionality. + +### apiserver to kubelet +The connections from the apiserver to the kubelet are used for: + +* Fetching logs for pods. +* Attaching (through kubectl) to running pods. +* Providing the kubelet's port-forwarding functionality. + +These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks, and **unsafe** to run over untrusted and/or public networks. + +To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate. + +If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an +untrusted or public network. + +Finally, [Kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/) should be enabled to secure the kubelet API. + +### apiserver to nodes, pods, and services + +The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials so while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted and/or public networks. + +### SSH tunnels + +Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel. +This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running. + +SSH tunnels are currently deprecated so you shouldn't opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel. + +### Konnectivity service +{{< feature-state for_k8s_version="v1.18" state="beta" >}} + +As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to Cluster communication. The Konnectivity consists of two parts, the Konnectivity server and the Konnectivity agents, running in the control plane network and the nodes network respectively. The Konnectivity agents initiate connections to the Konnectivity server and maintain the connections. +All control plane to nodes traffic then goes through these connections. + +See [Konnectivity Service Setup](/docs/tasks/setup-konnectivity/) on how to set it up in your cluster. diff --git a/content/en/docs/concepts/architecture/controller.md b/content/en/docs/concepts/architecture/controller.md index e5bee1d0a52a7..2872959bacfa0 100644 --- a/content/en/docs/concepts/architecture/controller.md +++ b/content/en/docs/concepts/architecture/controller.md @@ -52,7 +52,7 @@ Job is a Kubernetes resource that runs a {{< glossary_tooltip term_id="pod" >}}, or perhaps several Pods, to carry out a task and then stop. 
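To make that concrete, a minimal Job manifest might look like the sketch below; the image and command are arbitrary examples for illustration, not anything this page prescribes.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  backoffLimit: 4          # retry a few times before marking the Job as failed
  template:
    spec:
      restartPolicy: Never # Pods created by a Job run to completion rather than restarting forever
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo doing the work && sleep 5"]
```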
-(Once [scheduled](/docs/concepts/scheduling/), Pod objects become part of the +(Once [scheduled](/docs/concepts/scheduling-eviction/), Pod objects become part of the desired state for a kubelet). When the Job controller sees a new task it makes sure that, somewhere @@ -113,17 +113,15 @@ useful changes, it doesn't matter if the overall state is or is not stable. As a tenet of its design, Kubernetes uses lots of controllers that each manage a particular aspect of cluster state. Most commonly, a particular control loop (controller) uses one kind of resource as its desired state, and has a different -kind of resource that it manages to make that desired state happen. +kind of resource that it manages to make that desired state happen. For example, +a controller for Jobs tracks Job objects (to discover new work) and Pod objects +(to run the Jobs, and then to see when the work is finished). In this case +something else creates the Jobs, whereas the Job controller creates Pods. It's useful to have simple controllers rather than one, monolithic set of control loops that are interlinked. Controllers can fail, so Kubernetes is designed to allow for that. -For example: a controller for Jobs tracks Job objects (to discover -new work) and Pod object (to run the Jobs, and then to see when the work is -finished). In this case something else creates the Jobs, whereas the Job -controller creates Pods. - {{< note >}} There can be several controllers that create or update the same kind of object. Behind the scenes, Kubernetes controllers make sure that they only pay attention diff --git a/content/en/docs/concepts/architecture/master-node-communication.md b/content/en/docs/concepts/architecture/master-node-communication.md deleted file mode 100644 index 8a4493e49b886..0000000000000 --- a/content/en/docs/concepts/architecture/master-node-communication.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -reviewers: -- dchen1107 -- roberthbailey -- liggitt -title: Master-Node Communication -content_template: templates/concept -weight: 20 ---- - -{{% capture overview %}} - -This document catalogs the communication paths between the master (really the -apiserver) and the Kubernetes cluster. The intent is to allow users to -customize their installation to harden the network configuration such that -the cluster can be run on an untrusted network (or on fully public IPs on a -cloud provider). - -{{% /capture %}} - - -{{% capture body %}} - -## Cluster to Master - -All communication paths from the cluster to the master terminate at the -apiserver (none of the other master components are designed to expose remote -services). In a typical deployment, the apiserver is configured to listen for -remote connections on a secure HTTPS port (443) with one or more forms of -client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. -One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) -should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) -or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) -are allowed. - -Nodes should be provisioned with the public root certificate for the cluster -such that they can connect securely to the apiserver along with valid client -credentials. For example, on a default GKE deployment, the client credentials -provided to the kubelet are in the form of a client certificate. 
See -[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) -for automated provisioning of kubelet client certificates. - -Pods that wish to connect to the apiserver can do so securely by leveraging a -service account so that Kubernetes will automatically inject the public root -certificate and a valid bearer token into the pod when it is instantiated. -The `kubernetes` service (in all namespaces) is configured with a virtual IP -address that is redirected (via kube-proxy) to the HTTPS endpoint on the -apiserver. - -The master components also communicate with the cluster apiserver over the secure port. - -As a result, the default operating mode for connections from the cluster -(nodes and pods running on the nodes) to the master is secured by default -and can run over untrusted and/or public networks. - -## Master to Cluster - -There are two primary communication paths from the master (apiserver) to the -cluster. The first is from the apiserver to the kubelet process which runs on -each node in the cluster. The second is from the apiserver to any node, pod, -or service through the apiserver's proxy functionality. - -### apiserver to kubelet - -The connections from the apiserver to the kubelet are used for: - - * Fetching logs for pods. - * Attaching (through kubectl) to running pods. - * Providing the kubelet's port-forwarding functionality. - -These connections terminate at the kubelet's HTTPS endpoint. By default, -the apiserver does not verify the kubelet's serving certificate, -which makes the connection subject to man-in-the-middle attacks, and -**unsafe** to run over untrusted and/or public networks. - -To verify this connection, use the `--kubelet-certificate-authority` flag to -provide the apiserver with a root certificate bundle to use to verify the -kubelet's serving certificate. - -If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels) -between the apiserver and kubelet if required to avoid connecting over an -untrusted or public network. - -Finally, [Kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/) -should be enabled to secure the kubelet API. - -### apiserver to nodes, pods, and services - -The connections from the apiserver to a node, pod, or service default to plain -HTTP connections and are therefore neither authenticated nor encrypted. They -can be run over a secure HTTPS connection by prefixing `https:` to the node, -pod, or service name in the API URL, but they will not validate the certificate -provided by the HTTPS endpoint nor provide client credentials so while the -connection will be encrypted, it will not provide any guarantees of integrity. -These connections **are not currently safe** to run over untrusted and/or -public networks. - -### SSH Tunnels - -Kubernetes supports SSH tunnels to protect the Master -> Cluster communication -paths. In this configuration, the apiserver initiates an SSH tunnel to each node -in the cluster (connecting to the ssh server listening on port 22) and passes -all traffic destined for a kubelet, node, pod, or service through the tunnel. -This tunnel ensures that the traffic is not exposed outside of the network in -which the nodes are running. - -SSH tunnels are currently deprecated so you shouldn't opt to use them unless you know what you are doing. A replacement for this communication channel is being designed. 
- -{{% /capture %}} diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 97188ee9ad9b7..6e62881451643 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -72,7 +72,7 @@ The node condition is represented as a JSON object. For example, the following r ] ``` -If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout`, an argument is passed to the [kube-controller-manager](/docs/admin/kube-controller-manager/) and all the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the apiserver is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node. +If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the [kube-controller-manager](/docs/admin/kube-controller-manager/)), all the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the apiserver is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node. In versions of Kubernetes prior to 1.5, the node controller would [force delete](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) these unreachable pods from the apiserver. However, in 1.5 and higher, the node controller does not force delete pods until it is @@ -83,8 +83,8 @@ Kubernetes causes all the Pod objects running on the node to be deleted from the The node lifecycle controller automatically creates [taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions. -When the scheduler is assigning a Pod to a Node, the scheduler takes the Node's taints -into account, except for any taints that the Pod tolerates. +The scheduler takes the Node's taints into consideration when assigning a Pod to a Node. +Pods can also have tolerations which let them tolerate a Node's taints. ### Capacity and Allocatable {#capacity} diff --git a/content/en/docs/concepts/cluster-administration/certificates.md b/content/en/docs/concepts/cluster-administration/certificates.md index 6ef3f813c527e..052e7b9aa5b66 100644 --- a/content/en/docs/concepts/cluster-administration/certificates.md +++ b/content/en/docs/concepts/cluster-administration/certificates.md @@ -130,11 +130,11 @@ Finally, add the same parameters into the API server start parameters. Note that you may need to adapt the sample commands based on the hardware architecture and cfssl version you are using. 
- curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o cfssl + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 -o cfssl chmod +x cfssl - curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o cfssljson + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64 -o cfssljson chmod +x cfssljson - curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o cfssl-certinfo + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64 -o cfssl-certinfo chmod +x cfssl-certinfo 1. Create a directory to hold the artifacts and initialize cfssl: diff --git a/content/en/docs/concepts/cluster-administration/cloud-providers.md b/content/en/docs/concepts/cluster-administration/cloud-providers.md index b9a320192c5e3..8d62d1b196047 100644 --- a/content/en/docs/concepts/cluster-administration/cloud-providers.md +++ b/content/en/docs/concepts/cluster-administration/cloud-providers.md @@ -134,6 +134,15 @@ If you wish to use the external cloud provider, its repository is [kubernetes/cl The GCE cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object. Note that the first segment of the Kubernetes Node name must match the GCE instance name (e.g. a Node named `kubernetes-node-2.c.my-proj.internal` must correspond to an instance named `kubernetes-node-2`). +## HUAWEI CLOUD + +If you wish to use the external cloud provider, its repository is [kubernetes-sigs/cloud-provider-huaweicloud](https://github.com/kubernetes-sigs/cloud-provider-huaweicloud). + +### Node Name + +The HUAWEI CLOUD provider needs the private IP address of the node as the name of the Kubernetes Node object. +Please make sure indicating `--hostname-override=` when starting kubelet on the node. + ## OpenStack This section describes all the possible configurations which can be used when using OpenStack with Kubernetes. diff --git a/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md index 0512cabc905f8..5ba0bb30d856a 100644 --- a/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/content/en/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -20,7 +20,6 @@ See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and Before choosing a guide, here are some considerations: - Do you just want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs. - - **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/concepts/cluster-administration/federation/). - Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**? - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters. - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best. 
diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index c41998f6e9300..6540a05e8f2c3 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -18,12 +18,12 @@ potentially crashing the API server, but these flags are not enough to ensure that the most important requests get through in a period of high traffic. The API Priority and Fairness feature (APF) is an alternative that improves upon -aforementioned max-inflight limitations. APF classifies -and isolates requests in a more fine-grained way. It also introduces +aforementioned max-inflight limitations. APF classifies +and isolates requests in a more fine-grained way. It also introduces a limited amount of queuing, so that no requests are rejected in cases of very brief bursts. Requests are dispatched from queues using a -fair queuing technique so that, for example, a poorly-behaved {{< -glossary_tooltip text="controller" term_id="controller" >}}) need not +fair queuing technique so that, for example, a poorly-behaved +{{< glossary_tooltip text="controller" term_id="controller" >}} need not starve others (even at the same priority level). {{< caution >}} @@ -164,10 +164,10 @@ are built in and may not be overwritten: ## Resources The flow control API involves two kinds of resources. -[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol) +[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io) define the available isolation classes, the share of the available concurrency budget that each can handle, and allow for fine-tuning queuing behavior. -[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol) +[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io) are used to classify individual inbound requests, matching each to a single PriorityLevelConfiguration. diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index eedafce1a3e3e..a6dccdbf93fb3 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -140,7 +140,7 @@ deployment.apps/my-deployment created persistentvolumeclaim/my-pvc created ``` -The `--recursive` flag works with any operation that accepts the `--filename,-f` flag such as: `kubectl {create,get,delete,describe,rollout} etc.` +The `--recursive` flag works with any operation that accepts the `--filename,-f` flag such as: `kubectl {create,get,delete,describe,rollout}` etc. The `--recursive` flag also works when multiple `-f` arguments are provided: @@ -427,12 +427,20 @@ We'll guide you through how to create and update applications with Deployments. 
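To make the FlowSchema and PriorityLevelConfiguration pairing described in the flow-control changes above more concrete, here is a hedged sketch. It assumes the alpha `flowcontrol.apiserver.k8s.io/v1alpha1` API is enabled in your cluster and that a PriorityLevelConfiguration named `workload-low` exists; the FlowSchema name, namespace, and service account are illustrative.

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: FlowSchema
metadata:
  name: example-controller          # hypothetical name
spec:
  matchingPrecedence: 8000          # FlowSchemas with smaller numbers are matched first
  priorityLevelConfiguration:
    name: workload-low              # assumed to already exist in the cluster
  distinguisherMethod:
    type: ByUser                    # fair queuing treats each user as a separate flow
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: example-controller    # hypothetical service account
        namespace: example
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      clusterScope: true
      namespaces: ["*"]
```

Requests from that service account would then be classified into the `workload-low` priority level and queued fairly alongside other flows at the same level.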
Let's say you were running version 1.14.2 of nginx: ```shell -kubectl run my-nginx --image=nginx:1.14.2 --replicas=3 +kubectl create deployment my-nginx --image=nginx:1.14.2 ``` ```shell deployment.apps/my-nginx created ``` +with 3 replicas (so the old and new revisions can coexist): +```shell +kubectl scale deployment my-nginx --current-replicas=1 --replicas=3 +``` +``` +deployment.apps/my-nginx scaled +``` + To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above. ```shell @@ -445,7 +453,7 @@ That's it! The Deployment will declaratively update the deployed nginx applicati {{% capture whatsnext %}} -- [Learn about how to use `kubectl` for application introspection and debugging.](/docs/tasks/debug-application-cluster/debug-application-introspection/) -- [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/) +- Learn about [how to use `kubectl` for application introspection and debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/). +- See [Configuration Best Practices and Tips](/docs/concepts/configuration/overview/). {{% /capture %}} diff --git a/content/en/docs/concepts/cluster-administration/monitoring.md b/content/en/docs/concepts/cluster-administration/monitoring.md index 92b74b6634c22..e02ac8231cad8 100644 --- a/content/en/docs/concepts/cluster-administration/monitoring.md +++ b/content/en/docs/concepts/cluster-administration/monitoring.md @@ -25,6 +25,7 @@ Metrics in Kubernetes control plane are emitted in [prometheus format](https://p In most cases metrics are available on `/metrics` endpoint of the HTTP server. For components that doesn't expose endpoint by default it can be enabled using `--bind-address` flag. Examples of those components: + * {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} * {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}} * {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} diff --git a/content/en/docs/concepts/configuration/assign-pod-node.md b/content/en/docs/concepts/configuration/assign-pod-node.md index 2323cd76f4f93..3811f7530132b 100644 --- a/content/en/docs/concepts/configuration/assign-pod-node.md +++ b/content/en/docs/concepts/configuration/assign-pod-node.md @@ -5,7 +5,7 @@ reviewers: - bsalamat title: Assigning Pods to Nodes content_template: templates/concept -weight: 30 +weight: 50 --- @@ -160,9 +160,9 @@ You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, o If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod to be scheduled onto a candidate node. -If you specify multiple `nodeSelectorTerms` associated with `nodeAffinity` types, then the pod can be scheduled onto a node **only if all** `nodeSelectorTerms` can be satisfied. +If you specify multiple `nodeSelectorTerms` associated with `nodeAffinity` types, then the pod can be scheduled onto a node **if one of the** `nodeSelectorTerms` can be satisfied. -If you specify multiple `matchExpressions` associated with `nodeSelectorTerms`, then the pod can be scheduled onto a node **if one of** the `matchExpressions` is satisfied. +If you specify multiple `matchExpressions` associated with `nodeSelectorTerms`, then the pod can be scheduled onto a node **only if all** `matchExpressions` is satisfied. If you remove or change the label of the node where the pod is scheduled, the pod won't be removed. 
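To illustrate the corrected semantics above — `nodeSelectorTerms` are ORed, while the `matchExpressions` inside a single term are ANDed — here is a sketch of a Pod using node affinity; the `disktype` and `node-role.example.com/batch` labels and the image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity                   # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        # First term: BOTH expressions must match the node (ANDed)
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values: ["linux"]
          - key: disktype                    # hypothetical node label
            operator: In
            values: ["ssd"]
        # Second term: an alternative; matching EITHER term is enough (ORed)
        - matchExpressions:
          - key: node-role.example.com/batch # hypothetical node label
            operator: Exists
  containers:
  - name: demo
    image: registry.example/app:1.0          # illustrative image
```

A node with both `kubernetes.io/os=linux` and `disktype=ssd`, or any node carrying the `node-role.example.com/batch` label, is eligible for this Pod.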
In other words, the affinity selection works only at the time of scheduling the pod. diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md new file mode 100644 index 0000000000000..355386f3e7ce0 --- /dev/null +++ b/content/en/docs/concepts/configuration/configmap.md @@ -0,0 +1,169 @@ +--- +title: ConfigMaps +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +{{< glossary_definition term_id="configmap" prepend="A ConfigMap is" length="all" >}} + +{{< caution >}} +ConfigMap does not provide secrecy or encryption. +If the data you want to store are confidential, use a +{{< glossary_tooltip text="Secret" term_id="secret" >}} rather than a ConfigMap, +or use additional (third party) tools to keep your data private. +{{< /caution >}} + +{{% /capture %}} + +{{% capture body %}} +## Motivation + +Use a ConfigMap for setting configuration data separately from application code. + +For example, imagine that you are developing an application that you can run on your +own computer (for development) and in the cloud (to handle real traffic). +You write the code to +look in an environment variable named `DATABASE_HOST`. Locally, you set that variable +to `localhost`. In the cloud, you set it to refer to a Kubernetes +{{< glossary_tooltip text="Service" term_id="service" >}} that exposes the database +component to your cluster. + +This lets you fetch a container image running in the cloud and +debug the exact same code locally if needed. + +## ConfigMap object + +A ConfigMap is an API [object](/docs/concepts/overview/working-with-objects/kubernetes-objects/) +that lets you store configuration for other objects to use. Unlike most +Kubernetes objects that have a `spec`, a ConfigMap has a `data` section to +store items (keys) and their values. + +The name of a ConfigMap must be a valid +[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + +## ConfigMaps and Pods + +You can write a Pod `spec` that refers to a ConfigMap and configures the container(s) +in that Pod based on the data in the ConfigMap. The Pod and the ConfigMap must be in +the same {{< glossary_tooltip text="namespace" term_id="namespace" >}}. + +Here's an example ConfigMap that has some keys with single values, +and other keys where the value looks like a fragment of a configuration +format. + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: game-demo +data: + # property-like keys; each key maps to a simple value + player_initial_lives: 3 + ui_properties_file_name: "user-interface.properties" + # + # file-like keys + game.properties: | + enemy.types=aliens,monsters + player.maximum-lives=5 + user-interface.properties: | + color.good=purple + color.bad=yellow + allow.textmode=true +``` + +There are four different ways that you can use a ConfigMap to configure +a container inside a Pod: + +1. Command line arguments to the entrypoint of a container +1. Environment variables for a container +1. Add a file in read-only volume, for the application to read +1. Write code to run inside the Pod that uses the Kubernetes API to read a ConfigMap + +These different methods lend themselves to different ways of modeling +the data being consumed. +For the first three methods, the +{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} uses the data from +the Secret when it launches container(s) for a Pod. + +The fourth method means you have to write code to read the Secret and its data. 
+However, because you're using the Kubernetes API directly, your application can +subscribe to get updates whenever the ConfigMap changes, and react +when that happens. By accessing the Kubernetes API directly, this +technique also lets you access a ConfigMap in a different namespace. + +Here's an example Pod that uses values from `game-demo` to configure a Pod: +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: configmap-demo-pod +spec: + containers: + - name: demo + image: game.example/demo-game + env: + # Define the environment variable + - name: PLAYER_INITIAL_LIVES # Notice that the case is different here + # from the key name in the ConfigMap. + valueFrom: + configMapKeyRef: + name: game-demo # The ConfigMap this value comes from. + key: player_initial_lives # The key to fetch. + - name: UI_PROPERTIES_FILE_NAME + valueFrom: + configMapKeyRef: + name: game-demo + key: ui_properties_file_name + volumeMounts: + - name: config + mountPath: "/config" + readOnly: true + volumes: + # You set volumes at the Pod level, then mount them into containers inside that Pod + - name: config + configMap: + # Provide the name of the ConfigMap you want to mount. + name: game-demo +``` + + +A ConfigMap doesn't differentiate between single line property values and +multi-line file-like values. +What matters how Pods and other objects consume those values. +For this example, defining a volume and mounting it inside the `demo` +container as `/config` creates four files: + +- `/config/player_initial_lives` +- `/config/ui_properties_file_name` +- `/config/game.properties` +- `/config/user-interface.properties` + +If you want to make sure that `/config` only contains files with a +`.properties` extension, use two different ConfigMaps, and refer to both +ConfigMaps in the `spec` for a Pod. The first ConfigMap defines +`player_initial_lives` and `ui_properties_file_name`. The second +ConfigMap defines the files that the kubelet places into `/config`. + +{{< note >}} +The most common way to use ConfigMaps is to configure settings for +containers running in a Pod in the same namespace. You can also use a +ConfigMap separately. + +For example, you +might encounter {{< glossary_tooltip text="addons" term_id="addons" >}} +or {{< glossary_tooltip text="operators" term_id="operator-pattern" >}} that +adjust their behavior based on a ConfigMap. +{{< /note >}} + + +{{% /capture %}} +{{% capture whatsnext %}} + +* Read about [Secrets](/docs/concepts/configuration/secret/). +* Read [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/). +* Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for + separating code from configuration. 
+ +{{% /capture %}} diff --git a/content/en/docs/concepts/configuration/manage-compute-resources-container.md b/content/en/docs/concepts/configuration/manage-resources-containers.md similarity index 64% rename from content/en/docs/concepts/configuration/manage-compute-resources-container.md rename to content/en/docs/concepts/configuration/manage-resources-containers.md index 597d1e796080c..69ea4a255d9bc 100644 --- a/content/en/docs/concepts/configuration/manage-compute-resources-container.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -1,7 +1,7 @@ --- -title: Managing Compute Resources for Containers +title: Managing Resources for Containers content_template: templates/concept -weight: 20 +weight: 40 feature: title: Automatic bin packing description: > @@ -10,23 +10,48 @@ feature: {{% capture overview %}} -When you specify a [Pod](/docs/concepts/workloads/pods/pod/), you can optionally specify how -much CPU and memory (RAM) each Container needs. When Containers have resource -requests specified, the scheduler can make better decisions about which nodes to -place Pods on. And when Containers have their limits specified, contention for -resources on a node can be handled in a specified manner. For more details about -the difference between requests and limits, see -[Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md). +When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how +much of each resource a {{< glossary_tooltip text="Container" term_id="container" >}} needs. +The most common resources to specify are CPU and memory (RAM); there are others. + +When you specify the resource _request_ for Containers in a Pod, the scheduler uses this +information to decide which node to place the Pod on. When you specify a resource _limit_ +for a Container, the kubelet enforces those limits so that the running container is not +allowed to use more of that resource than the limit you set. The kubelet also reserves +at least the _request_ amount of that system resource specifically for that container +to use. {{% /capture %}} {{% capture body %}} +## Requests and limits + +If the node where a Pod is running has enough of a resource available, it's possible (and +allowed) for a container to use more resource than its `request` for that resource specifies. +However, a container is not allowed to use more than its resource `limit`. + +For example, if you set a `memory` request of 256 MiB for a container, and that container is in +a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the container can try to use +more RAM. + +If you set a `memory` limit of 4GiB for that Container, the kubelet (and +{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}) enforce the limit. +The runtime prevents the container from using more than the configured resource limit. For example: +when a process in the container tries to consume more than the allowed amount of memory, +the system kernel terminates the process that attempted the allocation, with an out of memory +(OOM) error. + +Limits can be implemented either reactively (the system intervenes once it sees a violation) +or by enforcement (the system prevents the container from ever exceeding the limit). Different +runtimes can have different ways to implement the same restrictions. + ## Resource types *CPU* and *memory* are each a *resource type*. A resource type has a base unit. 
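The request and limit behaviour described above can be sketched as a container `resources` stanza; the Pod name and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo               # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example/app:1.0 # illustrative image
    resources:
      requests:
        cpu: "500m"                 # used by the scheduler to pick a node; the kubelet reserves at least this much
        memory: "256Mi"
      limits:
        cpu: "1"                    # the kubelet and container runtime enforce these ceilings
        memory: "4Gi"
```

With this spec, the container may use more than 256 MiB when the node has spare memory, but a process that pushes usage past 4 GiB is terminated by the kernel with an out-of-memory error, matching the example in the text.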
-CPU is specified in units of cores, and memory is specified in units of bytes. +CPU represents compute processing and is specified in units of [Kubernetes CPUs](#meaning-of-cpu). +Memory is specified in units of bytes. If you're using Kubernetes v1.14 or newer, you can specify _huge page_ resources. Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory that are much larger than the default page size. @@ -64,15 +89,16 @@ is convenient to talk about Pod resource requests and limits. A *Pod resource request/limit* for a particular resource type is the sum of the resource requests/limits of that type for each Container in the Pod. +## Resource units in Kubernetes -## Meaning of CPU +### Meaning of CPU Limits and requests for CPU resources are measured in *cpu* units. One cpu, in Kubernetes, is equivalent to **1 vCPU/Core** for cloud providers and **1 hyperthread** on bare-metal Intel processors. Fractional requests are allowed. A Container with `spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much -CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the +CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the expression `100m`, which can be read as "one hundred millicpu". Some people say "one hundred millicores", and this is understood to mean the same thing. A request with a decimal point, like `0.1`, is converted to `100m` by the API, and @@ -82,7 +108,7 @@ be preferred. CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine. -## Meaning of memory +### Meaning of memory Limits and requests for `memory` are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: @@ -181,7 +207,7 @@ To determine whether a Container cannot be scheduled or is being killed due to resource limits, see the [Troubleshooting](#troubleshooting) section. -## Monitoring compute resource usage +### Monitoring compute & memory resource usage The resource usage of a Pod is reported as part of the Pod status. @@ -190,158 +216,98 @@ are available in your cluster, then Pod resource usage can be retrieved either from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api) directly or from your monitoring tools. -## Troubleshooting +## Local ephemeral storage -### My Pods are pending with event message failedScheduling + +{{< feature-state for_k8s_version="v1.10" state="beta" >}} -If the scheduler cannot find any node where a Pod can fit, the Pod remains -unscheduled until a place can be found. An event is produced each time the -scheduler fails to find a place for the Pod, like this: +Nodes have local ephemeral storage, backed by +locally-attached writeable devices or, sometimes, by RAM. +"Ephemeral" means that there is no long-term guarantee about durability. -```shell -kubectl describe pod frontend | grep -A 3 Events -``` -``` -Events: - FirstSeen LastSeen Count From Subobject PathReason Message - 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others -``` - -In the preceding example, the Pod named "frontend" fails to be scheduled due to -insufficient CPU resource on the node. Similar error messages can also suggest -failure due to insufficient memory (PodExceedsFreeMemory). 
In general, if a Pod -is pending with a message of this type, there are several things to try: - -- Add more nodes to the cluster. -- Terminate unneeded Pods to make room for pending Pods. -- Check that the Pod is not larger than all the nodes. For example, if all the - nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will - never be scheduled. +Pods use ephemeral local storage for scratch space, caching, and for logs. +The kubelet can provide scratch space to Pods using local ephemeral storage to +mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) + {{< glossary_tooltip term_id="volume" text="volumes" >}} into containers. -You can check node capacities and amounts allocated with the -`kubectl describe nodes` command. For example: +The kubelet also uses this kind of storage to hold +[node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level), +container images, and the writable layers of running containers. -```shell -kubectl describe nodes e2e-test-node-pool-4lw4 -``` -``` -Name: e2e-test-node-pool-4lw4 -[ ... lines removed for clarity ...] -Capacity: - cpu: 2 - memory: 7679792Ki - pods: 110 -Allocatable: - cpu: 1800m - memory: 7474992Ki - pods: 110 -[ ... lines removed for clarity ...] -Non-terminated Pods: (5 in total) - Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits - --------- ---- ------------ ---------- --------------- ------------- - kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%) - kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%) - kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%) - kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%) - kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) -Allocated resources: - (Total limits may be over 100 percent, i.e., overcommitted.) - CPU Requests CPU Limits Memory Requests Memory Limits - ------------ ---------- --------------- ------------- - 680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%) -``` +{{< caution >}} +If a node fails, the data in its ephemeral storage can be lost. +Your applications cannot expect any performance SLAs (disk IOPS for example) +from local ephemeral storage. +{{< /caution >}} -In the preceding output, you can see that if a Pod requests more than 1120m -CPUs or 6.23Gi of memory, it will not fit on the node. +As a beta feature, Kubernetes lets you track, reserve and limit the amount +of ephemeral local storage a Pod can consume. -By looking at the `Pods` section, you can see which Pods are taking up space on -the node. +### Configurations for local ephemeral storage -The amount of resources available to Pods is less than the node capacity, because -system daemons use a portion of the available resources. The `allocatable` field -[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) -gives the amount of resources that are available to Pods. For more information, see -[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md). +Kubernetes supports two ways to configure local ephemeral storage on a node: +{{< tabs name="local_storage_configurations" >}} +{{% tab name="Single filesystem" %}} +In this configuration, you place all different kinds of ephemeral local data +(`emptyDir` volumes, writeable layers, container images, logs) into one filesystem. 
+The most effective way to configure the kubelet means dedicating this filesystem +to Kubernetes (kubelet) data. -The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured -to limit the total amount of resources that can be consumed. If used in conjunction -with namespaces, it can prevent one team from hogging all the resources. +The kubelet also writes +[node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) +and treats these similarly to ephemeral local storage. -### My Container is terminated +The kubelet writes logs to files inside its configured log directory (`/var/log` +by default); and has a base directory for other locally stored data +(`/var/lib/kubelet` by default). -Your Container might get terminated because it is resource-starved. To check -whether a Container is being killed because it is hitting a resource limit, call -`kubectl describe pod` on the Pod of interest: +Typically, both `/var/lib/kubelet` and `/var/log` are on the system root filesystem, +and the kubelet is designed with that layout in mind. -```shell -kubectl describe pod simmemleak-hra99 -``` -``` -Name: simmemleak-hra99 -Namespace: default -Image(s): saadali/simmemleak -Node: kubernetes-node-tf0f/10.240.216.66 -Labels: name=simmemleak -Status: Running -Reason: -Message: -IP: 10.244.2.75 -Replication Controllers: simmemleak (1/1 replicas created) -Containers: - simmemleak: - Image: saadali/simmemleak - Limits: - cpu: 100m - memory: 50Mi - State: Running - Started: Tue, 07 Jul 2015 12:54:41 -0700 - Last Termination State: Terminated - Exit Code: 1 - Started: Fri, 07 Jul 2015 12:54:30 -0700 - Finished: Fri, 07 Jul 2015 12:54:33 -0700 - Ready: False - Restart Count: 5 -Conditions: - Type Status - Ready False -Events: - FirstSeen LastSeen Count From SubobjectPath Reason Message - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a -``` +Your node can have as many other filesystems, not used for Kubernetes, +as you like. +{{% /tab %}} +{{% tab name="Two filesystems" %}} +You have a filesystem on the node that you're using for ephemeral data that +comes from running Pods: logs, and `emptyDir` volumes. You can use this filesystem +for other data (for example: system logs not related to Kubernetes); it can even +be the root filesystem. -In the preceding example, the `Restart Count: 5` indicates that the `simmemleak` -Container in the Pod was terminated and restarted five times. +The kubelet also writes +[node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) +into the first filesystem, and treats these similarly to ephemeral local storage. 
-You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status -of previously terminated Containers: +You also use a separate filesystem, backed by a different logical storage device. +In this configuration, the directory where you tell the kubelet to place +container image layers and writeable layers is on this second filesystem. -```shell -kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99 -``` -``` -Container Name: simmemleak -LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]] -``` +The first filesystem does not hold any image layers or writeable layers. -You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory. +Your node can have as many other filesystems, not used for Kubernetes, +as you like. +{{% /tab %}} +{{< /tabs >}} -## Local ephemeral storage -{{< feature-state state="beta" >}} +The kubelet can measure how much local storage it is using. It does this provided +that: -Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_ for managing local ephemeral storage. In each Kubernetes node, kubelet's root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers. +- the `LocalStorageCapacityIsolation` + [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) + is enabled (the feature is on by default), and +- you have set up the node using one of the supported configurations + for local ephemeral storage. -This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition. Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope. +If you have a different configuration, then the kubelet does not apply resource +limits for ephemeral local storage. {{< note >}} -If an optional runtime partition is used, root partition will not hold any image layer or writable layers. +The kubelet tracks `tmpfs` emptyDir volumes as container memory use, rather +than as local ephemeral storage. {{< /note >}} -### Requests and limits setting for local ephemeral storage -Each Container of a Pod can specify one or more of the following: +### Setting requests and limits for local ephemeral storage + +You can use _ephemeral-storage_ for managing local ephemeral storage. Each Container of a Pod can specify one or more of the following: * `spec.containers[].resources.limits.ephemeral-storage` * `spec.containers[].resources.requests.ephemeral-storage` @@ -355,7 +321,7 @@ Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi ``` -For example, the following Pod has two Containers. Each Container has a request of 2GiB of local ephemeral storage. Each Container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of storage. +In the following example, the Pod has two Containers. Each Container has a request of 2GiB of local ephemeral storage. 
Each Container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of local ephemeral storage. ```yaml apiVersion: v1 @@ -386,77 +352,113 @@ spec: ### How Pods with ephemeral-storage requests are scheduled When you create a Pod, the Kubernetes scheduler selects a node for the Pod to -run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see ["Node Allocatable"](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). +run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). The scheduler ensures that the sum of the resource requests of the scheduled Containers is less than the capacity of the node. -### How Pods with ephemeral-storage limits run +### Ephemeral storage consumption management {#resource-emphemeralstorage-consumption} -For container-level isolation, if a Container's writable layer and logs usage exceeds its storage limit, the Pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the limit, the Pod will be evicted. +If the kubelet is managing local ephemeral storage as a resource, then the +kubelet measures storage use in: -### Monitoring ephemeral-storage consumption +- `emptyDir` volumes, except _tmpfs_ `emptyDir` volumes +- directories holding node-level logs +- writeable container layers -When local ephemeral storage is used, it is monitored on an ongoing -basis by the kubelet. The monitoring is performed by scanning each -emptyDir volume, log directories, and writable layers on a periodic -basis. Starting with Kubernetes 1.15, emptyDir volumes (but not log -directories or writable layers) may, at the cluster operator's option, -be managed by use of [project -quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html). -Project quotas were originally implemented in XFS, and have more -recently been ported to ext4fs. Project quotas can be used for both -monitoring and enforcement; as of Kubernetes 1.16, they are available -as alpha functionality for monitoring only. +If a Pod is using more ephemeral storage than you allow it to, the kubelet +sets an eviction signal that triggers Pod eviction. -Quotas are faster and more accurate than directory scanning. When a -directory is assigned to a project, all files created under a -directory are created in that project, and the kernel merely has to -keep track of how many blocks are in use by files in that project. If -a file is created and deleted, but with an open file descriptor, it -continues to consume space. This space will be tracked by the quota, -but will not be seen by a directory scan. +For container-level isolation, if a Container's writable layer and log +usage exceeds its storage limit, the kubelet marks the Pod for eviction. -Kubernetes uses project IDs starting from 1048576. The IDs in use are -registered in `/etc/projects` and `/etc/projid`. If project IDs in -this range are used for other purposes on the system, those project -IDs must be registered in `/etc/projects` and `/etc/projid` to prevent -Kubernetes from using them. +For pod-level isolation the kubelet works out an overall Pod storage limit by +summing the limits for the containers in that Pod. 
In this case, if the sum of +the local ephemeral storage usage from all containers and also the Pod's `emptyDir` +volumes exceeds the overall Pod storage limit, then the kubelet also marks the Pod +for eviction. -To enable use of project quotas, the cluster operator must do the -following: +{{< caution >}} +If the kubelet is not measuring local ephemeral storage, then a Pod +that exceeds its local storage limit will not be evicted for breaching +local storage resource limits. -* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true` - feature gate in the kubelet configuration. This defaults to `false` - in Kubernetes 1.16, so must be explicitly set to `true`. +However, if the filesystem space for writeable container layers, node-level logs, +or `emptyDir` volumes falls low, the node +{{< glossary_tooltip text="taints" term_id="taint" >}} itself as short on local storage +and this taint triggers eviction for any Pods that don't specifically tolerate the taint. -* Ensure that the root partition (or optional runtime partition) is - built with project quotas enabled. All XFS filesystems support - project quotas, but ext4 filesystems must be built specially. +See the supported [configurations](#configurations-for-local-ephemeral-storage) +for ephemeral local storage. +{{< /caution >}} -* Ensure that the root partition (or optional runtime partition) is - mounted with project quotas enabled. +The kubelet supports different ways to measure Pod storage use: -#### Building and mounting filesystems with project quotas enabled +{{< tabs name="resource-emphemeralstorage-measurement" >}} +{{% tab name="Periodic scanning" %}} +The kubelet performs regular, schedules checks that scan each +`emptyDir` volume, container log directory, and writeable container layer. -XFS filesystems require no special action when building; they are -automatically built with project quotas enabled. +The scan measures how much space is used. -Ext4fs filesystems must be built with quotas enabled, then they must -be enabled in the filesystem: +{{< note >}} +In this mode, the kubelet does not track open file descriptors +for deleted files. -``` -% sudo mkfs.ext4 other_ext4fs_args... -E quotatype=prjquota /dev/block_device -% sudo tune2fs -O project -Q prjquota /dev/block_device +If you (or a container) create a file inside an `emptyDir` volume, +something then opens that file, and you delete the file while it is +still open, then the inode for the deleted file stays until you close +that file but the kubelet does not categorize the space as in use. +{{< /note >}} +{{% /tab %}} +{{% tab name="Filesystem project quota" %}} -``` +{{< feature-state for_k8s_version="v1.15" state="alpha" >}} -To mount the filesystem, both ext4fs and XFS require the `prjquota` -option set in `/etc/fstab`: +Project quotas are an operating-system level feature for managing +storage use on filesystems. With Kubernetes, you can enable project +quotas for monitoring storage use. Make sure that the filesystem +backing the `emptyDir` volumes, on the node, provides project quota support. +For example, XFS and ext4fs offer project quotas. -``` -/dev/block_device /var/kubernetes_data defaults,prjquota 0 0 -``` +{{< note >}} +Project quotas let you monitor storage use; they do not enforce limits. +{{< /note >}} +Kubernetes uses project IDs starting from `1048576`. The IDs in use are +registered in `/etc/projects` and `/etc/projid`. 
If project IDs in +this range are used for other purposes on the system, those project +IDs must be registered in `/etc/projects` and `/etc/projid` so that +Kubernetes does not use them. + +Quotas are faster and more accurate than directory scanning. When a +directory is assigned to a project, all files created under a +directory are created in that project, and the kernel merely has to +keep track of how many blocks are in use by files in that project. +If a file is created and deleted, but has an open file descriptor, +it continues to consume space. Quota tracking records that space accurately +whereas directory scans overlook the storage used by deleted files. + +If you want to use project quotas, you should: + +* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true` + [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) + in the kubelet configuration. + +* Ensure that the the root filesystem (or optional runtime filesystem) + has project quotas enabled. All XFS filesystems support project quotas. + For ext4 filesystems, you need to enable the project quota tracking feature + while the filesystem is not mounted. + ```bash + # For ext4, with /dev/block-device not mounted + sudo tune2fs -O project -Q prjquota /dev/block-device + ``` + +* Ensure that the root filesystem (or optional runtime filesystem) is + mounted with project quotas enabled. For both XFS and ext4fs, the + mount option is named `prjquota`. + +{{% /tab %}} +{{< /tabs >}} ## Extended resources @@ -597,6 +599,145 @@ spec: example.com/foo: 1 ``` +## Troubleshooting + +### My Pods are pending with event message failedScheduling + +If the scheduler cannot find any node where a Pod can fit, the Pod remains +unscheduled until a place can be found. An event is produced each time the +scheduler fails to find a place for the Pod, like this: + +```shell +kubectl describe pod frontend | grep -A 3 Events +``` +``` +Events: + FirstSeen LastSeen Count From Subobject PathReason Message + 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others +``` + +In the preceding example, the Pod named "frontend" fails to be scheduled due to +insufficient CPU resource on the node. Similar error messages can also suggest +failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod +is pending with a message of this type, there are several things to try: + +- Add more nodes to the cluster. +- Terminate unneeded Pods to make room for pending Pods. +- Check that the Pod is not larger than all the nodes. For example, if all the + nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will + never be scheduled. + +You can check node capacities and amounts allocated with the +`kubectl describe nodes` command. For example: + +```shell +kubectl describe nodes e2e-test-node-pool-4lw4 +``` +``` +Name: e2e-test-node-pool-4lw4 +[ ... lines removed for clarity ...] +Capacity: + cpu: 2 + memory: 7679792Ki + pods: 110 +Allocatable: + cpu: 1800m + memory: 7474992Ki + pods: 110 +[ ... lines removed for clarity ...] +Non-terminated Pods: (5 in total) + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits + --------- ---- ------------ ---------- --------------- ------------- + kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%) + kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%) + kube-system kube-proxy-e2e-test-... 
100m (5%) 0 (0%) 0 (0%) 0 (0%) + kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%) + kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) +Allocated resources: + (Total limits may be over 100 percent, i.e., overcommitted.) + CPU Requests CPU Limits Memory Requests Memory Limits + ------------ ---------- --------------- ------------- + 680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%) +``` + +In the preceding output, you can see that if a Pod requests more than 1120m +CPUs or 6.23Gi of memory, it will not fit on the node. + +By looking at the `Pods` section, you can see which Pods are taking up space on +the node. + +The amount of resources available to Pods is less than the node capacity, because +system daemons use a portion of the available resources. The `allocatable` field +[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) +gives the amount of resources that are available to Pods. For more information, see +[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md). + +The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured +to limit the total amount of resources that can be consumed. If used in conjunction +with namespaces, it can prevent one team from hogging all the resources. + +### My Container is terminated + +Your Container might get terminated because it is resource-starved. To check +whether a Container is being killed because it is hitting a resource limit, call +`kubectl describe pod` on the Pod of interest: + +```shell +kubectl describe pod simmemleak-hra99 +``` +``` +Name: simmemleak-hra99 +Namespace: default +Image(s): saadali/simmemleak +Node: kubernetes-node-tf0f/10.240.216.66 +Labels: name=simmemleak +Status: Running +Reason: +Message: +IP: 10.244.2.75 +Replication Controllers: simmemleak (1/1 replicas created) +Containers: + simmemleak: + Image: saadali/simmemleak + Limits: + cpu: 100m + memory: 50Mi + State: Running + Started: Tue, 07 Jul 2015 12:54:41 -0700 + Last Termination State: Terminated + Exit Code: 1 + Started: Fri, 07 Jul 2015 12:54:30 -0700 + Finished: Fri, 07 Jul 2015 12:54:33 -0700 + Ready: False + Restart Count: 5 +Conditions: + Type Status + Ready False +Events: + FirstSeen LastSeen Count From SubobjectPath Reason Message + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a +``` + +In the preceding example, the `Restart Count: 5` indicates that the `simmemleak` +Container in the Pod was terminated and restarted five times. 
+ +You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status +of previously terminated Containers: + +```shell +kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99 +``` +``` +Container Name: simmemleak +LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]] +``` + +You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory. + {{% /capture %}} @@ -608,8 +749,13 @@ spec: * Get hands-on experience [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/). -* [Container API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) +* For more details about the difference between requests and limits, see + [Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md). + +* Read the [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) API reference + +* Read the [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) API reference -* [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) +* Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS {{% /capture %}} diff --git a/content/en/docs/concepts/configuration/pod-overhead.md b/content/en/docs/concepts/configuration/pod-overhead.md index 0e796df9ffc49..9661264820354 100644 --- a/content/en/docs/concepts/configuration/pod-overhead.md +++ b/content/en/docs/concepts/configuration/pod-overhead.md @@ -5,7 +5,7 @@ reviewers: - tallclair title: Pod Overhead content_template: templates/concept -weight: 20 +weight: 50 --- {{% capture overview %}} diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md index 1d24c2f094e67..c9bddd7e3eb90 100644 --- a/content/en/docs/concepts/configuration/pod-priority-preemption.md +++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md @@ -9,7 +9,7 @@ weight: 70 {{% capture overview %}} -{{< feature-state for_k8s_version="1.14" state="stable" >}} +{{< feature-state for_k8s_version="v1.14" state="stable" >}} [Pods](/docs/user-guide/pods) can have _priority_. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the @@ -51,13 +51,6 @@ Kubernetes already ships with two PriorityClasses: These are common classes and are used to [ensure that critical components are always scheduled first](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/). {{< /note >}} -If you try the feature and then decide to disable it, you must remove the -PodPriority command-line flag or set it to `false`, and then restart the API -server and scheduler. After the feature is disabled, the existing Pods keep -their priority fields, but preemption is disabled, and priority fields are -ignored. If the feature is disabled, you cannot set `priorityClassName` in new -Pods. - ## How to disable preemption {{< caution >}} @@ -120,7 +113,7 @@ cluster when they should use this PriorityClass. 
### Notes about PodPriority and existing clusters -- If you upgrade your existing cluster and enable this feature, the priority +- If you upgrade an existing cluster without this feature, the priority of your existing Pods is effectively zero. - Addition of a PriorityClass with `globalDefault` set to `true` does not @@ -145,7 +138,7 @@ description: "This priority class should be used for XYZ service pods only." ## Non-preempting PriorityClass {#non-preempting-priority-class} -{{< feature-state for_k8s_version="1.15" state="alpha" >}} +{{< feature-state for_k8s_version="v1.15" state="alpha" >}} Pods with `PreemptionPolicy: Never` will be placed in the scheduling queue ahead of lower-priority pods, @@ -409,7 +402,7 @@ See [evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods) for more details. -kubelet out-of-resource eviction does not evict Pods wheir their +kubelet out-of-resource eviction does not evict Pods when their usage does not exceed their requests. If a Pod with lower priority is not exceeding its requests, it won't be evicted. Another Pod with higher priority that exceeds its requests may be evicted. diff --git a/content/en/docs/concepts/configuration/resource-bin-packing.md b/content/en/docs/concepts/configuration/resource-bin-packing.md index 207460d7ad66b..0d475791ce5ba 100644 --- a/content/en/docs/concepts/configuration/resource-bin-packing.md +++ b/content/en/docs/concepts/configuration/resource-bin-packing.md @@ -5,12 +5,12 @@ reviewers: - ahg-g title: Resource Bin Packing for Extended Resources content_template: templates/concept -weight: 10 +weight: 50 --- {{% capture overview %}} -{{< feature-state for_k8s_version="1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.16" state="alpha" >}} The kube-scheduler can be configured to enable bin packing of resources along with extended resources using `RequestedToCapacityRatioResourceAllocation` priority function. Priority functions can be used to fine-tune the kube-scheduler as per custom needs. diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index dc9a214fa320d..0fa5b13efa5eb 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -7,7 +7,7 @@ feature: title: Secret and configuration management description: > Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration. -weight: 50 +weight: 30 --- {{% capture overview %}} @@ -81,13 +81,19 @@ The output is similar to: secret "db-user-pass" created ``` +Default key name is the filename. You may optionally set the key name using `[--from-file=[key=]source]`. + +```shell +kubectl create secret generic db-user-pass --from-file=username=./username.txt --from-file=password=./password.txt +``` + {{< note >}} -Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. +Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. In most shells, the easiest way to escape the password is to surround it with single quotes (`'`). 
-For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: +For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way: ```shell -kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' +kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb=' ``` You do not need to escape special characters in passwords from files (`--from-file`). @@ -854,6 +860,43 @@ start until all the Pod's volumes are mounted. ## Use cases +### Use-Case: As container environment variables + +Create a secret +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + USER_NAME: YWRtaW4= + PASSWORD: MWYyZDFlMmU2N2Rm +``` + +Create the Secret: +```shell +kubectl apply -f mysecret.yaml +``` + +Use `envFrom` to define all of the Secret’s data as container environment variables. The key from the Secret becomes the environment variable name in the Pod. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: secret-test-pod +spec: + containers: + - name: test-container + image: k8s.gcr.io/busybox + command: [ "/bin/sh", "-c", "env" ] + envFrom: + - secretRef: + name: mysecret + restartPolicy: Never +``` + ### Use-Case: Pod with ssh keys Create a secret containing some ssh keys: @@ -937,12 +980,12 @@ secret "test-db-secret" created ``` {{< note >}} -Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. +Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. In most shells, the easiest way to escape the password is to surround it with single quotes (`'`). -For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: +For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way: ```shell -kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' +kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb=' ``` You do not need to escape special characters in passwords from files (`--from-file`). diff --git a/content/en/docs/concepts/configuration/taint-and-toleration.md b/content/en/docs/concepts/configuration/taint-and-toleration.md index 2026390eff955..f2a0befec8078 100644 --- a/content/en/docs/concepts/configuration/taint-and-toleration.md +++ b/content/en/docs/concepts/configuration/taint-and-toleration.md @@ -10,7 +10,7 @@ weight: 40 {{% capture overview %}} -Node affinity, described [here](/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature), +Node affinity, described [here](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity), is a property of *pods* that *attracts* them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a *node* to *repel* a set of pods. @@ -202,7 +202,7 @@ when there are node problems, which is described in the next section. 
## Taint based Evictions -{{< feature-state for_k8s_version="1.18" state="stable" >}} +{{< feature-state for_k8s_version="v1.18" state="stable" >}} Earlier we mentioned the `NoExecute` taint effect, which affects pods that are already running on the node as follows diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index f2d4b814ad09e..976449d0cbf81 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -19,7 +19,7 @@ methods for adding custom resources and how to choose between them. ## Custom resources A *resource* is an endpoint in the [Kubernetes API](/docs/reference/using-api/api-overview/) that stores a collection of -[API objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) of a certain kind. For example, the built-in *pods* resource contains a collection of Pod objects. +[API objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) of a certain kind; for example, the built-in *pods* resource contains a collection of Pod objects. A *custom resource* is an extension of the Kubernetes API that is not necessarily available in a default Kubernetes installation. It represents a customization of a particular Kubernetes installation. However, @@ -81,7 +81,7 @@ Signs that your API might not be declarative include: - The client says "do this", and then gets a synchronous response back when it is done. - The client says "do this", and then gets an operation ID back, and has to check a separate Operation object to determine completion of the request. - You talk about Remote Procedure Calls (RPCs). - - Directly storing large amounts of data (e.g. > a few kB per object, or >1000s of objects). + - Directly storing large amounts of data; for example, > a few kB per object, or > 1000s of objects. - High bandwidth access (10s of requests per second sustained) needed. - Store end-user data (such as images, PII, etc.) or other large-scale data processed by applications. - The natural operations on the objects are not CRUD-y. @@ -105,7 +105,7 @@ Use a [secret](/docs/concepts/configuration/secret/) for sensitive data, which i Use a custom resource (CRD or Aggregated API) if most of the following apply: * You want to use Kubernetes client libraries and CLIs to create and update the new resource. -* You want top-level support from kubectl (for example: `kubectl get my-object object-name`). +* You want top-level support from `kubectl`; for example, `kubectl get my-object object-name`. * You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa. * You want to write automation that handles updates to the object. * You want to use Kubernetes API conventions like `.spec`, `.status`, and `.metadata`. @@ -120,9 +120,9 @@ Kubernetes provides two ways to add custom resources to your cluster: Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised. -Aggregated APIs are subordinate APIServers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended. 
+Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended. -CRDs allow users to create new types of resources without adding another APIserver. You do not need to understand API Aggregation to use CRDs. +CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs. Regardless of how they are installed, the new resources are referred to as Custom Resources to distinguish them from built-in Kubernetes resources (like pods). @@ -168,42 +168,42 @@ CRDs are easier to create than Aggregated APIs. | CRDs | Aggregated API | | --------------------------- | -------------- | | Do not require programming. Users can choose any language for a CRD controller. | Requires programming in Go and building binary and image. | -| No additional service to run; CRs are handled by API Server. | An additional service to create and that could fail. | -| No ongoing support once the CRD is created. Any bug fixes are picked up as part of normal Kubernetes Master upgrades. | May need to periodically pickup bug fixes from upstream and rebuild and update the Aggregated APIserver. | -| No need to handle multiple versions of your API. For example: when you control the client for this resource, you can upgrade it in sync with the API. | You need to handle multiple versions of your API, for example: when developing an extension to share with the world. | +| No additional service to run; CRDs are handled by API server. | An additional service to create and that could fail. | +| No ongoing support once the CRD is created. Any bug fixes are picked up as part of normal Kubernetes Master upgrades. | May need to periodically pickup bug fixes from upstream and rebuild and update the Aggregated API server. | +| No need to handle multiple versions of your API; for example, when you control the client for this resource, you can upgrade it in sync with the API. | You need to handle multiple versions of your API; for example, when developing an extension to share with the world. | ### Advanced features and flexibility -Aggregated APIs offer more advanced API features and customization of other features, for example: the storage layer. +Aggregated APIs offer more advanced API features and customization of other features; for example, the storage layer. | Feature | Description | CRDs | Aggregated API | | ------- | ----------- | ---- | -------------- | | Validation | Help users prevent errors and allow you to evolve your API independently of your clients. These features are most useful when there are many clients who can't all update at the same time. | Yes. Most validation can be specified in the CRD using [OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation). Any other validations supported by addition of a [Validating Webhook](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook-alpha-in-1-8-beta-in-1-9). 
| Yes, arbitrary validation checks | -| Defaulting | See above | Yes, either via [OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#defaulting) `default` keyword (GA in 1.17), or via a [Mutating Webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook) (though this will not be run when reading from etcd for old objects) | Yes | +| Defaulting | See above | Yes, either via [OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#defaulting) `default` keyword (GA in 1.17), or via a [Mutating Webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook) (though this will not be run when reading from etcd for old objects). | Yes | | Multi-versioning | Allows serving the same object through two API versions. Can help ease API changes like renaming fields. Less important if you control your client versions. | [Yes](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning) | Yes | -| Custom Storage | If you need storage with a different performance mode (for example, time-series database instead of key-value store) or isolation for security (for example, encryption secrets or different | No | Yes | +| Custom Storage | If you need storage with a different performance mode (for example, a time-series database instead of key-value store) or isolation for security (for example, encryption of sensitive information, etc.) | No | Yes | | Custom Business Logic | Perform arbitrary checks or actions when creating, reading, updating or deleting an object | Yes, using [Webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks). | Yes | | Scale Subresource | Allows systems like HorizontalPodAutoscaler and PodDisruptionBudget interact with your new resource | [Yes](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#scale-subresource) | Yes | | Status Subresource | Allows fine-grained access control where user writes the spec section and the controller writes the status section. Allows incrementing object Generation on custom resource data mutation (requires separate spec and status sections in the resource) | [Yes](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#status-subresource) | Yes | | Other Subresources | Add operations other than CRUD, such as "logs" or "exec". | No | Yes | | strategic-merge-patch | The new endpoints support PATCH with `Content-Type: application/strategic-merge-patch+json`. Useful for updating objects that may be modified both locally, and by the server. For more information, see ["Update API Objects in Place Using kubectl patch"](/docs/tasks/run-application/update-api-object-kubectl-patch/) | No | Yes | | Protocol Buffers | The new resource supports clients that want to use Protocol Buffers | No | Yes | -| OpenAPI Schema | Is there an OpenAPI (swagger) schema for the types that can be dynamically fetched from the server? Is the user protected from misspelling field names by ensuring only allowed fields are set? Are types enforced (in other words, don't put an `int` in a `string` field?) | Yes, based on the [OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) schema (GA in 1.16) | Yes | +| OpenAPI Schema | Is there an OpenAPI (swagger) schema for the types that can be dynamically fetched from the server? 
Is the user protected from misspelling field names by ensuring only allowed fields are set? Are types enforced (in other words, don't put an `int` in a `string` field?) | Yes, based on the [OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) schema (GA in 1.16). | Yes | ### Common Features -When you create a custom resource, either via a CRDs or an AA, you get many features for your API, compared to implementing it outside the Kubernetes platform: +When you create a custom resource, either via a CRD or an AA, you get many features for your API, compared to implementing it outside the Kubernetes platform: | Feature | What it does | | ------- | ------------ | | CRUD | The new endpoints support CRUD basic operations via HTTP and `kubectl` | | Watch | The new endpoints support Kubernetes Watch operations via HTTP | -| Discovery | Clients like kubectl and dashboard automatically offer list, display, and field edit operations on your resources | +| Discovery | Clients like `kubectl` and dashboard automatically offer list, display, and field edit operations on your resources | | json-patch | The new endpoints support PATCH with `Content-Type: application/json-patch+json` | | merge-patch | The new endpoints support PATCH with `Content-Type: application/merge-patch+json` | | HTTPS | The new endpoints uses HTTPS | -| Built-in Authentication | Access to the extension uses the core apiserver (aggregation layer) for authentication | -| Built-in Authorization | Access to the extension can reuse the authorization used by the core apiserver (e.g. RBAC) | +| Built-in Authentication | Access to the extension uses the core API server (aggregation layer) for authentication | +| Built-in Authorization | Access to the extension can reuse the authorization used by the core API server; for example, RBAC. | | Finalizers | Block deletion of extension resources until external cleanup happens. | | Admission Webhooks | Set default values and validate extension resources during any create/update/delete operation. | | UI/CLI Display | Kubectl, dashboard can display extension resources. | @@ -219,7 +219,7 @@ There are several points to be aware of before adding a custom resource to your While creating a CRD does not automatically add any new points of failure (for example, by causing third party code to run on your API server), packages (for example, Charts) or other installation bundles often include CRDs as well as a Deployment of third-party code that implements the business logic for a new custom resource. -Installing an Aggregated APIserver always involves running a new Deployment. +Installing an Aggregated API server always involves running a new Deployment. ### Storage @@ -229,7 +229,7 @@ Aggregated API servers may use the same storage as the main API server, in which ### Authentication, authorization, and auditing -CRDs always use the same authentication, authorization, and audit logging as the built-in resources of your API Server. +CRDs always use the same authentication, authorization, and audit logging as the built-in resources of your API server. If you use RBAC for authorization, most RBAC roles will not grant access to the new resources (except the cluster-admin role or any role created with wildcard rules). You'll need to explicitly grant access to the new resources. CRDs and Aggregated APIs often come bundled with new role definitions for the types they add. 
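
As a sketch of what such a grant might look like, the following ClusterRole gives full access to a hypothetical `crontabs` custom resource in the illustrative `stable.example.com` API group (the group, resource, and role names here are examples, not taken from this page):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crontab-editor                  # illustrative role name
rules:
- apiGroups: ["stable.example.com"]     # API group declared by the CRD or aggregated API
  resources: ["crontabs"]               # plural resource name of the custom resource
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

You would then bind a role like this to users or service accounts with a RoleBinding or ClusterRoleBinding, just as you would for built-in resources.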
@@ -237,11 +237,11 @@ Aggregated API servers may or may not use the same authentication, authorization ## Accessing a custom resource -Kubernetes [client libraries](/docs/reference/using-api/client-libraries/) can be used to access custom resources. Not all client libraries support custom resources. The go and python client libraries do. +Kubernetes [client libraries](/docs/reference/using-api/client-libraries/) can be used to access custom resources. Not all client libraries support custom resources. The _Go_ and _Python_ client libraries do. When you add a custom resource, you can access it using: -- kubectl +- `kubectl` - The kubernetes dynamic client. - A REST client that you write. - A client generated using [Kubernetes client generation tools](https://github.com/kubernetes/code-generator) (generating one is an advanced undertaking, but some projects may provide a client along with the CRD or AA). diff --git a/content/en/docs/concepts/overview/components.md b/content/en/docs/concepts/overview/components.md index c6a00b3a43782..71b95fc7878dd 100644 --- a/content/en/docs/concepts/overview/components.md +++ b/content/en/docs/concepts/overview/components.md @@ -25,10 +25,10 @@ Here's the diagram of a Kubernetes cluster with all the components tied together {{% capture body %}} ## Control Plane Components -The Control Plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new {{< glossary_tooltip text="pod" term_id="pod">}} when a deployment's `replicas` field is unsatisfied). +The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new {{< glossary_tooltip text="pod" term_id="pod">}} when a deployment's `replicas` field is unsatisfied). -Control Plane components can be run on any machine in the cluster. However, -for simplicity, set up scripts typically start all Control Plane components on +Control plane components can be run on any machine in the cluster. However, +for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine. See [Building High-Availability Clusters](/docs/admin/high-availability/) for an example multi-master-VM setup. @@ -50,26 +50,29 @@ the same machine, and do not run user containers on this machine. See These controllers include: - * Node Controller: Responsible for noticing and responding when nodes go down. - * Replication Controller: Responsible for maintaining the correct number of pods for every replication + * Node controller: Responsible for noticing and responding when nodes go down. + * Replication controller: Responsible for maintaining the correct number of pods for every replication controller object in the system. - * Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods). - * Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces. + * Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods). + * Service Account & Token controllers: Create default accounts and API access tokens for new namespaces. ### cloud-controller-manager -[cloud-controller-manager](/docs/tasks/administer-cluster/running-cloud-controller/) runs controllers that interact with the underlying cloud providers. 
The cloud-controller-manager binary is an alpha feature introduced in Kubernetes release 1.6. +{{< glossary_definition term_id="cloud-controller-manager" length="short" >}} -cloud-controller-manager runs cloud-provider-specific controller loops only. You must disable these controller loops in the kube-controller-manager. You can disable the controller loops by setting the `--cloud-provider` flag to `external` when starting the kube-controller-manager. +The cloud-controller-manager only runs controllers that are specific to your cloud provider. +If you are running Kubernetes on your own premises, or in a learning environment inside your +own PC, the cluster does not have a cloud controller manager. -cloud-controller-manager allows the cloud vendor's code and the Kubernetes code to evolve independently of each other. In prior releases, the core Kubernetes code was dependent upon cloud-provider-specific code for functionality. In future releases, code specific to cloud vendors should be maintained by the cloud vendor themselves, and linked to cloud-controller-manager while running Kubernetes. +As with the kube-controller-manager, the cloud-controller-manager combines several logically +independent control loops into a single binary that you run as a single process. You can +scale horizontally (run more than one copy) to improve performance or to help tolerate failures. -The following controllers have cloud provider dependencies: +The following controllers can have cloud provider dependencies: - * Node Controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding - * Route Controller: For setting up routes in the underlying cloud infrastructure - * Service Controller: For creating, updating and deleting cloud provider load balancers - * Volume Controller: For creating, attaching, and mounting volumes, and interacting with the cloud provider to orchestrate volumes + * Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding + * Route controller: For setting up routes in the underlying cloud infrastructure + * Service controller: For creating, updating and deleting cloud provider load balancers ## Node Components @@ -123,6 +126,6 @@ saving container logs to a central log store with search/browsing interface. {{% capture whatsnext %}} * Learn about [Nodes](/docs/concepts/architecture/nodes/) * Learn about [Controllers](/docs/concepts/architecture/controller/) -* Learn about [kube-scheduler](/docs/concepts/scheduling/kube-scheduler/) +* Learn about [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/) * Read etcd's official [documentation](https://etcd.io/docs/) {{% /capture %}} diff --git a/content/en/docs/concepts/overview/what-is-kubernetes.md b/content/en/docs/concepts/overview/what-is-kubernetes.md index 4dbb673382266..fbe74e4337921 100644 --- a/content/en/docs/concepts/overview/what-is-kubernetes.md +++ b/content/en/docs/concepts/overview/what-is-kubernetes.md @@ -2,7 +2,7 @@ reviewers: - bgrant0607 - mikedanese -title: What is Kubernetes +title: What is Kubernetes? description: > Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. 
content_template: templates/concept diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md index 7364596306f63..b9df009db7cf4 100644 --- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -33,7 +33,7 @@ providing a description of the characteristics you want the resource to have: its _desired state_. The `status` describes the _current state_ of the object, supplied and updated -by the Kubernetes and its components. The Kubernetes +by the Kubernetes system and its components. The Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} continually and actively manages every object's actual state to match the desired state you supplied. diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index 60543604e4b1b..c4415e8a1b97b 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -55,6 +55,7 @@ NAME STATUS AGE default Active 1d kube-system Active 1d kube-public Active 1d +kube-node-lease Active 1d ``` Kubernetes starts with three initial namespaces: diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md index 971a7a372c9f8..8bea6c88e7670 100644 --- a/content/en/docs/concepts/policy/limit-range.md +++ b/content/en/docs/concepts/policy/limit-range.md @@ -9,7 +9,7 @@ weight: 10 {{% capture overview %}} By default, containers run with unbounded [compute resources](/docs/user-guide/compute-resources) on a Kubernetes cluster. -With resource quotas, cluster administrators can restrict resource consumption and creation on a namespace basis. +With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis. Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace. {{% /capture %}} @@ -38,7 +38,7 @@ The name of a LimitRange object must be a valid ### Overview of Limit Range -- The administrator creates one `LimitRange` in one namespace. +- The administrator creates one LimitRange in one namespace. - Users create resources like Pods, Containers, and PersistentVolumeClaims in the namespace. - The `LimitRanger` admission controller enforces defaults and limits for all Pods and Containers that do not set compute resource requirements and tracks usage to ensure it does not exceed resource minimum, maximum and ratio defined in any LimitRange present in the namespace. - If creating or updating a resource (Pod, Container, PersistentVolumeClaim) that violates a LimitRange constraint, the request to the API server will fail with an HTTP status code `403 FORBIDDEN` and a message explaining the constraint that have been violated. 
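
As a quick illustration of the enforcement described above, here is a minimal sketch of a LimitRange that sets a maximum, a default limit, and a default request for Container memory (the name and values are illustrative, not taken from this page):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range      # illustrative name
spec:
  limits:
  - type: Container
    max:
      memory: 1Gi            # no Container may declare a memory limit above this
    default:
      memory: 512Mi          # limit applied to Containers that declare none
    defaultRequest:
      memory: 256Mi          # request applied to Containers that declare none
```

With this LimitRange in place, a Container created in the namespace without any memory settings would be admitted with a 256Mi request and a 512Mi limit, while a Container asking for a 2Gi limit would be rejected with the `403 FORBIDDEN` response described above.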
@@ -48,7 +48,7 @@ The name of a LimitRange object must be a valid Examples of policies that could be created using limit range are: -- In a 2 node cluster with a capacity of 8 GiB RAM and 16 cores, constrain Pods in a namespace to request 100m of CPU with a max limit of 500m for CPU and request 200Mi for Memory with a max limit of 600Mi for Memory. +- In a 2 node cluster with a capacity of 8 GiB RAM and 16 cores, constrain Pods in a namespace to request 100m of CPU with a max limit of 500m for CPU and request 200Mi for Memory with a max limit of 600Mi for Memory. - Define default CPU limit and request to 150m and memory default request to 300Mi for Containers started with no cpu and memory requests in their specs. In the case where the total limits of the namespace is less than the sum of the limits of the Pods/Containers, @@ -56,342 +56,20 @@ there may be contention for resources. In this case, the Containers or Pods will Neither contention nor changes to a LimitRange will affect already created resources. -## Limiting Container compute resources - -The following section discusses the creation of a LimitRange acting at Container Level. -A Pod with 04 Containers is first created. Each Container within the Pod has a specific `spec.resource` configuration. -Each Container within the Pod is handled differently by the `LimitRanger` admission controller. - -Create a namespace `limitrange-demo` using the following kubectl command: - -```shell -kubectl create namespace limitrange-demo -``` - -To avoid passing the target limitrange-demo in your kubectl commands, change your context with the following command: - -```shell -kubectl config set-context --current --namespace=limitrange-demo -``` - -Here is the configuration file for a LimitRange object: -{{< codenew file="admin/resource/limit-mem-cpu-container.yaml" >}} - -This object defines minimum and maximum CPU/Memory limits, default CPU/Memory requests, and default limits for CPU/Memory resources to be apply to containers. - -Create the `limit-mem-cpu-per-container` LimitRange with the following kubectl command: - -```shell -kubectl create -f https://k8s.io/examples/admin/resource/limit-mem-cpu-container.yaml -``` - -```shell -kubectl describe limitrange/limit-mem-cpu-per-container -``` - -```shell -Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ----- -------- --- --- --------------- ------------- ----------------------- -Container cpu 100m 800m 110m 700m - -Container memory 99Mi 1Gi 111Mi 900Mi - -``` - -Here is the configuration file for a Pod with 04 Containers to demonstrate LimitRange features: -{{< codenew file="admin/resource/limit-range-pod-1.yaml" >}} - -Create the `busybox1` Pod: - -```shell -kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-1.yaml -``` - -### Container spec with valid CPU/Memory requests and limits - -View the `busybox-cnt01` resource configuration: - -```shell -kubectl get po/busybox1 -o json | jq ".spec.containers[0].resources" -``` - -```json -{ - "limits": { - "cpu": "500m", - "memory": "200Mi" - }, - "requests": { - "cpu": "100m", - "memory": "100Mi" - } -} -``` - -- The `busybox-cnt01` Container inside `busybox` Pod defined `requests.cpu=100m` and `requests.memory=100Mi`. -- `100m <= 500m <= 800m` , The Container cpu limit (500m) falls inside the authorized CPU LimitRange. -- `99Mi <= 200Mi <= 1Gi` , The Container memory limit (200Mi) falls inside the authorized Memory LimitRange. 
-- No request/limits ratio validation for CPU/Memory, so the Container is valid and created. - - -### Container spec with a valid CPU/Memory requests but no limits - -View the `busybox-cnt02` resource configuration - -```shell -kubectl get po/busybox1 -o json | jq ".spec.containers[1].resources" -``` - -```json -{ - "limits": { - "cpu": "700m", - "memory": "900Mi" - }, - "requests": { - "cpu": "100m", - "memory": "100Mi" - } -} -``` -- The `busybox-cnt02` Container inside `busybox1` Pod defined `requests.cpu=100m` and `requests.memory=100Mi` but not limits for cpu and memory. -- The Container does not have a limits section. The default limits defined in the `limit-mem-cpu-per-container` LimitRange object are injected in to this Container: `limits.cpu=700mi` and `limits.memory=900Mi`. -- `100m <= 700m <= 800m` , The Container cpu limit (700m) falls inside the authorized CPU limit range. -- `99Mi <= 900Mi <= 1Gi` , The Container memory limit (900Mi) falls inside the authorized Memory limit range. -- No request/limits ratio set, so the Container is valid and created. - - -### Container spec with a valid CPU/Memory limits but no requests - -View the `busybox-cnt03` resource configuration: - -```shell -kubectl get po/busybox1 -o json | jq ".spec.containers[2].resources" -``` -```json -{ - "limits": { - "cpu": "500m", - "memory": "200Mi" - }, - "requests": { - "cpu": "500m", - "memory": "200Mi" - } -} -``` - -- The `busybox-cnt03` Container inside `busybox1` Pod defined `limits.cpu=500m` and `limits.memory=200Mi` but no `requests` for cpu and memory. -- The Container does not define a request section. The default request defined in the limit-mem-cpu-per-container LimitRange is not used to fill its limits section, but the limits defined by the Container are set as requests `limits.cpu=500m` and `limits.memory=200Mi`. -- `100m <= 500m <= 800m` , The Container cpu limit (500m) falls inside the authorized CPU limit range. -- `99Mi <= 200Mi <= 1Gi` , The Container memory limit (200Mi) falls inside the authorized Memory limit range. -- No request/limits ratio set, so the Container is valid and created. - -### Container spec with no CPU/Memory requests/limits - -View the `busybox-cnt04` resource configuration: - -```shell -kubectl get po/busybox1 -o json | jq ".spec.containers[3].resources" -``` - -```json -{ - "limits": { - "cpu": "700m", - "memory": "900Mi" - }, - "requests": { - "cpu": "110m", - "memory": "111Mi" - } -} -``` - -- The `busybox-cnt04` Container inside `busybox1` define neither `limits` nor `requests`. -- The Container do not define a limit section, the default limit defined in the limit-mem-cpu-per-container LimitRange is used to fill its request - `limits.cpu=700m and` `limits.memory=900Mi` . -- The Container do not define a request section, the defaultRequest defined in the `limit-mem-cpu-per-container` LimitRange is used to fill its request section requests.cpu=110m and requests.memory=111Mi -- `100m <= 700m <= 800m` , The Container cpu limit (700m) falls inside the authorized CPU limit range. -- `99Mi <= 900Mi <= 1Gi` , The Container memory limit (900Mi) falls inside the authorized Memory limit range . -- No request/limits ratio set, so the Container is valid and created. - -All Containers defined in the `busybox` Pod passed LimitRange validations, so this the Pod is valid and created in the namespace. - -## Limiting Pod compute resources - -The following section discusses how to constrain resources at the Pod level. 
- -{{< codenew file="admin/resource/limit-mem-cpu-pod.yaml" >}} - -Without having to delete the `busybox1` Pod, create the `limit-mem-cpu-pod` LimitRange in the `limitrange-demo` namespace: - -```shell -kubectl apply -f https://k8s.io/examples/admin/resource/limit-mem-cpu-pod.yaml -``` -The LimitRange is created and limits CPU to 2 Core and Memory to 2Gi per Pod: - -```shell -limitrange/limit-mem-cpu-per-pod created -``` - -Describe the `limit-mem-cpu-per-pod` limit object using the following kubectl command: - -```shell -kubectl describe limitrange/limit-mem-cpu-per-pod -``` - -```shell -Name: limit-mem-cpu-per-pod -Namespace: limitrange-demo -Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ----- -------- --- --- --------------- ------------- ----------------------- -Pod cpu - 2 - - - -Pod memory - 2Gi - - - -``` - -Now create the `busybox2` Pod: - -{{< codenew file="admin/resource/limit-range-pod-2.yaml" >}} - -```shell -kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-2.yaml -``` - -The `busybox2` Pod definition is identical to `busybox1`, but an error is reported since the Pod's resources are now limited: - -```shell -Error from server (Forbidden): error when creating "limit-range-pod-2.yaml": pods "busybox2" is forbidden: [maximum cpu usage per Pod is 2, but limit is 2400m., maximum memory usage per Pod is 2Gi, but limit is 2306867200.] -``` - -```shell -kubectl get po/busybox1 -o json | jq ".spec.containers[].resources.limits.memory" -"200Mi" -"900Mi" -"200Mi" -"900Mi" -``` - -`busybox2` Pod will not be admitted on the cluster since the total memory limit of its Container is greater than the limit defined in the LimitRange. -`busybox1` will not be evicted since it was created and admitted on the cluster before the LimitRange creation. - -## Limiting Storage resources - -You can enforce minimum and maximum size of [storage resources](/docs/concepts/storage/persistent-volumes/) that can be requested by each PersistentVolumeClaim in a namespace using a LimitRange: - -{{< codenew file="admin/resource/storagelimits.yaml" >}} - -Apply the YAML using `kubectl create`: - -```shell -kubectl create -f https://k8s.io/examples/admin/resource/storagelimits.yaml -``` - -```shell -limitrange/storagelimits created -``` - -Describe the created object: - -```shell -kubectl describe limits/storagelimits -``` - -The output should look like: - -```shell -Name: storagelimits -Namespace: limitrange-demo -Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ----- -------- --- --- --------------- ------------- ----------------------- -PersistentVolumeClaim storage 1Gi 2Gi - - - -``` - -{{< codenew file="admin/resource/pvc-limit-lower.yaml" >}} - -```shell -kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-lower.yaml -``` - -While creating a PVC with `requests.storage` lower than the Min value in the LimitRange, an Error thrown by the server: - -```shell -Error from server (Forbidden): error when creating "pvc-limit-lower.yaml": persistentvolumeclaims "pvc-limit-lower" is forbidden: minimum storage usage per PersistentVolumeClaim is 1Gi, but request is 500Mi. 
-``` - -Same behaviour is noted if the `requests.storage` is greater than the Max value in the LimitRange: - -{{< codenew file="admin/resource/pvc-limit-greater.yaml" >}} - -```shell -kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-greater.yaml -``` - -```shell -Error from server (Forbidden): error when creating "pvc-limit-greater.yaml": persistentvolumeclaims "pvc-limit-greater" is forbidden: maximum storage usage per PersistentVolumeClaim is 2Gi, but request is 5Gi. -``` - -## Limits/Requests Ratio - -If `LimitRangeItem.maxLimitRequestRatio` is specified in the `LimitRangeSpec`, the named resource must have a request and limit that are both non-zero where limit divided by request is less than or equal to the enumerated value. - -The following LimitRange enforces memory limit to be at most twice the amount of the memory request for any Pod in the namespace: - -{{< codenew file="admin/resource/limit-memory-ratio-pod.yaml" >}} - -```shell -kubectl apply -f https://k8s.io/examples/admin/resource/limit-memory-ratio-pod.yaml -``` - -Describe the LimitRange with the following kubectl command: - -```shell -kubectl describe limitrange/limit-memory-ratio-pod -``` - -```shell -Name: limit-memory-ratio-pod -Namespace: limitrange-demo -Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ----- -------- --- --- --------------- ------------- ----------------------- -Pod memory - - - - 2 -``` - -Create a pod with `requests.memory=100Mi` and `limits.memory=300Mi`: - -{{< codenew file="admin/resource/limit-range-pod-3.yaml" >}} - -```shell -kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-3.yaml -``` - -The pod creation failed as the ratio here (`3`) is greater than the enforced limit (`2`) in `limit-memory-ratio-pod` LimitRange: - -``` -Error from server (Forbidden): error when creating "limit-range-pod-3.yaml": pods "busybox3" is forbidden: memory max limit to request ratio per Pod is 2, but provided ratio is 3.000000. -``` - -## Clean up - -Delete the `limitrange-demo` namespace to free all resources: - -```shell -kubectl delete ns limitrange-demo -``` -Change your context to `default` namespace with the following command: +{{% /capture %}} -```shell -kubectl config set-context --current --namespace=default -``` +{{% capture whatsnext %}} -## Examples +Refer to the [LimitRanger design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information. -- See [a tutorial on how to limit compute resources per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/) . -- Check [how to limit storage consumption](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage). -- See a [detailed example on quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/). +For examples on using limits, see: -{{% /capture %}} - -{{% capture whatsnext %}} +- [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/). +- [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/). +- [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/). 
+- [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). +- [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage). +- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/). -See [LimitRanger design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information. {{% /capture %}} diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index d48c2db88ade8..8ae3111323878 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -197,7 +197,7 @@ The `Terminating`, `NotTerminating`, and `NotBestEffort` scopes restrict a quota ### Resource Quota Per PriorityClass -{{< feature-state for_k8s_version="1.12" state="beta" >}} +{{< feature-state for_k8s_version="v1.12" state="beta" >}} Pods can be created at a specific [priority](/docs/concepts/configuration/pod-priority-preemption/#pod-priority). You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector` diff --git a/content/en/docs/concepts/scheduling-eviction/_index.md b/content/en/docs/concepts/scheduling-eviction/_index.md new file mode 100644 index 0000000000000..a30a80a451a15 --- /dev/null +++ b/content/en/docs/concepts/scheduling-eviction/_index.md @@ -0,0 +1,5 @@ +--- +title: "Scheduling and Eviction" +weight: 90 +--- + diff --git a/content/en/docs/concepts/scheduling/kube-scheduler.md b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md similarity index 99% rename from content/en/docs/concepts/scheduling/kube-scheduler.md rename to content/en/docs/concepts/scheduling-eviction/kube-scheduler.md index 7e3074f1148c5..c258a2d46a30a 100644 --- a/content/en/docs/concepts/scheduling/kube-scheduler.md +++ b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md @@ -88,7 +88,7 @@ of the scheduler: {{% /capture %}} {{% capture whatsnext %}} -* Read about [scheduler performance tuning](/docs/concepts/scheduling/scheduler-perf-tuning/) +* Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) * Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) * Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler * Learn about [configuring multiple schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/) diff --git a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md new file mode 100644 index 0000000000000..e3d4b168617b1 --- /dev/null +++ b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md @@ -0,0 +1,167 @@ +--- +reviewers: +- bsalamat +title: Scheduler Performance Tuning +content_template: templates/concept +weight: 70 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.14" state="beta" >}} + +[kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) +is the Kubernetes default scheduler. It is responsible for placement of Pods +on Nodes in a cluster. 
+ +Nodes in a cluster that meet the scheduling requirements of a Pod are +called _feasible_ Nodes for the Pod. The scheduler finds feasible Nodes +for a Pod and then runs a set of functions to score the feasible Nodes, +picking a Node with the highest score among the feasible ones to run +the Pod. The scheduler then notifies the API server about this decision +in a process called _Binding_. + +This page explains performance tuning optimizations that are relevant for +large Kubernetes clusters. + +{{% /capture %}} + +{{% capture body %}} + +In large clusters, you can tune the scheduler's behaviour balancing +scheduling outcomes between latency (new Pods are placed quickly) and +accuracy (the scheduler rarely makes poor placement decisions). + +You configure this tuning setting via kube-scheduler setting +`percentageOfNodesToScore`. This KubeSchedulerConfiguration setting determines +a threshold for scheduling nodes in your cluster. + +### Setting the threshold + +The `percentageOfNodesToScore` option accepts whole numeric values between 0 +and 100. The value 0 is a special number which indicates that the kube-scheduler +should use its compiled-in default. +If you set `percentageOfNodesToScore` above 100, kube-scheduler acts as if you +had set a value of 100. + +To change the value, edit the kube-scheduler configuration file (this is likely +to be `/etc/kubernetes/config/kube-scheduler.yaml`), then restart the scheduler. + +After you have made this change, you can run +```bash +kubectl get componentstatuses +``` +to verify that the kube-scheduler component is healthy. The output is similar to: +``` +NAME STATUS MESSAGE ERROR +controller-manager Healthy ok +scheduler Healthy ok +... +``` + +## Node scoring threshold {#percentage-of-nodes-to-score} + +To improve scheduling performance, the kube-scheduler can stop looking for +feasible nodes once it has found enough of them. In large clusters, this saves +time compared to a naive approach that would consider every node. + +You specify a threshold for how many nodes are enough, as a whole number percentage +of all the nodes in your cluster. The kube-scheduler converts this into an +integer number of nodes. During scheduling, if the kube-scheduler has identified +enough feasible nodes to exceed the configured percentage, the kube-scheduler +stops searching for more feasible nodes and moves on to the +[scoring phase](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation). + +[How the scheduler iterates over Nodes](#how-the-scheduler-iterates-over-nodes) +describes the process in detail. + +### Default threshold + +If you don't specify a threshold, Kubernetes calculates a figure using a +linear formula that yields 50% for a 100-node cluster and yields 10% +for a 5000-node cluster. The lower bound for the automatic value is 5%. + +This means that, the kube-scheduler always scores at least 5% of your cluster no +matter how large the cluster is, unless you have explicitly set +`percentageOfNodesToScore` to be smaller than 5. + +If you want the scheduler to score all nodes in your cluster, set +`percentageOfNodesToScore` to 100. + +## Example + +Below is an example configuration that sets `percentageOfNodesToScore` to 50%. + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1alpha1 +kind: KubeSchedulerConfiguration +algorithmSource: + provider: DefaultProvider + +... 
+ +percentageOfNodesToScore: 50 +``` + + +## Tuning percentageOfNodesToScore + +`percentageOfNodesToScore` must be a value between 1 and 100 with the default +value being calculated based on the cluster size. There is also a hardcoded +minimum value of 50 nodes. + +{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still +checks all the nodes, simply because there are not enough feasible nodes to stop +the scheduler's search early. + +In a small cluster, if you set a low value for `percentageOfNodesToScore`, your +change will have no or little effect, for a similar reason. + +If your cluster has several hundred Nodes or fewer, leave this configuration option +at its default value. Making changes is unlikely to improve the +scheduler's performance significantly. +{{< /note >}} + +An important detail to consider when setting this value is that when a smaller +number of nodes in a cluster are checked for feasibility, some nodes are not +sent to be scored for a given Pod. As a result, a Node which could possibly +score a higher value for running the given Pod might not even be passed to the +scoring phase. This would result in a less than ideal placement of the Pod. + +You should avoid setting `percentageOfNodesToScore` very low so that kube-scheduler +does not make frequent, poor Pod placement decisions. Avoid setting the +percentage to anything below 10%, unless the scheduler's throughput is critical +for your application and the score of nodes is not important. In other words, you +prefer to run the Pod on any Node as long as it is feasible. + +## How the scheduler iterates over Nodes + +This section is intended for those who want to understand the internal details +of this feature. + +In order to give all the Nodes in a cluster a fair chance of being considered +for running Pods, the scheduler iterates over the nodes in a round robin +fashion. You can imagine that Nodes are in an array. The scheduler starts from +the start of the array and checks feasibility of the nodes until it finds enough +Nodes as specified by `percentageOfNodesToScore`. For the next Pod, the +scheduler continues from the point in the Node array that it stopped at when +checking feasibility of Nodes for the previous Pod. + +If Nodes are in multiple zones, the scheduler iterates over Nodes in various +zones to ensure that Nodes from different zones are considered in the +feasibility checks. As an example, consider six nodes in two zones: + +``` +Zone 1: Node 1, Node 2, Node 3, Node 4 +Zone 2: Node 5, Node 6 +``` + +The Scheduler evaluates feasibility of the nodes in this order: + +``` +Node 1, Node 5, Node 2, Node 6, Node 3, Node 4 +``` + +After going over all the Nodes, it goes back to Node 1. + +{{% /capture %}} diff --git a/content/en/docs/concepts/scheduling/scheduling-framework.md b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md similarity index 99% rename from content/en/docs/concepts/scheduling/scheduling-framework.md rename to content/en/docs/concepts/scheduling-eviction/scheduling-framework.md index ddc2225cac28a..0d4c6333917f1 100644 --- a/content/en/docs/concepts/scheduling/scheduling-framework.md +++ b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md @@ -8,7 +8,7 @@ weight: 60 {{% capture overview %}} -{{< feature-state for_k8s_version="1.15" state="alpha" >}} +{{< feature-state for_k8s_version="v1.15" state="alpha" >}} The scheduling framework is a pluggable architecture for Kubernetes Scheduler that makes scheduler customizations easy. 
It adds a new set of "plugin" APIs to diff --git a/content/en/docs/concepts/scheduling/_index.md b/content/en/docs/concepts/scheduling/_index.md deleted file mode 100644 index b21e8d0c3331d..0000000000000 --- a/content/en/docs/concepts/scheduling/_index.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "Scheduling" -weight: 90 ---- - diff --git a/content/en/docs/concepts/scheduling/scheduler-perf-tuning.md b/content/en/docs/concepts/scheduling/scheduler-perf-tuning.md deleted file mode 100644 index 24ebe059e67fd..0000000000000 --- a/content/en/docs/concepts/scheduling/scheduler-perf-tuning.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -reviewers: -- bsalamat -title: Scheduler Performance Tuning -content_template: templates/concept -weight: 70 ---- - -{{% capture overview %}} - -{{< feature-state for_k8s_version="1.14" state="beta" >}} - -[kube-scheduler](/docs/concepts/scheduling/kube-scheduler/#kube-scheduler) -is the Kubernetes default scheduler. It is responsible for placement of Pods -on Nodes in a cluster. - -Nodes in a cluster that meet the scheduling requirements of a Pod are -called _feasible_ Nodes for the Pod. The scheduler finds feasible Nodes -for a Pod and then runs a set of functions to score the feasible Nodes, -picking a Node with the highest score among the feasible ones to run -the Pod. The scheduler then notifies the API server about this decision -in a process called _Binding_. - -This page explains performance tuning optimizations that are relevant for -large Kubernetes clusters. - -{{% /capture %}} - -{{% capture body %}} - -## Percentage of Nodes to Score - -Before Kubernetes 1.12, Kube-scheduler used to check the feasibility of all -nodes in a cluster and then scored the feasible ones. Kubernetes 1.12 added a -new feature that allows the scheduler to stop looking for more feasible nodes -once it finds a certain number of them. This improves the scheduler's -performance in large clusters. The number is specified as a percentage of the -cluster size. The percentage can be controlled by a configuration option called -`percentageOfNodesToScore`. The range should be between 1 and 100. Larger values -are considered as 100%. Zero is equivalent to not providing the config option. -Kubernetes 1.14 has logic to find the percentage of nodes to score based on the -size of the cluster if it is not specified in the configuration. It uses a -linear formula which yields 50% for a 100-node cluster. The formula yields 10% -for a 5000-node cluster. The lower bound for the automatic value is 5%. In other -words, the scheduler always scores at least 5% of the cluster no matter how -large the cluster is, unless the user provides the config option with a value -smaller than 5. - -Below is an example configuration that sets `percentageOfNodesToScore` to 50%. - -```yaml -apiVersion: kubescheduler.config.k8s.io/v1alpha1 -kind: KubeSchedulerConfiguration -algorithmSource: - provider: DefaultProvider - -... - -percentageOfNodesToScore: 50 -``` - -{{< note >}} In clusters with less than 50 feasible nodes, the scheduler still -checks all the nodes, simply because there are not enough feasible nodes to stop -the scheduler's search early. {{< /note >}} - -**To disable this feature**, you can set `percentageOfNodesToScore` to 100. - -### Tuning percentageOfNodesToScore - -`percentageOfNodesToScore` must be a value between 1 and 100 with the default -value being calculated based on the cluster size. There is also a hardcoded -minimum value of 50 nodes. 
This means that changing -this option to lower values in clusters with several hundred nodes will not have -much impact on the number of feasible nodes that the scheduler tries to find. -This is intentional as this option is unlikely to improve performance noticeably -in smaller clusters. In large clusters with over a 1000 nodes setting this value -to lower numbers may show a noticeable performance improvement. - -An important note to consider when setting this value is that when a smaller -number of nodes in a cluster are checked for feasibility, some nodes are not -sent to be scored for a given Pod. As a result, a Node which could possibly -score a higher value for running the given Pod might not even be passed to the -scoring phase. This would result in a less than ideal placement of the Pod. For -this reason, the value should not be set to very low percentages. A general rule -of thumb is to never set the value to anything lower than 10. Lower values -should be used only when the scheduler's throughput is critical for your -application and the score of nodes is not important. In other words, you prefer -to run the Pod on any Node as long as it is feasible. - -If your cluster has several hundred Nodes or fewer, we do not recommend lowering -the default value of this configuration option. It is unlikely to improve the -scheduler's performance significantly. - -### How the scheduler iterates over Nodes - -This section is intended for those who want to understand the internal details -of this feature. - -In order to give all the Nodes in a cluster a fair chance of being considered -for running Pods, the scheduler iterates over the nodes in a round robin -fashion. You can imagine that Nodes are in an array. The scheduler starts from -the start of the array and checks feasibility of the nodes until it finds enough -Nodes as specified by `percentageOfNodesToScore`. For the next Pod, the -scheduler continues from the point in the Node array that it stopped at when -checking feasibility of Nodes for the previous Pod. - -If Nodes are in multiple zones, the scheduler iterates over Nodes in various -zones to ensure that Nodes from different zones are considered in the -feasibility checks. As an example, consider six nodes in two zones: - -``` -Zone 1: Node 1, Node 2, Node 3, Node 4 -Zone 2: Node 5, Node 6 -``` - -The Scheduler evaluates feasibility of the nodes in this order: - -``` -Node 1, Node 5, Node 2, Node 6, Node 3, Node 4 -``` - -After going over all the Nodes, it goes back to Node 1. - -{{% /capture %}} diff --git a/content/en/docs/concepts/security/overview.md b/content/en/docs/concepts/security/overview.md index 8c06308afc721..20ba25503979e 100644 --- a/content/en/docs/concepts/security/overview.md +++ b/content/en/docs/concepts/security/overview.md @@ -143,8 +143,8 @@ Area of Concern for Code | Recommendation | Access over TLS only | If your code needs to communicate via TCP, ideally it would be performing a TLS handshake with the client ahead of time. With the exception of a few cases, the default behavior should be to encrypt everything in transit. Going one step further, even "behind the firewall" in our VPC's it's still a good idea to encrypt network traffic between services. This can be done through a process known as mutual or [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) which performs a two sided verification of communication between two certificate holding services. 
There are numerous tools that can be used to accomplish this in Kubernetes such as [Linkerd](https://linkerd.io/) and [Istio](https://istio.io/). | Limiting port ranges of communication | This recommendation may be a bit self-explanatory, but wherever possible you should only expose the ports on your service that are absolutely essential for communication or metric gathering. | 3rd Party Dependency Security | Since our applications tend to have dependencies outside of our own codebases, it is a good practice to regularly scan the code's dependencies to ensure that they are still secure with no vulnerabilities currently filed against them. Each language has a tool for performing this check automatically. | -Static Code Analysis | Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors. Some of the tools can be found here: https://www.owasp.org/index.php/Source_Code_Analysis_Tools | -Dynamic probing attacks | There are a few automated tools that are able to be run against your service to try some of the well known attacks that commonly befall services. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the OWASP Zed Attack proxy https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project | +Static Code Analysis | Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors. Some of the tools can be found here: https://owasp.org/www-community/Source_Code_Analysis_Tools | +Dynamic probing attacks | There are a few automated tools that are able to be run against your service to try some of the well known attacks that commonly befall services. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the OWASP Zed Attack proxy https://owasp.org/www-project-zap/ | ## Robust automation diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 8e790151f28da..ba77587457272 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -164,7 +164,7 @@ following pod-specific DNS policies. These policies are specified in the domain suffix, such as "`www.kubernetes.io`", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured. - See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers/#impacts-on-pods) + See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers/#effects-on-pods) for details on how DNS queries are handled in those cases. - "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should explicitly set its DNS policy "`ClusterFirstWithHostNet`". diff --git a/content/en/docs/concepts/services-networking/endpoint-slices.md b/content/en/docs/concepts/services-networking/endpoint-slices.md index 4b347f47ec5aa..7c39f05086616 100644 --- a/content/en/docs/concepts/services-networking/endpoint-slices.md +++ b/content/en/docs/concepts/services-networking/endpoint-slices.md @@ -8,7 +8,7 @@ feature: Scalable tracking of network endpoints in a Kubernetes cluster. 
content_template: templates/concept -weight: 10 +weight: 15 --- @@ -24,6 +24,21 @@ Endpoints. {{% capture body %}} +## Motivation + +The Endpoints API has provided a simple and straightforward way of +tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters +and Services have gotten larger, limitations of that API became more visible. +Most notably, those included challenges with scaling to larger numbers of +network endpoints. + +Since all network endpoints for a Service were stored in a single Endpoints +resource, those resources could get quite large. That affected the performance +of Kubernetes components (notably the master control plane) and resulted in +significant amounts of network traffic and processing when Endpoints changed. +EndpointSlices help you mitigate those issues as well as provide an extensible +platform for additional features such as topological routing. + ## EndpointSlice resources {#endpointslice-resource} In Kubernetes, an EndpointSlice contains references to a set of network @@ -165,21 +180,6 @@ necessary soon anyway. Rolling updates of Deployments also provide a natural repacking of EndpointSlices with all pods and their corresponding endpoints getting replaced. -## Motivation - -The Endpoints API has provided a simple and straightforward way of -tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters -and Services have gotten larger, limitations of that API became more visible. -Most notably, those included challenges with scaling to larger numbers of -network endpoints. - -Since all network endpoints for a Service were stored in a single Endpoints -resource, those resources could get quite large. That affected the performance -of Kubernetes components (notably the master control plane) and resulted in -significant amounts of network traffic and processing when Endpoints changed. -EndpointSlices help you mitigate those issues as well as provide an extensible -platform for additional features such as topological routing. - {{% /capture %}} {{% capture whatsnext %}} diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index 346b9164656fe..39e57ffdb0311 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -124,7 +124,7 @@ Each path in an Ingress has a corresponding path type. There are three supported path types: * _`ImplementationSpecific`_ (default): With this path type, matching is up to - the IngressClass. Implementations can treat this as a separate `pathType or + the IngressClass. Implementations can treat this as a separate `pathType` or treat it identically to `Prefix` or `Exact` path types. * _`Exact`_: Matches the URL path exactly and with case sensitivity. diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index 3de1c87076629..795969757d65c 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -28,7 +28,7 @@ By default, pods are non-isolated; they accept traffic from any source. Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. 
(Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.) -Network policies do not conflict, they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result. +Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result. ## The NetworkPolicy resource {#networkpolicy-resource} diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index a62faf1e0f32f..92654fc6128ee 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -534,6 +534,25 @@ to just expose one or more nodes' IPs directly. Note that this Service is visible as `:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, would be filtered NodeIP(s).) +For example: +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + type: NodePort + selector: + app: MyApp + ports: + # By default and for convenience, the `targetPort` is set to the same value as the `port` field. + - port: 80 + targetPort: 80 + # Optional field + # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767) + nodePort: 30007 +``` + ### Type LoadBalancer {#loadbalancer} On cloud providers which support external load balancers, setting the `type` @@ -634,6 +653,16 @@ metadata: [...] ``` {{% /tab %}} +{{% tab name="IBM Cloud" %}} +```yaml +[...] +metadata: + name: my-service + annotations: + service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private" +[...] +``` +{{% /tab %}} {{% tab name="OpenStack" %}} ```yaml [...] @@ -1050,7 +1079,7 @@ fail with a message indicating an IP address could not be allocated. In the control plane, a background controller is responsible for creating that map (needed to support migrating from older versions of Kubernetes that used -in-memory locking). Kubernetes also uses controllers to checking for invalid +in-memory locking). Kubernetes also uses controllers to check for invalid assignments (eg due to administrator intervention) and for cleaning up allocated IP addresses that are no longer used by any Services. diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index 99c21105a934d..3d444584eee68 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -28,9 +28,9 @@ This document describes the current state of _persistent volumes_ in Kubernetes. Managing storage is a distinct problem from managing compute instances. The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: PersistentVolume and PersistentVolumeClaim. -A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). 
It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. +A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. -A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only). +A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only). While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource. diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 694ea7742fa75..2680185785f52 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -185,7 +185,7 @@ The following plugins support `WaitForFirstConsumer` with pre-created Persistent * All of the above * [Local](#local) -{{< feature-state state="stable" for_k8s_version="1.17" >}} +{{< feature-state state="stable" for_k8s_version="v1.17" >}} [CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver to see its supported topology keys and examples. @@ -281,6 +281,7 @@ metadata: provisioner: kubernetes.io/gce-pd parameters: type: pd-standard + fstype: ext4 replication-type: none ``` @@ -292,18 +293,21 @@ parameters: is specified, volumes are generally round-robin-ed across all active zones where Kubernetes cluster has a node. `zone` and `zones` parameters must not be used at the same time. +* `fstype`: `ext4` or `xfs`. Default: `ext4`. The defined filesystem type must be supported by the host operating system. + * `replication-type`: `none` or `regional-pd`. Default: `none`. If `replication-type` is set to `none`, a regular (zonal) PD will be provisioned. If `replication-type` is set to `regional-pd`, a [Regional Persistent Disk](https://cloud.google.com/compute/docs/disks/#repds) -will be provisioned. 
In this case, users must use `zones` instead of `zone` to -specify the desired replication zones. If exactly two zones are specified, the -Regional PD will be provisioned in those zones. If more than two zones are -specified, Kubernetes will arbitrarily choose among the specified zones. If the -`zones` parameter is omitted, Kubernetes will arbitrarily choose among zones -managed by the cluster. +will be provisioned. It's highly recommended to have +`volumeBindingMode: WaitForFirstConsumer` set, in which case when you create +a Pod that consumes a PersistentVolumeClaim which uses this StorageClass, a +Regional Persistent Disk is provisioned with two zones. One zone is the same +as the zone that the Pod is scheduled in. The other zone is randomly picked +from the zones available to the cluster. Disk zones can be further constrained +using `allowedTopologies`. {{< note >}} `zone` and `zones` parameters are deprecated and replaced with @@ -407,7 +411,7 @@ parameters: round-robin-ed across all active zones where Kubernetes cluster has a node. {{< note >}} -{{< feature-state state="deprecated" for_k8s_version="1.11" >}} +{{< feature-state state="deprecated" for_k8s_version="v1.11" >}} This internal provisioner of OpenStack is deprecated. Please use [the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack). {{< /note >}} @@ -810,10 +814,10 @@ volumeBindingMode: WaitForFirstConsumer ``` Local volumes do not currently support dynamic provisioning, however a StorageClass -should still be created to delay volume binding until pod scheduling. This is +should still be created to delay volume binding until Pod scheduling. This is specified by the `WaitForFirstConsumer` volume binding mode. -Delaying volume binding allows the scheduler to consider all of a pod's +Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim. diff --git a/content/en/docs/concepts/storage/volume-snapshots.md b/content/en/docs/concepts/storage/volume-snapshots.md index b68c83d8f95e4..0ad66e75ae7f3 100644 --- a/content/en/docs/concepts/storage/volume-snapshots.md +++ b/content/en/docs/concepts/storage/volume-snapshots.md @@ -13,7 +13,7 @@ weight: 20 {{% capture overview %}} -{{< feature-state for_k8s_version="1.17" state="beta" >}} +{{< feature-state for_k8s_version="v1.17" state="beta" >}} In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/). {{% /capture %}} @@ -29,7 +29,7 @@ A `VolumeSnapshotContent` is a snapshot taken from a volume in the cluster that A `VolumeSnapshot` is a request for snapshot of a volume by a user. It is similar to a PersistentVolumeClaim. -`VolumeSnapshotClass` allows you to specify different attributes belonging to a `VolumeSnapshot`. These attibutes may differ among snapshots taken from the same volume on the storage system and therefore cannot be expressed by using the same `StorageClass` of a `PersistentVolumeClaim`. +`VolumeSnapshotClass` allows you to specify different attributes belonging to a `VolumeSnapshot`. These attributes may differ among snapshots taken from the same volume on the storage system and therefore cannot be expressed by using the same `StorageClass` of a `PersistentVolumeClaim`. 
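+
+For example, you can request a snapshot of an existing PersistentVolumeClaim by
+referencing that claim from a VolumeSnapshot object. The manifest below is only an
+illustrative sketch: the snapshot name, the `csi-hostpath-snapclass` class and the
+`pvc-test` claim are placeholders, and it assumes the `snapshot.storage.k8s.io/v1beta1`
+API group that backs this beta feature.
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1beta1
+kind: VolumeSnapshot
+metadata:
+  name: new-snapshot-demo
+spec:
+  # The VolumeSnapshotClass supplies the snapshot attributes described above.
+  volumeSnapshotClassName: csi-hostpath-snapclass
+  source:
+    # The existing PersistentVolumeClaim to take the snapshot from.
+    persistentVolumeClaimName: pvc-test
+```
+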
Users need to be aware of the following when using this feature: @@ -61,7 +61,9 @@ In the case of pre-provisioned binding, the VolumeSnapshot will remain unbound u ### Persistent Volume Claim as Snapshot Source Protection -The purpose of this protection is to ensure that in-use PersistentVolumeClaim API objects are not removed from the system while a snapshot is being taken from it (as this may result in data loss). +The purpose of this protection is to ensure that in-use +{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} +API objects are not removed from the system while a snapshot is being taken from it (as this may result in data loss). While a snapshot is being taken of a PersistentVolumeClaim, that PersistentVolumeClaim is in-use. If you delete a PersistentVolumeClaim API object in active use as a snapshot source, the PersistentVolumeClaim object is not removed immediately. Instead, removal of the PersistentVolumeClaim object is postponed until the snapshot is readyToUse or aborted. diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index 6464b6ed040aa..233e0ca66147f 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -12,14 +12,19 @@ weight: 80 {{< feature-state for_k8s_version="v1.8" state="beta" >}} -A _Cron Job_ creates [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) on a time-based schedule. +A _CronJob_ creates {{< glossary_tooltip term_id="job" text="Jobs" >}} on a repeating schedule. One CronJob object is like one line of a _crontab_ (cron table) file. It runs a job periodically on a given schedule, written in [Cron](https://en.wikipedia.org/wiki/Cron) format. -{{< note >}} -All **CronJob** `schedule:` times are denoted in UTC. -{{< /note >}} +{{< caution >}} +All **CronJob** `schedule:` times are based on the timezone of the +{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}. + +If your control plane runs the kube-controller-manager in Pods or bare +containers, the timezone set for the kube-controller-manager container determines the timezone +that the cron job controller uses. +{{< /caution >}} When creating the manifest for a CronJob resource, make sure the name you provide is a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). @@ -27,14 +32,26 @@ The name must be no longer than 52 characters. This is because the CronJob contr append 11 characters to the job name provided and there is a constraint that the maximum length of a Job name is no more than 63 characters. -For instructions on creating and working with cron jobs, and for an example of a spec file for a cron job, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs). {{% /capture %}} +{{% capture body %}} +## CronJob -{{% capture body %}} +CronJobs are useful for creating periodic and recurring tasks, like running backups or +sending emails. CronJobs can also schedule individual tasks for a specific time, such as +scheduling a Job for when your cluster is likely to be idle. 
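+
+For instance, a nightly backup could be written as a CronJob along the lines of the
+sketch below. The name and the `example.com/backup-tool:1.0` image are hypothetical
+placeholders, and the `schedule` is interpreted in the kube-controller-manager's
+timezone, as cautioned above.
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: nightly-backup
+spec:
+  # Run once a day at 03:00 (kube-controller-manager timezone).
+  schedule: "0 3 * * *"
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+          - name: backup
+            image: example.com/backup-tool:1.0 # hypothetical image, for illustration only
+          restartPolicy: OnFailure
+```
+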
+ +### Example + +This example CronJob manifest prints the current time and a hello message every minute: -## Cron Job Limitations +{{< codenew file="application/job/cronjob.yaml" >}} + +([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/) +takes you through this example in more detail). + +## CronJob limitations {#cron-job-limitations} A cron job creates a job object _about_ once per execution time of its schedule. We say "about" because there are certain circumstances where two jobs might be created, or no job might be created. We attempt to make these rare, @@ -66,3 +83,11 @@ The CronJob is only responsible for creating Jobs that match its schedule, and the Job in turn is responsible for the management of the Pods it represents. {{% /capture %}} +{{% capture whatsnext %}} +[Cron expression format](https://pkg.go.dev/github.com/robfig/cron?tab=doc#hdr-CRON_Expression_Format) +documents the format of CronJob `schedule` fields. + +For instructions on creating and working with cron jobs, and for an example of CronJob +manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs). + +{{% /capture %}} diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index a66cac7bb333f..1fd6c5c9d6d6b 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -48,24 +48,24 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up In this example: * A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field. -* The Deployment creates three replicated Pods, indicated by the `replicas` field. -* The `selector` field defines how the Deployment finds which Pods to manage. +* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field. +* The `.spec.selector` field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (`app: nginx`). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule. {{< note >}} - The `matchLabels` field is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map + The `.spec.selector.matchLabels` field is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose key field is "key" the operator is "In", and the values array contains only "value". All of the requirements, from both `matchLabels` and `matchExpressions`, must be satisfied in order to match. {{< /note >}} * The `template` field contains the following sub-fields: - * The Pods are labeled `app: nginx`using the `labels` field. + * The Pods are labeled `app: nginx`using the `.metadata.labels` field. * The Pod template's specification, or `.template.spec` field, indicates that the Pods run one container, `nginx`, which runs the `nginx` [Docker Hub](https://hub.docker.com/) image at version 1.14.2. - * Create one container and name it `nginx` using the `name` field. + * Create one container and name it `nginx` using the `.spec.template.spec.containers[0].name` field. 
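+
+Putting these fields together, the Deployment described above corresponds to a manifest
+roughly like the following sketch (the complete example may also set further fields,
+such as container ports):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+  labels:
+    app: nginx
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+```
+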
Follow the steps given below to create the above Deployment: @@ -89,9 +89,8 @@ In this example: ``` When you inspect the Deployments in your cluster, the following fields are displayed: - * `NAME` lists the names of the Deployments in the cluster. - * `DESIRED` displays the desired number of _replicas_ of the application, which you define when you create the Deployment. This is the _desired state_. - * `CURRENT` displays how many replicas are currently running. + * `NAME` lists the names of the Deployments in the namespace. + * `READY` displays how many replicas of the application are available to your users. It follows the pattern ready/desired. * `UP-TO-DATE` displays the number of replicas that have been updated to achieve the desired state. * `AVAILABLE` displays how many replicas of the application are available to your users. * `AGE` displays the amount of time that the application has been running. @@ -116,8 +115,16 @@ In this example: NAME DESIRED CURRENT READY AGE nginx-deployment-75675f5897 3 3 3 18s ``` + ReplicaSet output shows the following fields: + + * `NAME` lists the names of the ReplicaSets in the namespace. + * `DESIRED` displays the desired number of _replicas_ of the application, which you define when you create the Deployment. This is the _desired state_. + * `CURRENT` displays how many replicas are currently running. + * `READY` displays how many replicas of the application are available to your users. + * `AGE` displays the amount of time that the application has been running. + Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string is - randomly generated and uses the pod-template-hash as a seed. + randomly generated and uses the `pod-template-hash` as a seed. 6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`. The following output is returned: ```shell @@ -510,7 +517,7 @@ Follow the steps given below to rollback the Deployment from the current version The output is similar to this: ``` - deployment.apps/nginx-deployment + deployment.apps/nginx-deployment rolled back ``` Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`: @@ -520,7 +527,7 @@ Follow the steps given below to rollback the Deployment from the current version The output is similar to this: ``` - deployment.apps/nginx-deployment + deployment.apps/nginx-deployment rolled back ``` For more details about rollout related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout). @@ -1017,7 +1024,7 @@ can create multiple Deployments, one for each release, following the canary patt ## Writing a Deployment Spec -As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields. +As with all other Kubernetes configs, a Deployment needs `.apiVersion`, `.kind`, and `.metadata` fields. For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/), configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents. 
The name of a Deployment object must be a valid @@ -1110,7 +1117,7 @@ total number of Pods running at any time during the update is at most 130% of de to wait for your Deployment to progress before the system reports back that the Deployment has [failed progressing](#failed-deployment) - surfaced as a condition with `Type=Progressing`, `Status=False`. and `Reason=ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep -retrying the Deployment. In the future, once automatic rollback will be implemented, the Deployment +retrying the Deployment. This defaults to 600. In the future, once automatic rollback will be implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition. If specified, this field needs to be greater than `.spec.minReadySeconds`. diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 5e8c6d67f399a..fe7a96c1387d5 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -26,7 +26,7 @@ it should create to meet the number of replicas criteria. A ReplicaSet then fulf and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template. -The link a ReplicaSet has to its Pods is via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents) +A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents) field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly. diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index 14e7054a86908..2ecbdd702a954 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -71,8 +71,8 @@ have some advantages for start-up related code: a mechanism to block or delay app container startup until a set of preconditions are met. Once preconditions are met, all of the app containers in a Pod can start in parallel. * Init containers can securely run utilities or custom code that would otherwise make an app - container image less secure. By keeping unnecessary tools separate you can limit the attack - surface of your app container image. + container image less secure. By keeping unnecessary tools separate you can limit the attack + surface of your app container image. ### Examples @@ -245,8 +245,11 @@ init containers. [What's next](#what-s-next) contains a link to a more detailed ## Detailed behavior -During the startup of a Pod, each init container starts in order, after the -network and volumes are initialized. Each container must exit successfully before +During Pod startup, the kubelet delays running init containers until the networking +and storage are ready. Then the kubelet runs the Pod's init containers in the order +they appear in the Pod's spec. + +Each init container must exit successfully before the next container starts. 
If a container fails to start due to the runtime or exits with failure, it is retried according to the Pod `restartPolicy`. However, if the Pod `restartPolicy` is set to Always, the init containers use diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index b54a8b6ca8bcc..74031a372292d 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -181,7 +181,7 @@ Once Pod is assigned to a node by scheduler, kubelet starts creating containers ... State: Waiting Reason: ErrImagePull - ... + ... ``` * `Running`: Indicates that the container is executing without issues. The `postStart` hook (if any) is executed prior to the container entering a Running state. This state also displays the time when the container entered Running state. @@ -205,31 +205,34 @@ Once Pod is assigned to a node by scheduler, kubelet starts creating containers ... ``` -## Pod readiness gate +## Pod readiness {#pod-readiness-gate} {{< feature-state for_k8s_version="v1.14" state="stable" >}} -In order to add extensibility to Pod readiness by enabling the injection of -extra feedback or signals into `PodStatus`, Kubernetes 1.11 introduced a -feature named [Pod ready++](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md). -You can use the new field `ReadinessGate` in the `PodSpec` to specify additional -conditions to be evaluated for Pod readiness. If Kubernetes cannot find such a +Your application can inject extra feedback or signals into PodStatus: +_Pod readiness_. To use this, set `readinessGates` in the PodSpec to specify +a list of additional conditions that the kubelet evaluates for Pod readiness. + +Readiness gates are determined by the current state of `status.condition` +fields for the Pod. If Kubernetes cannot find such a condition in the `status.conditions` field of a Pod, the status of the condition -is default to "`False`". Below is an example: +is defaulted to "`False`". Below is an example: + +Here is an example: ```yaml -Kind: Pod +kind: Pod ... spec: readinessGates: - conditionType: "www.example.com/feature-1" status: conditions: - - type: Ready # this is a builtin PodCondition + - type: Ready # a built in PodCondition status: "False" lastProbeTime: null lastTransitionTime: 2018-01-01T00:00:00Z - - type: "www.example.com/feature-1" # an extra PodCondition + - type: "www.example.com/feature-1" # an extra PodCondition status: "False" lastProbeTime: null lastTransitionTime: 2018-01-01T00:00:00Z @@ -239,19 +242,26 @@ status: ... ``` -The new Pod conditions must comply with Kubernetes [label key format](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). -Since the `kubectl patch` command still doesn't support patching object status, -the new Pod conditions have to be injected through the `PATCH` action using -one of the [KubeClient libraries](/docs/reference/using-api/client-libraries/). +The Pod conditions you add must have names that meet the Kubernetes [label key format](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). + + +### Status for Pod readiness {#pod-readiness-status} -With the introduction of new Pod conditions, a Pod is evaluated to be ready **only** -when both the following statements are true: +The `kubectl patch` command does not support patching object status. 
+To set these `status.conditions` for the pod, applications and +{{< glossary_tooltip term_id="operator-pattern" text="operators">}} should use +the `PATCH` action. +You can use a [Kubernetes client library](/docs/reference/using-api/client-libraries/) to +write code that sets custom Pod conditions for Pod readiness. + +For a Pod that uses custom conditions, that Pod is evaluated to be ready **only** +when both the following statements apply: * All containers in the Pod are ready. -* All conditions specified in `ReadinessGates` are "`True`". +* All conditions specified in `ReadinessGates` are `True`. -To facilitate this change to Pod readiness evaluation, a new Pod condition -`ContainersReady` is introduced to capture the old Pod `Ready` condition. +When a Pod's containers are Ready but at least one custom condition is missing or +`False`, the kubelet sets the Pod's condition to `ContainersReady`. ## Restart policy @@ -268,32 +278,31 @@ once bound to a node, a Pod will never be rebound to another node. ## Pod lifetime -In general, Pods remain until a human or controller process explicitly removes them. -The control plane cleans up terminated Pods (with a phase of `Succeeded` or +In general, Pods remain until a human or +{{< glossary_tooltip term_id="controller" text="controller" >}} process +explicitly removes them. +The control plane cleans up terminated Pods (with a phase of `Succeeded` or `Failed`), when the number of Pods exceeds the configured threshold (determined by `terminated-pod-gc-threshold` in the kube-controller-manager). This avoids a resource leak as Pods are created and terminated over time. -Three types of controllers are available: +There are different kinds of resources for creating Pods: + +- Use a {{< glossary_tooltip term_id="deployment" >}}, + {{< glossary_tooltip term_id="replica-set" >}} or {{< glossary_tooltip term_id="statefulset" >}} + for Pods that are not expected to terminate, for example, web servers. -- Use a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) for Pods that are expected to terminate, +- Use a {{< glossary_tooltip term_id="job" >}} + for Pods that are expected to terminate once their work is complete; for example, batch computations. Jobs are appropriate only for Pods with `restartPolicy` equal to OnFailure or Never. -- Use a [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/), - [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/), or - [Deployment](/docs/concepts/workloads/controllers/deployment/) - for Pods that are not expected to terminate, for example, web servers. - ReplicationControllers are appropriate only for Pods with a `restartPolicy` of - Always. - -- Use a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) for Pods that need to run one per - machine, because they provide a machine-specific system service. +- Use a {{< glossary_tooltip term_id="daemonset" >}} + for Pods that need to run one per eligible node. -All three types of controllers contain a PodTemplate. It -is recommended to create the appropriate controller and let -it create Pods, rather than directly create Pods yourself. That is because Pods -alone are not resilient to machine failures, but controllers are. +All workload resources contain a PodSpec. It is recommended to create the +appropriate workload resource and let the resource's controller create Pods +for you, rather than directly create Pods yourself. 
If a node dies or is disconnected from the rest of the cluster, Kubernetes applies a policy for setting the `phase` of all Pods on the lost node to Failed. diff --git a/content/en/docs/concepts/workloads/pods/pod-overview.md b/content/en/docs/concepts/workloads/pods/pod-overview.md index 53ccd17d203d1..2bc29512596b4 100644 --- a/content/en/docs/concepts/workloads/pods/pod-overview.md +++ b/content/en/docs/concepts/workloads/pods/pod-overview.md @@ -17,9 +17,9 @@ This page provides an overview of `Pod`, the smallest deployable object in the K {{% capture body %}} ## Understanding Pods -A *Pod* is the basic execution unit of a Kubernetes application--the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your {{< glossary_tooltip term_id="cluster" >}}. +A *Pod* is the basic execution unit of a Kubernetes application--the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your {{< glossary_tooltip term_id="cluster" text="cluster" >}}. -A Pod encapsulates an application's container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: *a single instance of an application in Kubernetes*, which might consist of either a single {{< glossary_tooltip text="container" term_id="container" >}} or a small number of containers that are tightly coupled and that share resources. +A Pod encapsulates an application's container (or, in some cases, multiple containers), storage resources, a unique network identity (IP address), as well as options that govern how the container(s) should run. A Pod represents a unit of deployment: *a single instance of an application in Kubernetes*, which might consist of either a single {{< glossary_tooltip text="container" term_id="container" >}} or a small number of containers that are tightly coupled and that share resources. [Docker](https://www.docker.com) is the most common container runtime used in a Kubernetes Pod, but Pods support other [container runtimes](/docs/setup/production-environment/container-runtimes/) as well. @@ -28,14 +28,12 @@ Pods in a Kubernetes cluster can be used in two main ways: * **Pods that run a single container**. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly. * **Pods that run multiple containers that need to work together**. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service--one container serving files from a shared volume to the public, while a separate "sidecar" container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity. -The [Kubernetes Blog](https://kubernetes.io/blog) has some additional information on Pod use cases. For more information, see: - * [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) - * [Container Design Patterns](https://kubernetes.io/blog/2016/06/container-design-patterns) +Each Pod is meant to run a single instance of a given application. 
If you want to scale your application horizontally (to provide more overall resources by running more instances), you should use multiple Pods, one for each instance. In Kubernetes, this is typically referred to as _replication_. +Replicated Pods are usually created and managed as a group by a workload resource and its {{< glossary_tooltip text="_controller_" term_id="controller" >}}. +See [Pods and controllers](#pods-and-controllers) for more information on how Kubernetes uses controllers to implement workload scaling and healing. -Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (e.g., run multiple instances), you should use multiple Pods, one for each instance. In Kubernetes, this is generally referred to as _replication_. Replicated Pods are usually created and managed as a group by an abstraction called a Controller. See [Pods and Controllers](#pods-and-controllers) for more information. - -### How Pods manage multiple Containers +### How Pods manage multiple containers Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated. @@ -49,62 +47,75 @@ Pods provide two kinds of shared resources for their constituent containers: *ne #### Networking -Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports. Containers *inside a Pod* can communicate with one another using `localhost`. When containers in a Pod communicate with entities *outside the Pod*, they must coordinate how they use the shared network resources (such as ports). +Each Pod is assigned a unique IP address for each address family. Every container in a Pod shares the network namespace, including the IP address and network ports. Containers *inside a Pod* can communicate with one another using `localhost`. When containers in a Pod communicate with entities *outside the Pod*, they must coordinate how they use the shared network resources (such as ports). #### Storage -A Pod can specify a set of shared storage {{< glossary_tooltip text="Volumes" term_id="volume" >}}. All containers in the Pod can access the shared volumes, allowing those containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be restarted. See [Volumes](/docs/concepts/storage/volumes/) for more information on how Kubernetes implements shared storage in a Pod. +A Pod can specify a set of shared storage {{< glossary_tooltip text="volumes" term_id="volume" >}}. All containers in the Pod can access the shared volumes, allowing those containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be restarted. See [Volumes](/docs/concepts/storage/volumes/) for more information on how Kubernetes implements shared storage in a Pod. ## Working with Pods -You'll rarely create individual Pods directly in Kubernetes--even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a Controller), it is scheduled to run on a {{< glossary_tooltip term_id="node" >}} in your cluster. 
The Pod remains on that Node until the process is terminated, the pod object is deleted, the Pod is *evicted* for lack of resources, or the Node fails. +You'll rarely create individual Pods directly in Kubernetes--even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a {{< glossary_tooltip text="_controller_" term_id="controller" >}}), it is scheduled to run on a {{< glossary_tooltip term_id="node" >}} in your cluster. The Pod remains on that node until the process is terminated, the pod object is deleted, the Pod is *evicted* for lack of resources, or the node fails. {{< note >}} -Restarting a container in a Pod should not be confused with restarting the Pod. The Pod itself does not run, but is an environment the containers run in and persists until it is deleted. +Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an environment for running a container. A Pod persists until it is deleted. {{< /note >}} -Pods do not, by themselves, self-heal. If a Pod is scheduled to a Node that fails, or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a *Controller*, that handles the work of managing the relatively disposable Pod instances. Thus, while it is possible to use Pod directly, it's far more common in Kubernetes to manage your pods using a Controller. See [Pods and Controllers](#pods-and-controllers) for more information on how Kubernetes uses Controllers to implement Pod scaling and healing. +Pods do not, by themselves, self-heal. If a Pod is scheduled to a Node that fails, or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a controller, that handles the work of managing the relatively disposable Pod instances. Thus, while it is possible to use Pod directly, it's far more common in Kubernetes to manage your pods using a controller. + +### Pods and controllers + +You can use workload resources to create and manage multiple Pods for you. A controller for the resource handles replication and rollout and automatic healing in case of Pod failure. For example, if a Node fails, a controller notices that Pods on that Node have stopped working and creates a replacement Pod. The scheduler places the replacement Pod onto a healthy Node. -### Pods and Controllers +Here are some examples of workload resources that manage one or more Pods: -A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a Node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different Node. 
+* {{< glossary_tooltip text="Deployment" term_id="deployment" >}} +* {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} +* {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}} -Some examples of Controllers that contain one or more pods include: -* [Deployment](/docs/concepts/workloads/controllers/deployment/) -* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) -* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) +## Pod templates -In general, Controllers use a Pod Template that you provide to create the Pods for which it is responsible. +Controllers for {{< glossary_tooltip text="workload" term_id="workload" >}} resources create Pods +from a pod template and manage those Pods on your behalf. -## Pod Templates +PodTemplates are specifications for creating Pods, and are included in workload resources such as +[Deployments](/docs/concepts/workloads/controllers/deployment/), +[Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and +[DaemonSets](/docs/concepts/workloads/controllers/daemonset/). -Pod templates are pod specifications which are included in other objects, such as -[Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/), [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), and -[DaemonSets](/docs/concepts/workloads/controllers/daemonset/). Controllers use Pod Templates to make actual pods. -The sample below is a simple manifest for a Pod which contains a container that prints -a message. +Each controller for a workload resource uses the PodTemplate inside the workload object to make actual Pods. The PodTemplate is part of the desired state of whatever workload resource you used to run your app. + +The sample below is a manifest for a simple Job with a `template` that starts one container. The container in that Pod prints a message then pauses. ```yaml -apiVersion: v1 -kind: Pod +apiVersion: batch/v1 +kind: Job metadata: - name: myapp-pod - labels: - app: myapp + name: hello spec: - containers: - - name: myapp-container - image: busybox - command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600'] + template: + # This is the pod template + spec: + containers: + - name: hello + image: busybox + command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600'] + restartPolicy: OnFailure + # The pod template ends here ``` -Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no "quantum entanglement". Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive. +Modifying the pod template or switching to a new pod template has no effect on the Pods that already exist. Pods do not receive template updates directly; instead, a new Pod is created to match the revised pod template. + +For example, a Deployment controller ensures that the running Pods match the current pod template. If the template is updated, the controller has to remove the existing Pods and create new Pods based on the updated template. 
Each workload controller implements its own rules for handling changes to the Pod template. + +On Nodes, the {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} does not directly observe or manage any of the details around pod templates and updates; those details are abstracted away. That abstraction and separation of concerns simplifies system semantics, and makes it feasible to extend the cluster's behavior without changing existing code. {{% /capture %}} {{% capture whatsnext %}} * Learn more about [Pods](/docs/concepts/workloads/pods/pod/) +* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explains common layouts for Pods with more than one container * Learn more about Pod behavior: * [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods) * [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/) diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 35a373473b514..1c68ba4ebd963 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -18,7 +18,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te ### Enable Feature Gate -The `EvenPodsSpread` [feature gate] (/docs/reference/command-line-tools-reference/feature-gates/) +The `EvenPodsSpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) must be enabled for the {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} **and** {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}. @@ -62,10 +62,10 @@ metadata: name: mypod spec: topologySpreadConstraints: - - maxSkew: - topologyKey: - whenUnsatisfiable: - labelSelector: + - maxSkew: + topologyKey: + whenUnsatisfiable: + labelSelector: ``` You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are: @@ -73,8 +73,8 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s - **maxSkew** describes the degree to which Pods may be unevenly distributed. It's the maximum permitted difference between the number of matching Pods in any two topology domains of a given topology type. It must be greater than zero. - **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain. - **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint: - - `DoNotSchedule` (default) tells the scheduler not to schedule it. - - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. + - `DoNotSchedule` (default) tells the scheduler not to schedule it. + - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. - **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details. 
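+
+For example, a Pod that should be spread across zones so that the number of matching
+Pods differs by at most one between any two zones could declare a constraint like the
+sketch below. It assumes Nodes are labelled with a `zone` key, as in the examples on
+this page, and the `k8s.gcr.io/pause:3.1` container is just a placeholder workload.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mypod
+  labels:
+    foo: bar
+spec:
+  topologySpreadConstraints:
+  - maxSkew: 1                      # at most 1 Pod of difference between any two zones
+    topologyKey: zone               # spread across the values of the "zone" node label
+    whenUnsatisfiable: DoNotSchedule
+    labelSelector:
+      matchLabels:
+        foo: bar                    # count Pods carrying this label
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.1
+```
+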
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`. @@ -160,8 +160,9 @@ There are some implicit conventions worth noting here: - Only the Pods holding the same namespace as the incoming Pod can be matching candidates. - Nodes without `topologySpreadConstraints[*].topologyKey` present will be bypassed. It implies that: - 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA". - 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". + + 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA". + 2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". - Be aware of what will happen if the incomingPod’s `topologySpreadConstraints[*].labelSelector` doesn’t match its own labels. In the above example, if we remove the incoming Pod’s labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it’s still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload’s `topologySpreadConstraints[*].labelSelector` to match its own labels. @@ -182,7 +183,7 @@ There are some implicit conventions worth noting here: and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} - + ### Cluster-level default constraints {{< feature-state for_k8s_version="v1.18" state="alpha" >}} @@ -207,16 +208,16 @@ kind: KubeSchedulerConfiguration profiles: pluginConfig: - - name: PodTopologySpread - args: - defaultConstraints: - - maxSkew: 1 - topologyKey: failure-domain.beta.kubernetes.io/zone - whenUnsatisfiable: ScheduleAnyway + - name: PodTopologySpread + args: + defaultConstraints: + - maxSkew: 1 + topologyKey: failure-domain.beta.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway ``` {{< note >}} -The score produced by default scheduling constraints might conflict with the +The score produced by default scheduling constraints might conflict with the score produced by the [`DefaultPodTopologySpread` plugin](/docs/reference/scheduling/profiles/#scheduling-plugins). It is recommended that you disable this plugin in the scheduling profile when @@ -229,14 +230,14 @@ In Kubernetes, directives related to "Affinity" control how Pods are scheduled - more packed or more scattered. - For `PodAffinity`, you can try to pack any number of Pods into qualifying -topology domain(s) + topology domain(s) - For `PodAntiAffinity`, only one Pod can be scheduled into a -single topology domain. + single topology domain. 
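For comparison, a minimal sketch of the `podAntiAffinity` behaviour described above (the `app: web` label and the zone topology key are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: failure-domain.beta.kubernetes.io/zone  # at most one matching Pod per zone
  containers:
  - name: web
    image: nginx
```

Unlike `maxSkew`, this is all or nothing: each zone can hold at most one matching Pod, and there is no way to express an allowed degree of imbalance.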
The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different topology domains - to achieve high availability or cost-saving. This can also help on rolling update workloads and scaling out replicas smoothly. -See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-even-pods-spreading.md#motivation) for more details. +See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details. ## Known Limitations diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md index c58c72f28f818..c6aa34812560d 100644 --- a/content/en/docs/contribute/_index.md +++ b/content/en/docs/contribute/_index.md @@ -4,59 +4,75 @@ title: Contribute to Kubernetes docs linktitle: Contribute main_menu: true weight: 80 +card: + name: contribute + weight: 10 + title: Start contributing --- {{% capture overview %}} -If you would like to help contribute to the Kubernetes documentation or website, -we're happy to have your help! Anyone can contribute, whether you're new to the -project or you've been around a long time, and whether you self-identify as a -developer, an end user, or someone who just can't stand seeing typos. +This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs). + +Kubernetes documentation contributors: + +- Improve existing content +- Create new content +- Translate the documentation +- Manage and publish the documentation parts of the Kubernetes release cycle + +Kubernetes documentation welcomes improvements from all contributors, new and experienced! {{% /capture %}} {{% capture body %}} -## Getting Started +## Getting started -Anyone can open an issue describing problems or desired improvements with documentation, or contribute a change with a pull request (PR). -Some tasks require more trust and need more access in the Kubernetes organization. -See [Participating in SIG Docs](/docs/contribute/participating/) for more details about -of roles and permissions. - -Kubernetes documentation resides in a GitHub repository. While we welcome -contributions from anyone, you do need basic comfort with git and GitHub to -operate effectively in the Kubernetes community. +Anyone can open an issue about documentation, or contribute a change with a pull request (PR) to the [`kubernetes/website` GitHub repository](https://github.com/kubernetes/website). You need to be comfortable with [git](https://git-scm.com/) and [GitHub](https://lab.github.com/) to operate effectively in the Kubernetes community. To get involved with documentation: 1. Sign the CNCF [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md). 2. Familiarize yourself with the [documentation repository](https://github.com/kubernetes/website) and the website's [static site generator](https://gohugo.io). -3. Make sure you understand the basic processes for [improving content](https://kubernetes.io/docs/contribute/start/#improve-existing-content) and [reviewing changes](https://kubernetes.io/docs/contribute/start/#review-docs-pull-requests). +3. Make sure you understand the basic processes for [opening a pull request](/docs/contribute/new-content/new-content/) and [reviewing changes](/docs/contribute/review/reviewing-prs/). -## Contributions best practices +Some tasks require more trust and more access in the Kubernetes organization. 
+See [Participating in SIG Docs](/docs/contribute/participating/) for more details about +roles and permissions. -- Do write clear and meaningful GIT commit messages. -- Make sure to include _Github Special Keywords_ which references the issue and automatically closes the issue when PR is merged. -- When you make a small change to a PR like fixing a typo, any style change, or changing grammar. Make sure you squash your commits so that you dont get a large number of commits for a relatively small change. -- Make sure you include a nice PR description depicting the code you have changes, why to change a following piece of code and ensuring there is sufficient information for the reviewer to understand your PR. -- Additional Readings : - - [chris.beams.io/posts/git-commit/](https://chris.beams.io/posts/git-commit/) - - [github.com/blog/1506-closing-issues-via-pull-requests ](https://github.com/blog/1506-closing-issues-via-pull-requests ) - - [davidwalsh.name/squash-commits-git ](https://davidwalsh.name/squash-commits-git ) +## Your first contribution -## Other ways to contribute +- Read the [Contribution overview](/docs/contribute/new-content/overview/) to learn about the different ways you can contribute. +- See [Contribute to kubernetes/website](https://github.com/kubernetes/website/contribute) to find issues that make good entry points. +- [Open a pull request using GitHub](/docs/contribute/new-content/new-content/#changes-using-github) to existing documentation and learn more about filing issues in GitHub. +- [Review pull requests](/docs/contribute/review/reviewing-prs/) from other Kubernetes community members for accuracy and language. +- Read the Kubernetes [content](/docs/contribute/style/content-guide/) and [style guides](/docs/contribute/style/style-guide/) so you can leave informed comments. +- Learn how to [use page templates](/docs/contribute/style/page-templates/) and [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/) to make bigger changes. -- To contribute to the Kubernetes community through online forums like Twitter or Stack Overflow, or learn about local meetups and Kubernetes events, visit the [Kubernetes community site](/community/). -- To contribute to feature development, read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get started. +## Next steps -{{% /capture %}} +- Learn to [work from a local clone](/docs/contribute/new-content/new-content/#fork-the-repo) of the repository. +- Document [features in a release](/docs/contribute/new-content/new-features/). +- Participate in [SIG Docs](/docs/contribute/participating/), and become a [member or reviewer](/docs/contribute/participating/#roles-and-responsibilities). +- Start or help with a [localization](/docs/contribute/localization/). + +## Get involved with SIG Docs + +[SIG Docs](/docs/contribute/participating/) is the group of contributors who publish and maintain Kubernetes documentation and the website. Getting involved with SIG Docs is a great way for Kubernetes contributors (feature development or otherwise) to have a large impact on the Kubernetes project. -{{% capture whatsnext %}} +SIG Docs communicates with different methods: + +- [Join `#sig-docs` on the Kubernetes Slack instance](http://slack.k8s.io/). Make sure to + introduce yourself! +- [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs), + where broader discussions take place and official decisions are recorded. 
+- Join the [weekly SIG Docs video meeting](https://github.com/kubernetes/community/tree/master/sig-docs). Meetings are always announced on `#sig-docs` and added to the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). You'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone. + +## Other ways to contribute -- For more information about the basics of contributing to documentation, read [Start contributing](/docs/contribute/start/). -- Follow the [Kubernetes documentation style guide](/docs/contribute/style/style-guide/) when proposing changes. -- For more information about SIG Docs, read [Participating in SIG Docs](/docs/contribute/participating/). -- For more information about localizing Kubernetes docs, read [Localizing Kubernetes documentation](/docs/contribute/localization/). +- Visit the [Kubernetes community site](/community/). Participate on Twitter or Stack Overflow, learn about local Kubernetes meetups and events, and more. +- Read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get involved with Kubernetes feature development. +- Submit a [blog post or case study](/docs/contribute/new-content/blogs-case-studies/). {{% /capture %}} diff --git a/content/en/docs/contribute/advanced.md b/content/en/docs/contribute/advanced.md index c8ccd252f4db0..6574bba709607 100644 --- a/content/en/docs/contribute/advanced.md +++ b/content/en/docs/contribute/advanced.md @@ -2,14 +2,14 @@ title: Advanced contributing slug: advanced content_template: templates/concept -weight: 30 +weight: 98 --- {{% capture overview %}} -This page assumes that you've read and mastered the -[Start contributing](/docs/contribute/start/) and -[Intermediate contributing](/docs/contribute/intermediate/) topics and are ready +This page assumes that you understand how to +[contribute to new content](/docs/contribute/new-content/overview) and +[review others' work](/docs/contribute/review/reviewing-prs/), and are ready to learn about more ways to contribute. You need to use the Git command line client and other tools for some of these tasks. @@ -19,7 +19,7 @@ client and other tools for some of these tasks. ## Be the PR Wrangler for a week -SIG Docs [approvers](/docs/contribute/participating/#approvers) take regular turns as the PR wrangler for the repository and are added to the [PR Wrangler rotation scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers#2019-schedule-q1q2) for weekly rotations. +SIG Docs [approvers](/docs/contribute/participating/#approvers) take week-long turns [wrangling PRs](https://github.com/kubernetes/website/wiki/PR-Wranglers) for the repository. The PR wrangler’s duties include: @@ -37,9 +37,9 @@ The PR wrangler’s duties include: - Assign `Docs Review` and `Tech Review` labels to indicate the PR's review status. - Assign`Needs Doc Review` or `Needs Tech Review` for PRs that haven't yet been reviewed. - Assign `Doc Review: Open Issues` or `Tech Review: Open Issues` for PRs that have been reviewed and require further input or action before merging. - - Assign `/lgtm` and `/approve` labels to PRs that can be merged. + - Assign `/lgtm` and `/approve` labels to PRs that can be merged. - Merge PRs when they are ready, or close PRs that shouldn’t be accepted. -- Triage and tag incoming issues daily. 
See [Intermediate contributing](/docs/contribute/intermediate/) for guidelines on how SIG Docs uses metadata. +- Triage and tag incoming issues daily. See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. ### Helpful GitHub queries for wranglers @@ -60,9 +60,9 @@ reviewed is usually small. These queries specifically exclude localization PRs, ### When to close Pull Requests -Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure. +Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure. -- Close any PR where the CLA hasn’t been signed for two weeks. +- Close any PR where the CLA hasn’t been signed for two weeks. PR authors can reopen the PR after signing the CLA, so this is a low-risk way to make sure nothing gets merged without a signed CLA. - Close any PR where the author has not responded to comments or feedback in 2 or more weeks. @@ -82,7 +82,7 @@ An automated service, [`fejta-bot`](https://github.com/fejta-bot) automatically SIG Docs [members](/docs/contribute/participating/#members) can propose improvements. After you've been contributing to the Kubernetes documentation for a while, you -may have ideas for improvement to the [Style Guide](/docs/contribute/style/style-guide/) +may have ideas for improving the [Style Guide](/docs/contribute/style/style-guide/) , the [Content Guide](/docs/contribute/style/content-guide/), the toolchain used to build the documentation, the website style, the processes for reviewing and merging pull requests, or other aspects of the documentation. For maximum transparency, @@ -134,21 +134,21 @@ rotated among SIG Docs approvers. ## Serve as a New Contributor Ambassador SIG Docs [approvers](/docs/contribute/participating/#approvers) can serve as -New Contributor Ambassadors. +New Contributor Ambassadors. -New Contributor Ambassadors work together to welcome new contributors to SIG-Docs, +New Contributor Ambassadors welcome new contributors to SIG-Docs, suggest PRs to new contributors, and mentor new contributors through their first -few PR submissions. +few PR submissions. -Responsibilities for New Contributor Ambassadors include: +Responsibilities for New Contributor Ambassadors include: -- Being available on the [Kubernetes #sig-docs channel](https://kubernetes.slack.com) to answer questions from new contributors. -- Working with PR wranglers to identify good first issues for new contributors. -- Mentoring new contributors through their first few PRs to the docs repo. +- Monitoring the [#sig-docs Slack channel](https://kubernetes.slack.com) for questions from new contributors. +- Working with PR wranglers to identify good first issues for new contributors. +- Mentoring new contributors through their first few PRs to the docs repo. - Helping new contributors create the more complex PRs they need to become Kubernetes members. -- [Sponsoring contributors](/docs/contribute/advanced/#sponsor-a-new-contributor) on their path to becoming Kubernetes members. +- [Sponsoring contributors](/docs/contribute/advanced/#sponsor-a-new-contributor) on their path to becoming Kubernetes members. -Current New Contributor Ambassadors are announced at each SIG-Docs meeting, and in the [Kubernetes #sig-docs channel](https://kubernetes.slack.com). +Current New Contributor Ambassadors are announced at each SIG-Docs meeting, and in the [Kubernetes #sig-docs channel](https://kubernetes.slack.com). 
## Sponsor a new contributor @@ -180,12 +180,12 @@ Approvers must meet the following requirements to be a co-chair: - Have been a SIG Docs approver for at least 6 months - Have [led a Kubernetes docs release](/docs/contribute/advanced/#coordinate-docs-for-a-kubernetes-release) or shadowed two releases - Understand SIG Docs workflows and tooling: git, Hugo, localization, blog subproject -- Understand how other Kubernetes SIGs and repositories affect the SIG Docs workflow, including: [teams in k/org](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml), [process in k/community](https://github.com/kubernetes/community/tree/master/sig-docs), plugins in [k/test-infra](https://github.com/kubernetes/test-infra/), and the role of [SIG Architecture](https://github.com/kubernetes/community/tree/master/sig-architecture). +- Understand how other Kubernetes SIGs and repositories affect the SIG Docs workflow, including: [teams in k/org](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml), [process in k/community](https://github.com/kubernetes/community/tree/master/sig-docs), plugins in [k/test-infra](https://github.com/kubernetes/test-infra/), and the role of [SIG Architecture](https://github.com/kubernetes/community/tree/master/sig-architecture). - Commit at least 5 hours per week (and often more) to the role for a minimum of 6 months ### Responsibilities -The role of co-chair is primarily one of service: co-chairs handle process and policy, schedule and run meetings, schedule PR wranglers, and generally do the things that no one else wants to do in order to build contributor capacity. +The role of co-chair is one of service: co-chairs build contributor capacity, handle process and policy, schedule and run meetings, schedule PR wranglers, advocate for docs in the Kubernetes community, make sure that docs succeed in Kubernetes release cycles, and keep SIG Docs focused on effective priorities. Responsibilities include: @@ -228,7 +228,7 @@ For weekly meetings, copypaste the previous week's notes into the "Past meetings **Honor folks' time**: -- Begin and end meetings punctually +Begin and end meetings on time. **Use Zoom effectively**: @@ -240,9 +240,9 @@ For weekly meetings, copypaste the previous week's notes into the "Past meetings ### Recording meetings on Zoom When you’re ready to start the recording, click Record to Cloud. - + When you’re ready to stop recording, click Stop. The video uploads automatically to YouTube. -{{% /capture %}} +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/intermediate.md b/content/en/docs/contribute/intermediate.md deleted file mode 100644 index 9e477a90a4df1..0000000000000 --- a/content/en/docs/contribute/intermediate.md +++ /dev/null @@ -1,967 +0,0 @@ ---- -title: Intermediate contributing -slug: intermediate -content_template: templates/concept -weight: 20 -card: - name: contribute - weight: 50 ---- - -{{% capture overview %}} - -This page assumes that you've read and mastered the tasks in the -[start contributing](/docs/contribute/start/) topic and are ready to -learn about more ways to contribute. - -{{< note >}} -Some tasks require you to use the Git command line client and other tools. -{{< /note >}} - -{{% /capture %}} - -{{% capture body %}} - -Now that you've gotten your feet wet and helped out with the Kubernetes docs in -the ways outlined in the [start contributing](/docs/contribute/start/) topic, -you may feel ready to do more. 
These tasks assume that you have, or are willing -to gain, deeper knowledge of the following topic areas: - -- Kubernetes concepts -- Kubernetes documentation workflows -- Where and how to find information about upcoming Kubernetes features -- Strong research skills in general - -These tasks are not as sequential as the beginner tasks. There is no expectation -that one person does all of them all of the time. - -## Learn about Prow - -[Prow](https://github.com/kubernetes/test-infra/blob/master/prow/README.md) is -the Kubernetes-based CI/CD system that runs jobs against pull requests (PRs). Prow -enables chatbot-style commands to handle GitHub actions across the Kubernetes -organization. You can perform a variety of actions such as [adding and removing -labels](#add-and-remove-labels), closing issues, and assigning an approver. Type -the Prow command into a comment field using the `/` format. Some common -commands are: - -- `/lgtm` (looks good to me): adds the `lgtm` label, signalling that a reviewer has finished reviewing the PR -- `/approve`: approves a PR so it can merge (approver use only) -- `/assign`: assigns a person to review or approve a PR -- `/close`: closes an issue or PR -- `/hold`: adds the `do-not-merge/hold` label, indicating the PR cannot be automatically merged -- `/hold cancel`: removes the `do-not-merge/hold` label - -{{% note %}} -Not all commands are available to every user. The Prow bot will tell you if you -try to execute a command beyond your authorization level. -{{% /note %}} - -Familiarize yourself with the [list of Prow -commands](https://prow.k8s.io/command-help) before you review PRs or triage issues. - - -## Review pull requests - -In any given week, a specific docs approver volunteers to do initial triage -and review of [pull requests and issues](#triage-and-categorize-issues). This -person is the "PR Wrangler" for the week. The schedule is maintained using the -[PR Wrangler scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers). -To be added to this list, attend the weekly SIG Docs meeting and volunteer. Even -if you are not on the schedule for the current week, you can still review pull -requests (PRs) that are not already under active review. - -In addition to the rotation, an automated system comments on each new PR and -suggests reviewers and approvers for the PR, based on the list of approvers and -reviewers in the affected files. The PR author is expected to follow the -guidance of the bot, and this also helps PRs to get reviewed quickly. - -We want to get pull requests (PRs) merged and published as quickly as possible. -To ensure the docs are accurate and up to date, each PR needs to be reviewed by -people who understand the content, as well as people with experience writing -great documentation. - -Reviewers and approvers need to provide actionable and constructive feedback to -keep contributors engaged and help them to improve. Sometimes helping a new -contributor get their PR ready to merge takes more time than just rewriting it -yourself, but the project is better in the long term when we have a diversity of -active participants. - -Before you start reviewing PRs, make sure you are familiar with the -[Documentation Content Guide](/docs/contribute/style/content-guide/), the -[Documentation Style Guide](/docs/contribute/style/style-guide/), -and the [code of conduct](/community/code-of-conduct/). - -### Find a PR to review - -To see all open PRs, go to the **Pull Requests** tab in the GitHub repository. 
-A PR is eligible for review when it meets all of the following criteria: - -- Has the `cncf-cla:yes` tag -- Does not have WIP in the description -- Does not a have tag including the phrase `do-not-merge` -- Has no merge conflicts -- Is based against the correct branch (usually `master` unless the PR relates to - a feature that has not yet been released) -- Is not being actively reviewed by another docs person (other technical - reviewers are fine), unless that person has explicitly asked for your help. In - particular, leaving lots of new comments after other review cycles have - already been completed on a PR can be discouraging and counter-productive. - -If a PR is not eligible to merge, leave a comment to let the author know about -the problem and offer to help them fix it. If they've been informed and have not -fixed the problem in several weeks or months, eventually their PR will be closed -without merging. - -If you're new to reviewing, or you don't have a lot of bandwidth, look for PRs -with the `size/XS` or `size/S` tag set. The size is automatically determined by -the number of lines the PR changes. - -#### Reviewers and approvers - -The Kubernetes website repo operates differently than some of the Kubernetes -code repositories when it comes to the roles of reviewers and approvers. For -more information about the responsibilities of reviewers and approvers, see -[Participating](/docs/contribute/participating/). Here's an overview. - -- A reviewer reviews pull request content for technical accuracy. A reviewer - indicates that a PR is technically accurate by leaving a `/lgtm` comment on - the PR. - - {{< note >}}Don't add a `/lgtm` unless you are confident in the technical - accuracy of the documentation modified or introduced in the PR.{{< /note >}} - -- An approver reviews pull request content for docs quality and adherence to - SIG Docs guidelines found in the Content and Style guides. Only people listed as - approvers in the - [`OWNERS`](https://github.com/kubernetes/website/blob/master/OWNERS) file can - approve a PR. To approve a PR, leave an `/approve` comment on the PR. - -A PR is merged when it has both a `/lgtm` comment from anyone in the Kubernetes -organization and an `/approve` comment from an approver in the -`sig-docs-maintainers` group, as long as it is not on hold and the PR author -has signed the CLA. - -{{< note >}} - -The ["Participating"](/docs/contribute/participating/#approvers) section contains more information for reviewers and approvers, including specific responsibilities for approvers. - -{{< /note >}} - -### Review a PR - -1. Read the PR description and read any attached issues or links, if - applicable. "Drive-by reviewing" is sometimes more harmful than helpful, so - make sure you have the right knowledge to provide a meaningful review. - -2. If someone else is the best person to review this particular PR, let them - know by adding a comment with `/assign @`. If you have - asked a non-docs person for technical review but still want to review the PR - from a docs point of view, keep going. - -3. Go to the **Files changed** tab. Look over all the changed lines. Removed - content has a red background, and those lines also start with a `-` symbol. - Added content has a green background, and those lines also start with a `+` - symbol. Within a line, the actual modified content has a slightly darker - green background than the rest of the line. 
- - - Especially if the PR uses tricky formatting or changes CSS, Javascript, - or other site-wide elements, you can preview the website with the PR - applied. Go to the **Conversation** tab and click the **Details** link - for the `deploy/netlify` test, near the bottom of the page. It opens in - the same browser window by default, so open it in a new window so you - don't lose your partial review. Switch back to the **Files changed** tab - to resume your review. - - Make sure the PR complies with the Content and Style guides; link the - author to the relevant part of the guide(s) if it doesn't. - - If you have a question, comment, or other feedback about a given - change, hover over a line and click the blue-and-white `+` symbol that - appears. Type your comment and click **Start a review**. - - If you have more comments, leave them in the same way. - - By convention, if you see a small problem that does not have to do with - the main purpose of the PR, such as a typo or whitespace error, you can - call it out, prefixing your comment with `nit:` so that the author knows - you consider it trivial. They should still address it. - - When you've reviewed everything, or if you didn't have any comments, go - back to the top of the page and click **Review changes**. Choose either - **Comment** or **Request Changes**. Add a summary of your review, and - add appropriate - [Prow commands](https://prow.k8s.io/command-help) to separate lines in - the Review Summary field. SIG Docs follows the - [Kubernetes code review process](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process). - All of your comments will be sent to the PR author in a single - notification. - - - If you think the PR is ready to be merged, add the text `/approve` to - your summary. - - If the PR does not need additional technical review, add the - text `/lgtm` as well. - - If the PR *does* need additional technical review, add the text - `/assign` with the GitHub username of the person who needs to - provide technical review. Look at the `reviewers` field in the - front-matter at the top of a given Markdown file to see who can - provide technical review. - - To prevent the PR from being merged, add `/hold`. This sets the - label `do-not-merge/hold`. - - If a PR has no conflicts and has the `lgtm` and `approve` labels but - no `hold` label, it is merged automatically. - - If a PR has the `lgtm` and/or `approve` labels and new changes are - detected, these labels are removed automatically. - - See - [the list of all available slash commands](https://prow.k8s.io/command-help) - that can be used in PRs. - - - If you previously selected **Request changes** and the PR author has - addressed your concerns, you can change your review status either in the - **Files changed** tab or at the bottom of the **Conversation** tab. Be - sure to add the `/approve` tag and assign technical reviewers if necessary, - so that the PR can be merged. - -### Commit into another person's PR - -Leaving PR comments is helpful, but there may be times when you need to commit -into another person's PR, rather than just leaving a review. - -Resist the urge to "take over" for another person unless they explicitly ask -you to, or you want to resurrect a long-abandoned PR. While it may be faster -in the short term, it deprives the person of the chance to contribute. - -The process you use depends on whether you need to edit a file that is already -in the scope of the PR or a file that the PR has not yet touched. 
- -You can't commit into someone else's PR if either of the following things is -true: - -- If the PR author pushed their branch directly to the - [https://github.com/kubernetes/website/](https://github.com/kubernetes/website/) - repository, only a reviewer with push access can commit into their PR. - Authors should be encouraged to push their branch to their fork before - opening the PR. -- If the PR author explicitly disallowed edits from approvers, you can't - commit into their PR unless they change this setting. - -#### If the file is already changed by the PR - -This method uses the GitHub UI. If you prefer, you can use the command line -even if the file you want to change is part of the PR, if you are more -comfortable working that way. - -1. Click the **Files changed** tab. -2. Scroll down to the file you want to edit, and click the pencil icon for - that file. -3. Make your changes, add a commit message in the field below the editor, and - click **Commit changes**. - -Your commit is now pushed to the branch the PR represents (probably on the -author's fork) and now shows up in the PR and your changes are reflected in -the **Files changed** tab. Leave a comment letting the PR author know you -changed the PR. - -If the author is using the command line rather than the GitHub UI to work on -this PR, they need to fetch their fork's changes and rebase their local branch -on the branch in their fork, before doing additional work on the PR. - -#### If the file has not yet been changed by the PR - -If changes need to be made to a file that is not yet included in the PR, you -need to use the command line. You can always use this method, if you prefer it -to the GitHub UI. - -1. Get the URL for the author's fork. You can find it near the bottom of the - **Conversation** tab. Look for the text **Add more commits by pushing to**. - The first link after this phrase is to the branch, and the second link is - to the fork. Copy the second link. Note the name of the branch for later. - -2. Add the fork as a remote. In your terminal, go to your clone of the - repository. Decide on a name to give the remote (such as the author's - GitHub username), and add the remote using the following syntax: - - ```bash - git remote add - ``` - -3. Fetch the remote. This doesn't change any local files, but updates your - clone's notion of the remote's objects (such as branches and tags) and - their current state. - - ```bash - git remote fetch - ``` - -4. Check out the remote branch. This command will fail if you already have a - local branch with the same name. - - ```bash - git checkout - ``` - -5. Make your changes, use `git add` to add them, and commit them. - -6. Push your changes to the author's remote. - - ```bash - git push - ``` - -7. Go back to the GitHub IU and refresh the PR. Your changes appear. Leave the - PR author a comment letting them know you changed the PR. - -If the author is using the command line rather than the GitHub UI to work on -this PR, they need to fetch their fork's changes and rebase their local branch -on the branch in their fork, before doing additional work on the PR. - -## Work from a local clone - -For changes that require multiple files or changes that involve creating new -files or moving files around, working from a local Git clone makes more sense -than relying on the GitHub UI. These instructions use the `git` command and -assume that you have it installed locally. You can adapt them to use a local -graphical Git client instead. 
- -### Clone the repository - -You only need to clone the repository once per physical system where you work -on the Kubernetes documentation. - -1. Create a fork of the `kubernetes/website` repository on GitHub. In your - web browser, go to - [https://github.com/kubernetes/website](https://github.com/kubernetes/website) - and click the **Fork** button. After a few seconds, you are redirected to - the URL for your fork, which is `https://github.com//website`. - -2. In a terminal window, use `git clone` to clone the your fork. - - ```bash - git clone git@github.com//website - ``` - - The new directory `website` is created in your current directory, with - the contents of your GitHub repository. Your fork is your `origin`. - -3. Change to the new `website` directory. Set the `kubernetes/website` repository as the `upstream` remote. - - ```bash - cd website - - git remote add upstream https://github.com/kubernetes/website.git - ``` - -4. Confirm your `origin` and `upstream` repositories. - - ```bash - git remote -v - ``` - - Output is similar to: - - ```bash - origin git@github.com:/website.git (fetch) - origin git@github.com:/website.git (push) - upstream https://github.com/kubernetes/website (fetch) - upstream https://github.com/kubernetes/website (push) - ``` - -### Work on the local repository - -Before you start a new unit of work on your local repository, you need to figure -out which branch to base your work on. The answer depends on what you are doing, -but the following guidelines apply: - -- For general improvements to existing content, start from `master`. -- For new content that is about features that already exist in a released - version of Kubernetes, start from `master`. -- For long-running efforts that multiple SIG Docs contributors will collaborate on, - such as content reorganization, use a specific feature branch created for that - effort. -- For new content that relates to upcoming but unreleased Kubernetes versions, - use the pre-release feature branch created for that Kubernetes version. - -For more guidance, see -[Choose which branch to use](/docs/contribute/start/#choose-which-git-branch-to-use). - -After you decide which branch to start your work (or _base it on_, in Git -terminology), use the following workflow to be sure your work is based on the -most up-to-date version of that branch. - -1. There are three different copies of the repository when you work locally: - `local`, `upstream`, and `origin`. Fetch both the `origin` and `upstream` remotes. This - updates your cache of the remotes without actually changing any of the copies. - - ```bash - git fetch origin - git fetch upstream - ``` - - This workflow deviates from the one defined in the Community's [GitHub - Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). - In this workflow, you do not need to merge your local copy of `master` with `upstream/master` before - pushing the updates to your fork. That step is not required in - `kubernetes/website` because you are basing your branch on the upstream repository. - -2. Create a local working branch based on the most appropriate upstream branch: - `upstream/dev-1.xx` for feature developers or `upstream/master` for all other - contributors. This example assumes you are basing your work on - `upstream/master`. Because you didn't update your local `master` to match - `upstream/master` in the previous step, you need to explicitly create your - branch off of `upstream/master`. 
- - ```bash - git checkout -b upstream/master - ``` - -3. With your new branch checked out, make your changes using a text editor. - At any time, use the `git status` command to see what you've changed. - -4. When you are ready to submit a pull request, commit your changes. First - use `git status` to see what changes need to be added to the changeset. - There are two important sections: `Changes staged for commit` and - `Changes not staged for commit`. Any files that show up in the latter - section under `modified` or `untracked` need to be added if you want them to - be part of this commit. For each file that needs to be added, use `git add`. - - ```bash - git add example-file.md - ``` - - When all your intended changes are included, create a commit using the - `git commit` command: - - ```bash - git commit -m "Your commit message" - ``` - - {{< note >}} - Do not reference a GitHub issue or pull request by ID or URL in the - commit message. If you do, it will cause that issue or pull request to get - a notification every time the commit shows up in a new Git branch. You can - link issues and pull requests together later in the GitHub UI. - {{< /note >}} - -5. Optionally, you can test your change by staging the site locally using the - `hugo` command. See [View your changes locally](#view-your-changes-locally). - You'll be able to view your changes after you submit the pull request, as - well. - -6. Before you can create a pull request which includes your local commit, you - need to push the branch to your fork, which is the endpoint for the `origin` - remote. - - ```bash - git push origin - ``` - - Technically, you can omit the branch name from the `push` command, but - the behavior in that case depends upon the version of Git you are using. - The results are more repeatable if you include the branch name. - -7. Go to https://github.com/kubernetes/website in your web browser. GitHub - detects that you pushed a new branch to your fork and offers to create a pull - request. Fill in the pull request template. - - - The title should be no more than 50 characters and summarize the intent - of the change. - - The long-form description should contain more information about the fix, - including a line like `Fixes #12345` if the pull request fixes a GitHub - issue. This will cause the issue to be closed automatically when the - pull request is merged. - - You can add labels or other metadata and assign reviewers. See - [Triage and categorize issues](#triage-and-categorize-issues) for the - syntax. - - Click **Create pull request**. - -8. Several automated tests will run against the state of the website with your - changes applied. If any of the tests fail, click the **Details** link for - more information. If the Netlify test completes successfully, its - **Details** link goes to a staged version of the Kubernetes website with - your changes applied. This is how reviewers will check your changes. - -9. When you need to make more changes, address the feedback locally and amend - your original commit. - - ```bash - git commit -a --amend - ``` - - - `-a`: commit all changes - - `--amend`: amend the previous commit, rather than creating a new one - - An editor will open so you can update your commit message if necessary. - - If you use `git commit -m` as in Step 4, you will create a new commit rather - than amending changes to your original commit. Creating a new commit means - you must squash your commits before your pull request can be merged. 
- - Follow the instructions in Step 6 to push your commit. The commit is added - to your pull request and the tests run again, including re-staging the - Netlify staged site. - -10. If a reviewer adds changes to your pull request, you need to fetch those - changes from your fork before you can add more changes. Use the following - commands to do this, assuming that your branch is currently checked out. - - ```bash - git fetch origin - git rebase origin/ - ``` - - After rebasing, you need to add the `--force-with-lease` flag to - force push the branch's new changes to your fork. - - ```bash - git push --force-with-lease origin - ``` - -11. If someone else's change is merged into the branch your work is based on, - and you have made changes to the same parts of the same files, a conflict - might occur. If the pull request shows that there are conflicts to resolve, - you can resolve them using the GitHub UI or you can resolve them locally. - - First, do step 10 to be sure that your fork and your local branch are in - the same state. - - Next, fetch `upstream` and rebase your branch on the branch it was - originally based on, like `upstream/master`. - - ```bash - git fetch upstream - git rebase upstream/master - ``` - - If there are conflicts Git can't automatically resolve, you can see the - conflicted files using the `git status` command. For each conflicted file, - edit it and look for the conflict markers `>>>`, `<<<`, and `===`. Resolve - the conflict and remove the conflict markers. Then add the changes to the - changeset using `git add ` and continue the rebase using - `git rebase --continue`. When all commits have been applied and there are - no more conflicts, `git status` will show that you are not in a rebase and - there are no changes that need to be committed. At that point, force-push - the branch to your fork, and the pull request should no longer show any - conflicts. - -12. If your PR still has multiple commits after amending previous commits, you - must squash multiple commits into a single commit before your PR can be merged. - You can check the number of commits on your PR's `Commits` tab or by running - `git log` locally. Squashing commits is a form of rebasing. - - ```bash - git rebase -i HEAD~ - ``` - - The `-i` switch tells git you want to rebase interactively. This enables - you to tell git which commits to squash into the first one. For - example, you have 3 commits on your branch: - - ``` - 12345 commit 4 (2 minutes ago) - 6789d commit 3 (30 minutes ago) - 456df commit 2 (1 day ago) - ``` - - You must squash your last three commits into the first one. - - ``` - git rebase -i HEAD~3 - ``` - - That command opens an editor with the following: - - ``` - pick 456df commit 2 - pick 6789d commit 3 - pick 12345 commit 4 - ``` - - Change `pick` to `squash` on the commits you want to squash, and make sure - the one `pick` commit is at the top of the editor. - - ``` - pick 456df commit 2 - squash 6789d commit 3 - squash 12345 commit 4 - ``` - - Save and close your editor. Then push your squashed - commit with `git push --force-with-lease origin `. - - -If you're having trouble resolving conflicts or you get stuck with -anything else related to your pull request, ask for help on the `#sig-docs` -Slack channel or the -[kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). 
- -### View your changes locally - -{{< tabs name="tab_with_hugo" >}} -{{% tab name="Hugo in a container" %}} - -If you aren't ready to create a pull request but you want to see what your -changes look like, you can build and run a docker image to generate all the documentation and -serve it locally. - -1. Build the image locally: - - ```bash - make docker-image - ``` - -2. Once the `kubernetes-hugo` image has been built locally, you can build and serve the site: - - ```bash - make docker-serve - ``` - -3. In your browser's address bar, enter `localhost:1313`. Hugo will watch the - filesystem for changes and rebuild the site as needed. - -4. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C` - or just close the terminal window. -{{% /tab %}} -{{% tab name="Hugo locally" %}} - -Alternatively, you can install and use the `hugo` command on your development machine: - -1. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/master/netlify.toml). - -2. In a terminal, go to the root directory of your clone of the Kubernetes - docs, and enter this command: - - ```bash - hugo server - ``` - -3. In your browser’s address bar, enter `localhost:1313`. - -4. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C` - or just close the terminal window. -{{% /tab %}} -{{< /tabs >}} - -## Triage and categorize issues - -People in SIG Docs are responsible only for triaging and categorizing -documentation issues. General website issues are also filed in the -`kubernetes/website` repository. - -When you triage an issue, you: - -- Validate the issue - - Make sure the issue is about website documentation. Some issues can be closed quickly by - answering a question or pointing the reporter to a resource. See the - [Support requests or code bug reports](#support-requests-or-code-bug-reports) section for details. - - Assess whether the issue has merit. Add the `triage/needs-information` label if the issue doesn't have enough - detail to be actionable or the template is not filled out adequately. - Close the issue if it has both the `lifecycle/stale` and `triage/needs-information` labels. -- Add a priority label (the - [Issue Triage Guidelines](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#define-priority) - define Priority labels in detail) - - `priority/critical-urgent` - do this right now - - `priority/important-soon` - do this within 3 months - - `priority/important-longterm` - do this within 6 months - - `priority/backlog` - this can be deferred indefinitely; lowest priority; - do this when resources are available - - `priority/awaiting-more-evidence` - placeholder for a potentially good issue - so it doesn't get lost -- Optionally, add a `help` or `good first issue` label if the issue is suitable - for someone with very little Kubernetes or SIG Docs experience. Consult - [Help Wanted and Good First Issue Labels](https://github.com/kubernetes/community/blob/master/contributors/guide/help-wanted.md) - for guidance. -- At your discretion, take ownership of an issue and submit a PR for it - (especially if it is quick or relates to work you were already doing). 
- -This GitHub Issue [filter](https://github.com/kubernetes/website/issues?q=is%3Aissue+is%3Aopen+-label%3Apriority%2Fbacklog+-label%3Apriority%2Fimportant-longterm+-label%3Apriority%2Fimportant-soon+-label%3Atriage%2Fneeds-information+-label%3Atriage%2Fsupport+sort%3Acreated-asc) -finds all the issues that need to be triaged. - -If you have questions about triaging an issue, ask in `#sig-docs` on Slack or -the [kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). - -### Add and remove labels - -To add a label, leave a comment like `/` or `/ `. The label must -already exist. If you try to add a label that does not exist, the command is -silently ignored. - -Examples: - -- `/triage needs-information` -- `/priority important-soon` -- `/language ja` -- `/help` -- `/good-first-issue` -- `/lifecycle frozen` - -To remove a label, leave a comment like `/remove-` or `/remove- `. - -Examples: - -- `/remove-triage needs-information` -- `/remove-priority important-soon` -- `/remove-language ja` -- `/remove-help` -- `/remove-good-first-issue` -- `/remove-lifecycle frozen` - -The list of all the labels used across Kubernetes is -[here](https://github.com/kubernetes/kubernetes/labels). Not all labels -are used by SIG Docs. - -### More about labels - -- An issue can have multiple labels. -- Some labels use slash notation for grouping, which can be thought of like - "sub-labels". For instance, many `sig/` labels exist, such as `sig/cli` and - `sig/api-machinery` ([full list](https://github.com/kubernetes/website/labels?utf8=%E2%9C%93&q=sig%2F)). -- Some labels are automatically added based on metadata in the files involved - in the issue, slash commands used in the comments of the issue, or - information in the issue text. -- Additional labels are manually added by the person triaging the issue (or the person - reporting the issue) - - `kind/bug`, `kind/feature`, and `kind/documentation`: A bug is a problem with existing content or - functionality, and a feature is a request for new content or functionality. - The `kind/documentation` label is seldom used. - - `language/ja`, `language/ko` and similar [language - labels](https://github.com/kubernetes/website/labels?utf8=%E2%9C%93&q=language) - if the issue is about localized content. - -### Issue lifecycle - -Issues are generally opened and closed within a relatively short time span. -However, sometimes an issue may not have associated activity after it is -created. Other times, an issue may need to remain open for longer than 90 days. - -`lifecycle/stale`: after 90 days with no activity, an issue is automatically -labeled as stale. The issue will be automatically closed if the lifecycle is not -manually reverted using the `/remove-lifecycle stale` command. - -`lifecycle/frozen`: an issue with this label will not become stale after 90 days -of inactivity. A user manually adds this label to issues that need to remain -open for much longer than 90 days, such as those with a -`priority/important-longterm` label. - - -### Handling special issue types - -We encounter the following types of issues often enough to document how -to handle them. - -#### Duplicate issues - -If a single problem has one or more issues open for it, the problem should be -consolidated into a single issue. You should decide which issue to keep open (or -open a new issue), port over all relevant information and link related issues. -Finally, label all other issues that describe the same problem with -`triage/duplicate` and close them. 
Only having a single issue to work on will -help reduce confusion and avoid duplicating work on the same problem. - -#### Dead link issues - -Depending on where the dead link is reported, different actions are required to -resolve the issue. Dead links in the API and Kubectl docs are automation issues -and should be assigned `/priority critical-urgent` until the problem can be fully understood. All other -dead links are issues that need to be manually fixed and can be assigned `/priority important-longterm`. - -#### Blog issues - -[Kubernetes Blog](https://kubernetes.io/blog/) entries are expected to become -outdated over time, so we maintain only blog entries that are less than one year old. -If an issue is related to a blog entry that is more than one year old, it should be closed -without fixing. - -#### Support requests or code bug reports - -Some issues opened for docs are instead issues with the underlying code, or -requests for assistance when something (like a tutorial) didn’t work. For issues -unrelated to docs, close the issue with the `triage/support` label and a comment -directing the requester to support venues (Slack, Stack Overflow) and, if -relevant, where to file an issue for bugs with features (kubernetes/kubernetes -is a great place to start). - -Sample response to a request for support: - -```none -This issue sounds more like a request for support and less -like an issue specifically for docs. I encourage you to bring -your question to the `#kubernetes-users` channel in -[Kubernetes slack](http://slack.k8s.io/). You can also search -resources like -[Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) -for answers to similar questions. - -You can also open issues for Kubernetes functionality in - https://github.com/kubernetes/kubernetes. - -If this is a documentation issue, please re-open this issue. -``` - -Sample code bug report response: - -```none -This sounds more like an issue with the code than an issue with -the documentation. Please open an issue at -https://github.com/kubernetes/kubernetes/issues. - -If this is a documentation issue, please re-open this issue. -``` - -## Document new features - -Each major Kubernetes release includes new features, and many of them need -at least a small amount of documentation to show people how to use them. - -Often, the SIG responsible for a feature submits draft documentation for the -feature as a pull request to the appropriate release branch of -`kubernetes/website` repository, and someone on the SIG Docs team provides -editorial feedback or edits the draft directly. - -### Find out about upcoming features - -To find out about upcoming features, attend the weekly sig-release meeting (see -the [community](https://kubernetes.io/community/) page for upcoming meetings) -and monitor the release-specific documentation -in the [kubernetes/sig-release](https://github.com/kubernetes/sig-release/) -repository. Each release has a sub-directory under the [/sig-release/tree/master/releases/](https://github.com/kubernetes/sig-release/tree/master/releases) -directory. Each sub-directory contains a release schedule, a draft of the release -notes, and a document listing each person on the release team. - -- The release schedule contains links to all other documents, meetings, - meeting minutes, and milestones relating to the release. It also contains - information about the goals and timeline of the release, and any special - processes in place for this release. 
Near the bottom of the document, several - release-related terms are defined. - - This document also contains a link to the **Feature tracking sheet**, which is - the official way to find out about all new features scheduled to go into the - release. -- The release team document lists who is responsible for each release role. If - it's not clear who to talk to about a specific feature or question you have, - either attend the release meeting to ask your question, or contact the release - lead so that they can redirect you. -- The release notes draft is a good place to find out a little more about - specific features, changes, deprecations, and more about the release. The - content is not finalized until late in the release cycle, so use caution. - -#### The feature tracking sheet - -The feature tracking sheet -[for a given Kubernetes release](https://github.com/kubernetes/sig-release/tree/master/releases) lists each feature that is planned for a release. -Each line item includes the name of the feature, a link to the feature's main -GitHub issue, its stability level (Alpha, Beta, or Stable), the SIG and -individual responsible for implementing it, whether it -needs docs, a draft release note for the feature, and whether it has been -merged. Keep the following in mind: - -- Beta and Stable features are generally a higher documentation priority than - Alpha features. -- It's hard to test (and therefore, document) a feature that hasn't been merged, - or is at least considered feature-complete in its PR. -- Determining whether a feature needs documentation is a manual process and - just because a feature is not marked as needing docs doesn't mean it doesn't - need them. - -### Document a feature - -As stated above, draft content for new features is usually submitted by the SIG -responsible for implementing the new feature. This means that your role may be -more of a shepherding role for a given feature than developing the documentation -from scratch. - -After you've chosen a feature to document/shepherd, ask about it in the `#sig-docs` -Slack channel, in a weekly sig-docs meeting, or directly on the PR filed by the -feature SIG. If you're given the go-ahead, you can edit into the PR using one of -the techniques described in -[Commit into another person's PR](#commit-into-another-persons-pr). - -If you need to write a new topic, the following links are useful: - -- [Writing a New Topic](/docs/contribute/style/write-new-topic/) -- [Using Page Templates](/docs/contribute/style/page-templates/) -- [Documentation Style Guide](/docs/contribute/style/style-guide/) -- [Documentation Content Guide](/docs/contribute/style/content-guide/) - -### SIG members documenting new features - -If you are a member of a SIG developing a new feature for Kubernetes, you need -to work with SIG Docs to be sure your feature is documented in time for the -release. Check the -[feature tracking spreadsheet](https://github.com/kubernetes/sig-release/tree/master/releases) -or check in the #sig-release Slack channel to verify scheduling details and -deadlines. Some deadlines related to documentation are: - -- **Docs deadline - Open placeholder PRs**: Open a pull request against the - `release-X.Y` branch in the `kubernetes/website` repository, with a small - commit that you will amend later. Use the Prow command `/milestone X.Y` to - assign the PR to the relevant milestone. This alerts the docs person managing - this release that the feature docs are coming. 
If your feature does not need - any documentation changes, make sure the sig-release team knows this, by - mentioning it in the #sig-release Slack channel. If the feature does need - documentation but the PR is not created, the feature may be removed from the - milestone. -- **Docs deadline - PRs ready for review**: Your PR now needs to contain a first - draft of the documentation for your feature. Don't worry about formatting or - polishing. Just describe what the feature does and how to use it. The docs - person managing the release will work with you to get the content into shape - to be published. If your feature needs documentation and the first draft - content is not received, the feature may be removed from the milestone. -- **Docs complete - All PRs reviewed and ready to merge**: If your PR has not - yet been merged into the `release-X.Y` branch by this deadline, work with the - docs person managing the release to get it in. If your feature needs - documentation and the docs are not ready, the feature may be removed from the - milestone. - -If your feature is an Alpha feature and is behind a feature gate, make sure you -add it to [Feature gates](/docs/reference/command-line-tools-reference/feature-gates/) -as part of your pull request. If your feature is moving to Beta -or to General Availability, update the feature gates file. - -## Contribute to other repos - -The [Kubernetes project](https://github.com/kubernetes) contains more than 50 -individual repositories. Many of these repositories contain code or content that -can be considered documentation, such as user-facing help text, error messages, -user-facing text in API references, or even code comments. - -If you see text and you aren't sure where it comes from, you can use GitHub's -search tool at the level of the Kubernetes organization to search through all -repositories for that text. This can help you figure out where to submit your -issue or PR. - -Each repository may have its own processes and procedures. Before you file an -issue or submit a PR, read that repository's `README.md`, `CONTRIBUTING.md`, and -`code-of-conduct.md`, if they exist. - -Most repositories use issue and PR templates. Have a look through some open -issues and PRs to get a feel for that team's processes. Make sure to fill out -the templates with as much detail as possible when you file issues or PRs. - -## Localize content - -The Kubernetes documentation is written in English first, but we want people to -be able to read it in their language of choice. If you are comfortable -writing in another language, especially in the software domain, you can help -localize the Kubernetes documentation or provide feedback on existing localized -content. See [Localization](/docs/contribute/localization/) and ask on the -[kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) -or in `#sig-docs` on Slack if you are interested in helping out. - -### Working with localized content - -Follow these guidelines for working with localized content: - -- Limit PRs to a single language. - - Each language has its own reviewers and approvers. - -- Reviewers, verify that PRs contain changes to only one language. - - If a PR contains changes to source in more than one language, ask the PR contributor to open separate PRs for each language. 
- -{{% /capture %}} - -{{% capture whatsnext %}} - -When you are comfortable with all of the tasks discussed in this topic and you -want to engage with the Kubernetes docs team in even deeper ways, read the -[advanced docs contributor](/docs/contribute/advanced/) topic. - -{{% /capture %}} diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 2522a832f3d5e..cb3cf0318704e 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -1,13 +1,14 @@ --- -title: Localizing Kubernetes Documentation +title: Localizing Kubernetes documentation content_template: templates/concept approvers: - remyleone - rlenferink - zacharysarah +weight: 50 card: name: contribute - weight: 30 + weight: 50 title: Translating the docs --- @@ -21,9 +22,9 @@ This page shows you how to [localize](https://blog.mozilla.org/l10n/2011/12/14/i ## Getting started -Because contributors can't approve their own pull requests, you need at least two contributors to begin a localization. +Because contributors can't approve their own pull requests, you need at least two contributors to begin a localization. -All localization teams must be self-sustaining with their own resources. We're happy to host your work, but we can't translate it for you. +All localization teams must be self-sustaining with their own resources. The Kubernetes website is happy to host your work, but it's up to you to translate it. ### Find your two-letter language code @@ -31,7 +32,7 @@ First, consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/p ### Fork and clone the repo -First, [create your own fork](/docs/contribute/start/#improve-existing-content) of the [kubernetes/website](https://github.com/kubernetes/website) repository. +First, [create your own fork](/docs/contribute/new-content/new-content/#fork-the-repo) of the [kubernetes/website](https://github.com/kubernetes/website) repository. Then, clone your fork and `cd` into it: @@ -42,12 +43,12 @@ cd website ### Open a pull request -Next, [open a pull request](/docs/contribute/start/#submit-a-pull-request) (PR) to add a localization to the `kubernetes/website` repository. +Next, [open a pull request](/docs/contribute/new-content/open-a-pr/#open-a-pr) (PR) to add a localization to the `kubernetes/website` repository. The PR must include all of the [minimum required content](#minimum-required-content) before it can be approved. For an example of adding a new localization, see the PR to enable [docs in French](https://github.com/kubernetes/website/pull/12548). - + ### Join the Kubernetes GitHub organization Once you've opened a localization PR, you can become members of the Kubernetes GitHub organization. Each person on the team needs to create their own [Organization Membership Request](https://github.com/kubernetes/org/issues/new/choose) in the `kubernetes/org` repository. @@ -74,7 +75,7 @@ For an example of adding a label, see the PR for adding the [Italian language la Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/). Other localization teams are happy to help you get started and answer any questions you have. -You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding channels for Indonesian and Portuguese](https://github.com/kubernetes/community/pull/3605). 
+You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding channels for Indonesian and Portuguese](https://github.com/kubernetes/community/pull/3605). ## Minimum required content @@ -105,11 +106,11 @@ Add a language-specific subdirectory to the [`content`](https://github.com/kuber mkdir content/de ``` -### Localize the Community Code of Conduct +### Localize the community code of conduct Open a PR against the [`cncf/foundation`](https://github.com/cncf/foundation/tree/master/code-of-conduct-languages) repository to add the code of conduct in your language. -### Add a localized README +### Add a localized README file To guide other localization contributors, add a new [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of k/website, where `**` is the two-letter language code. For example, a German README file would be `README-de.md`. @@ -192,10 +193,10 @@ mkdir -p content/de/docs/tutorials cp content/en/docs/tutorials/kubernetes-basics.md content/de/docs/tutorials/kubernetes-basics.md ``` -Translation tools can speed up the translation process. For example, some editors offers plugins to quickly translate text. +Translation tools can speed up the translation process. For example, some editors offers plugins to quickly translate text. {{< caution >}} -Machine-generated translation alone does not meet the minimum standard of quality and requires extensive human review to meet that standard. +Machine-generated translation is insufficient on its own. Localization requires extensive human review to meet minimum standards of quality. {{< /caution >}} To ensure accuracy in grammar and meaning, members of your localization team should carefully review all machine-generated translations before publishing. @@ -211,7 +212,7 @@ To find source files for the most recent release: The latest version is {{< latest-version >}}, so the most recent release branch is [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}). -### Site strings in i18n/ +### Site strings in i18n Localizations must include the contents of [`i18n/en.toml`](https://github.com/kubernetes/website/blob/master/i18n/en.toml) in a new language-specific file. Using German as an example: `i18n/de.toml`. @@ -264,7 +265,7 @@ Teams must merge localized content into the same release branch from which the c An approver must maintain a development branch by keeping it current with its source branch and resolving merge conflicts. The longer a development branch stays open, the more maintenance it typically requires. Consider periodically merging development branches and opening new ones, rather than maintaining one extremely long-running development branch. -At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous development branch and the current development branch. +At the beginning of every team milestone, it's helpful to open an issue [comparing upstream changes](https://github.com/kubernetes/website/blob/master/scripts/upstream_changes.py) between the previous development branch and the current development branch. While only approvers can open a new development branch and merge pull requests, anyone can open a pull request for a new development branch. No special permissions are required. 
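To make this concrete, here is a minimal sketch of keeping a development branch current with its source branch. The branch names (`dev-1.18-de.1`, `release-1.18`) and the `upstream` remote are placeholders for illustration only; use the names your localization team has actually agreed on.

```bash
# Minimal sketch: refresh a localization development branch from its source branch.
# Assumes `upstream` points at https://github.com/kubernetes/website and that the
# branch names below stand in for your team's real branches.
git fetch upstream
git checkout dev-1.18-de.1
git merge upstream/release-1.18   # resolve any conflicts, then commit the merge
git push origin dev-1.18-de.1     # then open a pull request against the development branch
```

Merging from the source branch regularly keeps each round of conflicts small, which is much easier than untangling months of divergence at once.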
@@ -272,11 +273,11 @@ For more information about working from forks or directly from the repository, s ## Upstream contributions -SIG Docs welcomes [upstream contributions and corrections](/docs/contribute/intermediate#localize-content) to the English source. +SIG Docs welcomes upstream contributions and corrections to the English source. ## Help an existing localization -You can also help add or improve content to an existing localization. Join the [Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/) for the localization, and start opening PRs to help. +You can also help add or improve content to an existing localization. Join the [Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/) for the localization, and start opening PRs to help. Please limit pull requests to a single localization since pull requests that change content in multiple localizations could be difficult to review. {{% /capture %}} diff --git a/content/en/docs/contribute/new-content/_index.md b/content/en/docs/contribute/new-content/_index.md new file mode 100644 index 0000000000000..4992a3765442a --- /dev/null +++ b/content/en/docs/contribute/new-content/_index.md @@ -0,0 +1,4 @@ +--- +title: Contributing new content +weight: 20 +--- diff --git a/content/en/docs/contribute/new-content/blogs-case-studies.md b/content/en/docs/contribute/new-content/blogs-case-studies.md new file mode 100644 index 0000000000000..90c50ae6e125b --- /dev/null +++ b/content/en/docs/contribute/new-content/blogs-case-studies.md @@ -0,0 +1,59 @@ +--- +title: Submitting blog posts and case studies +linktitle: Blogs and case studies +slug: blogs-case-studies +content_template: templates/concept +weight: 30 +--- + + +{{% capture overview %}} + +Anyone can write a blog post and submit it for review. +Case studies require extensive review before they're approved. + +{{% /capture %}} + +{{% capture body %}} + +## Write a blog post + +Blog posts should not be +vendor pitches. They must contain content that applies broadly to +the Kubernetes community. The SIG Docs [blog subproject](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject) manages the review process for blog posts. For more information, see [Submit a post](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject#submit-a-post). + +To submit a blog post, you can either: + +- Use the +[Kubernetes blog submission form](https://docs.google.com/forms/d/e/1FAIpQLSdMpMoSIrhte5omZbTE7nB84qcGBy8XnnXhDFoW0h7p2zwXrw/viewform) +- [Open a pull request](/docs/contribute/new-content/new-content/#fork-the-repo) with a new blog post. Create new blog posts in the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts) directory. + +If you open a pull request, ensure that your blog post follows the correct naming conventions and frontmatter information: + +- The markdown file name must follow the format `YYY-MM-DD-Your-Title-Here.md`. For example, `2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md`. +- The front matter must include the following: + +```yaml +--- +layout: blog +title: "Your Title Here" +date: YYYY-MM-DD +slug: text-for-URL-link-here-no-spaces +--- +``` + +## Submit a case study + +Case studies highlight how organizations are using Kubernetes to solve +real-world problems. The Kubernetes marketing team and members of the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} collaborate with you on all case studies. 
+ +Have a look at the source for the +[existing case studies](https://github.com/kubernetes/website/tree/master/content/en/case-studies). + +Refer to the [case study guidelines](https://github.com/cncf/foundation/blob/master/case-study-guidelines.md) and submit your request as outlined in the guidelines. + +{{% /capture %}} + +{{% capture whatsnext %}} + +{{% /capture %}} diff --git a/content/en/docs/contribute/new-content/new-features.md b/content/en/docs/contribute/new-content/new-features.md new file mode 100644 index 0000000000000..68087a2a797c0 --- /dev/null +++ b/content/en/docs/contribute/new-content/new-features.md @@ -0,0 +1,134 @@ +--- +title: Documenting a feature for a release +linktitle: Documenting for a release +content_template: templates/concept +main_menu: true +weight: 20 +card: + name: contribute + weight: 45 + title: Documenting a feature for a release +--- +{{% capture overview %}} + +Each major Kubernetes release introduces new features that require documentation. New releases also bring updates to existing features and documentation (such as upgrading a feature from alpha to beta). + +Generally, the SIG responsible for a feature submits draft documentation of the +feature as a pull request to the appropriate development branch of the +`kubernetes/website` repository, and someone on the SIG Docs team provides +editorial feedback or edits the draft directly. This section covers the branching +conventions and process used during a release by both groups. + +{{% /capture %}} + +{{% capture body %}} + +## For documentation contributors + +In general, documentation contributors don't write content from scratch for a release. +Instead, they work with the SIG creating a new feature to refine the draft documentation and make it release ready. + +After you've chosen a feature to document or assist, ask about it in the `#sig-docs` +Slack channel, in a weekly SIG Docs meeting, or directly on the PR filed by the +feature SIG. If you're given the go-ahead, you can edit into the PR using one of +the techniques described in +[Commit into another person's PR](/docs/contribute/review/for-approvers/#commit-into-another-persons-pr). + +### Find out about upcoming features + +To find out about upcoming features, attend the weekly SIG Release meeting (see +the [community](https://kubernetes.io/community/) page for upcoming meetings) +and monitor the release-specific documentation +in the [kubernetes/sig-release](https://github.com/kubernetes/sig-release/) +repository. Each release has a sub-directory in the [/sig-release/tree/master/releases/](https://github.com/kubernetes/sig-release/tree/master/releases) +directory. The sub-directory contains a release schedule, a draft of the release +notes, and a document listing each person on the release team. + +The release schedule contains links to all other documents, meetings, +meeting minutes, and milestones relating to the release. It also contains +information about the goals and timeline of the release, and any special +processes in place for this release. Near the bottom of the document, several +release-related terms are defined. + +This document also contains a link to the **Feature tracking sheet**, which is +the official way to find out about all new features scheduled to go into the +release. + +The release team document lists who is responsible for each release role. 
If +it's not clear who to talk to about a specific feature or question you have, +either attend the release meeting to ask your question, or contact the release +lead so that they can redirect you. + +The release notes draft is a good place to find out about +specific features, changes, deprecations, and more about the release. The +content is not finalized until late in the release cycle, so use caution. + +### Feature tracking sheet + +The feature tracking sheet [for a given Kubernetes release](https://github.com/kubernetes/sig-release/tree/master/releases) +lists each feature that is planned for a release. +Each line item includes the name of the feature, a link to the feature's main +GitHub issue, its stability level (Alpha, Beta, or Stable), the SIG and +individual responsible for implementing it, whether it +needs docs, a draft release note for the feature, and whether it has been +merged. Keep the following in mind: + +- Beta and Stable features are generally a higher documentation priority than + Alpha features. +- It's hard to test (and therefore to document) a feature that hasn't been merged, + or is at least considered feature-complete in its PR. +- Determining whether a feature needs documentation is a manual process and + just because a feature is not marked as needing docs doesn't mean it doesn't + need them. + +## For developers or other SIG members + +This section is information for members of other Kubernetes SIGs documenting new features +for a release. + +If you are a member of a SIG developing a new feature for Kubernetes, you need +to work with SIG Docs to be sure your feature is documented in time for the +release. Check the +[feature tracking spreadsheet](https://github.com/kubernetes/sig-release/tree/master/releases) +or check in the `#sig-release` Kubernetes Slack channel to verify scheduling details and +deadlines. + +### Open a placeholder PR + +1. Open a pull request against the +`dev-{{< skew nextMinorVersion >}}` branch in the `kubernetes/website` repository, with a small +commit that you will amend later. +2. Use the Prow command `/milestone {{< skew nextMinorVersion >}}` to +assign the PR to the relevant milestone. This alerts the docs person managing +this release that the feature docs are coming. + +If your feature does not need +any documentation changes, make sure the sig-release team knows this, by +mentioning it in the `#sig-release` Slack channel. If the feature does need +documentation but the PR is not created, the feature may be removed from the +milestone. + +### PR ready for review + +When ready, populate your placeholder PR with feature documentation. + +Do your best to describe your feature and how to use it. If you need help structuring your documentation, ask in the `#sig-docs` slack channel. + +When you complete your content, the documentation person assigned to your feature reviews it. Use their suggestions to get the content to a release ready state. + +If your feature needs documentation and the first draft +content is not received, the feature may be removed from the milestone. + +### All PRs reviewed and ready to merge + +If your PR has not yet been merged into the `dev-{{< skew nextMinorVersion >}}` branch by the release deadline, work with the +docs person managing the release to get it in by the deadline. If your feature needs +documentation and the docs are not ready, the feature may be removed from the +milestone. 
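As a rough illustration (not a required set of commands), the small placeholder commit from step 1 can be an empty commit that you amend later. Here, `dev-1.19` stands in for the release's actual development branch, and the branch name and commit message are placeholders:

```bash
# Illustrative only: prepare a placeholder docs PR for an upcoming feature.
# "dev-1.19" stands in for the release's development branch; the branch name
# and commit message are placeholders.
git fetch upstream
git checkout -b myfeature-docs upstream/dev-1.19
git commit --allow-empty -m "Placeholder for MyFeature documentation"
git push origin myfeature-docs
```

Open the pull request from that branch against the development branch, then add the `/milestone` comment described in step 2.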
+ +If your feature is an Alpha feature and is behind a feature gate, make sure you +add it to [Alpha/Beta Feature gates](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table +as part of your pull request. If your feature is moving out of Alpha, make sure to +remove it from that table. + +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/new-content/open-a-pr.md b/content/en/docs/contribute/new-content/open-a-pr.md new file mode 100644 index 0000000000000..4407568aff09c --- /dev/null +++ b/content/en/docs/contribute/new-content/open-a-pr.md @@ -0,0 +1,484 @@ +--- +title: Opening a pull request +slug: new-content +content_template: templates/concept +weight: 10 +card: + name: contribute + weight: 40 +--- + +{{% capture overview %}} + +{{< note >}} +**Code developers**: If you are documenting a new feature for an +upcoming Kubernetes release, see +[Document a new feature](/docs/contribute/new-content/new-features/). +{{< /note >}} + +To contribute new content pages or improve existing content pages, open a pull request (PR). Make sure you follow all the requirements in the [Before you begin](/docs/contribute/new-content/overview/#before-you-begin) section. + +If your change is small, or you're unfamiliar with git, read [Changes using GitHub](#changes-using-github) to learn how to edit a page. + +If your changes are large, read [Work from a local fork](#fork-the-repo) to learn how to make changes locally on your computer. + +{{% /capture %}} + +{{% capture body %}} + +## Changes using GitHub + +If you're less experienced with git workflows, here's an easier method of +opening a pull request. + +1. On the page where you see the issue, select the pencil icon at the top right. + You can also scroll to the bottom of the page and select **Edit this page**. + +2. Make your changes in the GitHub markdown editor. + +3. Below the editor, fill in the **Propose file change** + form. In the first field, give your commit message a title. In + the second field, provide a description. + + {{< note >}} + Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in your commit message. You can add those to the pull request + description later. + {{< /note >}} + +4. Select **Propose file change**. + +5. Select **Create pull request**. + +6. The **Open a pull request** screen appears. Fill in the form: + + - The **Subject** field of the pull request defaults to the commit summary. + You can change it if needed. + - The **Body** contains your extended commit message, if you have one, + and some template text. Add the + details the template text asks for, then delete the extra template text. + - Leave the **Allow edits from maintainers** checkbox selected. + + {{< note >}} + PR descriptions are a great way to help reviewers understand your change. For more information, see [Opening a PR](#open-a-pr). + {{}} + +7. Select **Create pull request**. + +### Addressing feedback in GitHub + +Before merging a pull request, Kubernetes community members review and +approve it. The `k8s-ci-robot` suggests reviewers based on the nearest +owner mentioned in the pages. If you have someone specific in mind, +leave a comment with their GitHub username in it. + +If a reviewer asks you to make changes: + +1. Go to the **Files changed** tab. +2. Select the pencil (edit) icon on any files changed by the +pull request. +3. 
Make the changes requested. +4. Commit the changes. + +If you are waiting on a reviewer, reach out once every 7 days. You can also post a message in the `#sig-docs` Slack channel. + +When your review is complete, a reviewer merges your PR and your changes go live a few minutes later. + +## Work from a local fork {#fork-the-repo} + +If you're more experienced with git, or if your changes are larger than a few lines, +work from a local fork. + +Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed on your computer. You can also use a git UI application. + +### Fork the kubernetes/website repository + +1. Navigate to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository. +2. Select **Fork**. + +### Create a local clone and set the upstream + +3. In a terminal window, clone your fork: + + ```bash + git clone git@github.com//website + ``` + +4. Navigate to the new `website` directory. Set the `kubernetes/website` repository as the `upstream` remote: + + ```bash + cd website + + git remote add upstream https://github.com/kubernetes/website.git + ``` + +5. Confirm your `origin` and `upstream` repositories: + + ```bash + git remote -v + ``` + + Output is similar to: + + ```bash + origin git@github.com:/website.git (fetch) + origin git@github.com:/website.git (push) + upstream https://github.com/kubernetes/website (fetch) + upstream https://github.com/kubernetes/website (push) + ``` + +6. Fetch commits from your fork's `origin/master` and `kubernetes/website`'s `upstream/master`: + + ```bash + git fetch origin + git fetch upstream + ``` + + This makes sure your local repository is up to date before you start making changes. + + {{< note >}} + This workflow is different than the [Kubernetes Community GitHub Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). You do not need to merge your local copy of `master` with `upstream/master` before pushing updates to your fork. + {{< /note >}} + +### Create a branch + +1. Decide which branch base to your work on: + + - For improvements to existing content, use `upstream/master`. + - For new content about existing features, use `upstream/master`. + - For localized content, use the localization's conventions. For more information, see [localizing Kubernetes documentation](/docs/contribute/localization/). + - For new features in an upcoming Kubernetes release, use the feature branch. For more information, see [documenting for a release](/docs/contribute/new-content/new-features/). + - For long-running efforts that multiple SIG Docs contributors collaborate on, + like content reorganization, use a specific feature branch created for that + effort. + + If you need help choosing a branch, ask in the `#sig-docs` Slack channel. + +2. Create a new branch based on the branch identified in step 1. This example assumes the base branch is `upstream/master`: + + ```bash + git checkout -b upstream/master + ``` + +3. Make your changes using a text editor. + +At any time, use the `git status` command to see what files you've changed. + +### Commit your changes + +When you are ready to submit a pull request, commit your changes. + +1. In your local repository, check which files you need to commit: + + ```bash + git status + ``` + + Output is similar to: + + ```bash + On branch + Your branch is up to date with 'origin/'. + + Changes not staged for commit: + (use "git add ..." to update what will be committed) + (use "git checkout -- ..." 
to discard changes in working directory) + + modified: content/en/docs/contribute/new-content/contributing-content.md + + no changes added to commit (use "git add" and/or "git commit -a") + ``` + +2. Add the files listed under **Changes not staged for commit** to the commit: + + ```bash + git add + ``` + + Repeat this for each file. + +3. After adding all the files, create a commit: + + ```bash + git commit -m "Your commit message" + ``` + + {{< note >}} + Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in your commit message. You can add those to the pull request + description later. + {{< /note >}} + +4. Push your local branch and its new commit to your remote fork: + + ```bash + git push origin + ``` + +### Preview your changes locally {#preview-locally} + +It's a good idea to preview your changes locally before pushing them or opening a pull request. A preview lets you catch build errors or markdown formatting problems. + +You can either build the website's docker image or run Hugo locally. Building the docker image is slower but displays [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/), which can be useful for debugging. + +{{< tabs name="tab_with_hugo" >}} +{{% tab name="Hugo in a container" %}} + +1. Build the image locally: + + ```bash + make docker-image + ``` + +2. After building the `kubernetes-hugo` image locally, build and serve the site: + + ```bash + make docker-serve + ``` + +3. In a web browser, navigate to `https://localhost:1313`. Hugo watches the + changes and rebuilds the site as needed. + +4. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, + or close the terminal window. + +{{% /tab %}} +{{% tab name="Hugo on the command line" %}} + +Alternately, install and use the `hugo` command on your computer: + +5. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/master/netlify.toml). + +6. In a terminal, go to your Kubernetes website repository and start the Hugo server: + + ```bash + cd /website + hugo server + ``` + +7. In your browser’s address bar, enter `https://localhost:1313`. + +8. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, + or close the terminal window. + +{{% /tab %}} +{{< /tabs >}} + +### Open a pull request from your fork to kubernetes/website {#open-a-pr} + +1. In a web browser, go to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository. +2. Select **New Pull Request**. +3. Select **compare across forks**. +4. From the **head repository** drop-down menu, select your fork. +5. From the **compare** drop-down menu, select your branch. +6. Select **Create Pull Request**. +7. Add a description for your pull request: + - **Title** (50 characters or less): Summarize the intent of the change. + - **Description**: Describe the change in more detail. + - If there is a related GitHub issue, include `Fixes #12345` or `Closes #12345` in the description. GitHub's automation closes the mentioned issue after merging the PR if used. If there are other related PRs, link those as well. + - If you want advice on something specific, include any questions you'd like reviewers to think about in your description. + +8. Select the **Create pull request** button. + + Congratulations! 
Your pull request is available in [Pull requests](https://github.com/kubernetes/website/pulls). + + +After opening a PR, GitHub runs automated tests and tries to deploy a preview using [Netlify](https://www.netlify.com/). + + - If the Netlify build fails, select **Details** for more information. + - If the Netlify build succeeds, select **Details** opens a staged version of the Kubernetes website with your changes applied. This is how reviewers check your changes. + +GitHub also automatically assigns labels to a PR, to help reviewers. You can add them too, if needed. For more information, see [Adding and removing issue labels](/docs/contribute/review/for-approvers/#adding-and-removing-issue-labels). + +### Addressing feedback locally + +1. After making your changes, amend your previous commit: + + ```bash + git commit -a --amend + ``` + + - `-a`: commits all changes + - `--amend`: amends the previous commit, rather than creating a new one + +2. Update your commit message if needed. + +3. Use `git push origin ` to push your changes and re-run the Netlify tests. + + {{< note >}} + If you use `git commit -m` instead of amending, you must [squash your commits](#squashing-commits) before merging. + {{< /note >}} + +#### Changes from reviewers + +Sometimes reviewers commit to your pull request. Before making any other changes, fetch those commits. + +1. Fetch commits from your remote fork and rebase your working branch: + + ```bash + git fetch origin + git rebase origin/ + ``` + +2. After rebasing, force-push new changes to your fork: + + ```bash + git push --force-with-lease origin + ``` + +#### Merge conflicts and rebasing + +{{< note >}} +For more information, see [Git Branching - Basic Branching and Merging](https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging#_basic_merge_conflicts), [Advanced Merging](https://git-scm.com/book/en/v2/Git-Tools-Advanced-Merging), or ask in the `#sig-docs` Slack channel for help. +{{< /note >}} + +If another contributor commits changes to the same file in another PR, it can create a merge conflict. You must resolve all merge conflicts in your PR. + +1. Update your fork and rebase your local branch: + + ```bash + git fetch origin + git rebase origin/ + ``` + + Then force-push the changes to your fork: + + ```bash + git push --force-with-lease origin + ``` + +2. Fetch changes from `kubernetes/website`'s `upstream/master` and rebase your branch: + + ```bash + git fetch upstream + git rebase upstream/master + ``` + +3. Inspect the results of the rebase: + + ```bash + git status + ``` + + This results in a number of files marked as conflicted. + +4. Open each conflicted file and look for the conflict markers: `>>>`, `<<<`, and `===`. Resolve the conflict and delete the conflict marker. + + {{< note >}} + For more information, see [How conflicts are presented](https://git-scm.com/docs/git-merge#_how_conflicts_are_presented). + {{< /note >}} + +5. Add the files to the changeset: + + ```bash + git add + ``` +6. Continue the rebase: + + ```bash + git rebase --continue + ``` + +7. Repeat steps 2 to 5 as needed. + + After applying all commits, the `git status` command shows that the rebase is complete. + +8. Force-push the branch to your fork: + + ```bash + git push --force-with-lease origin + ``` + + The pull request no longer shows any conflicts. 
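If it's hard to see which files are still conflicted, or where the markers sit inside a large file (step 4 above), the following commands can help. The file path and the sample output are hypothetical:

```bash
# List the files that still contain unresolved conflicts during the rebase.
git diff --name-only --diff-filter=U

# Show where the conflict markers are in one of those files (hypothetical path).
grep -n -E '^(<<<<<<<|=======|>>>>>>>)' content/en/docs/concepts/example.md
# 12:<<<<<<< HEAD
# 16:=======
# 20:>>>>>>> Add example content
```

Once every marker is resolved and deleted, continue with `git add` and `git rebase --continue` as described in steps 5 and 6.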
+ + +### Squashing commits + +{{< note >}} +For more information, see [Git Tools - Rewriting History](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History), or ask in the `#sig-docs` Slack channel for help. +{{< /note >}} + +If your PR has multiple commits, you must squash them into a single commit before merging your PR. You can check the number of commits on your PR's **Commits** tab or by running the `git log` command locally. + +{{< note >}} +This topic assumes `vim` as the command line text editor. +{{< /note >}} + +1. Start an interactive rebase: + + ```bash + git rebase -i HEAD~ + ``` + + Squashing commits is a form of rebasing. The `-i` switch tells git you want to rebase interactively. `HEAD~}} + For more information, see [Interactive Mode](https://git-scm.com/docs/git-rebase#_interactive_mode). + {{< /note >}} + +2. Start editing the file. + + Change the original text: + + ```bash + pick d875112ca Original commit + pick 4fa167b80 Address feedback 1 + pick 7d54e15ee Address feedback 2 + ``` + + To: + + ```bash + pick d875112ca Original commit + squash 4fa167b80 Address feedback 1 + squash 7d54e15ee Address feedback 2 + ``` + + This squashes commits `4fa167b80 Address feedback 1` and `7d54e15ee Address feedback 2` into `d875112ca Original commit`, leaving only `d875112ca Original commit` as a part of the timeline. + +3. Save and exit your file. + +4. Push your squashed commit: + + ```bash + git push --force-with-lease origin + ``` + +## Contribute to other repos + +The [Kubernetes project](https://github.com/kubernetes) contains 50+ repositories. Many of these repositories contain documentation: user-facing help text, error messages, API references or code comments. + +If you see text you'd like to improve, use GitHub to search all repositories in the Kubernetes organization. +This can help you figure out where to submit your issue or PR. + +Each repository has its own processes and procedures. Before you file an +issue or submit a PR, read that repository's `README.md`, `CONTRIBUTING.md`, and +`code-of-conduct.md`, if they exist. + +Most repositories use issue and PR templates. Have a look through some open +issues and PRs to get a feel for that team's processes. Make sure to fill out +the templates with as much detail as possible when you file issues or PRs. + +{{% /capture %}} + +{{% capture whatsnext %}} + +- Read [Reviewing](/docs/contribute/reviewing/revewing-prs) to learn more about the review process. + +{{% /capture %}} diff --git a/content/en/docs/contribute/new-content/overview.md b/content/en/docs/contribute/new-content/overview.md new file mode 100644 index 0000000000000..831ed12a7afeb --- /dev/null +++ b/content/en/docs/contribute/new-content/overview.md @@ -0,0 +1,58 @@ +--- +title: Contributing new content overview +linktitle: Overview +content_template: templates/concept +main_menu: true +weight: 5 +--- + +{{% capture overview %}} + +This section contains information you should know before contributing new content. + + +{{% /capture %}} + +{{% capture body %}} + +## Contributing basics + +- Write Kubernetes documentation in Markdown and build the Kubernetes site using [Hugo](https://gohugo.io/). +- The source is in [GitHub](https://github.com/kubernetes/website). You can find Kubernetes documentation at `/content/en/docs/`. Some of the reference documentation is automatically generated from scripts in the `update-imported-docs/` directory. +- [Page templates](/docs/contribute/style/page-templates/) control the presentation of documentation content in Hugo. 
+- In addition to the standard Hugo shortcodes, we use a number of [custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/) in our documentation to control the presentation of content. +- Documentation source is available in multiple languages in `/content/`. Each language has its own folder with a two-letter code determined by the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php). For example, English documentation source is stored in `/content/en/docs/`. +- For more information about contributing to documentation in multiple languages or starting a new translation, see [localization](/docs/contribute/localization). + +## Before you begin {#before-you-begin} + +### Sign the CNCF CLA {#sign-the-cla} + +All Kubernetes contributors **must** read the [Contributor guide](https://github.com/kubernetes/community/blob/master/contributors/guide/README.md) and [sign the Contributor License Agreement (CLA)](https://github.com/kubernetes/community/blob/master/CLA.md). + +Pull requests from contributors who haven't signed the CLA fail the automated tests. The name and email you provide must match those found in your `git config`, and your git name and email must match those used for the CNCF CLA. + +### Choose which Git branch to use + +When opening a pull request, you need to know in advance which branch to base your work on. + +Scenario | Branch +:---------|:------------ +Existing or new English language content for the current release | `master` +Content for a feature change release | The branch which corresponds to the major and minor version the feature change is in, using the pattern `dev-release-`. For example, if a feature changes in the `{{< latest-version >}}` release, then add documentation changes to the ``dev-{{< release-branch >}}`` branch. +Content in other languages (localizations) | Use the localization's convention. See the [Localization branching strategy](/docs/contribute/localization/#branching-strategy) for more information. + + +If you're still not sure which branch to choose, ask in `#sig-docs` on Slack. + +{{< note >}} +If you already submitted your pull request and you know that the base branch +was wrong, you (and only you, the submitter) can change it. +{{< /note >}} + +### Languages per PR + +Limit pull requests to one language per PR. If you need to make an identical change to the same code sample in multiple languages, open a separate PR for each language. + + +{{% /capture %}} diff --git a/content/en/docs/contribute/participating.md b/content/en/docs/contribute/participating.md index dbc44b586777f..3f491dc856b5c 100644 --- a/content/en/docs/contribute/participating.md +++ b/content/en/docs/contribute/participating.md @@ -1,9 +1,10 @@ --- title: Participating in SIG Docs content_template: templates/concept +weight: 60 card: name: contribute - weight: 40 + weight: 60 --- {{% capture overview %}} @@ -35,7 +36,7 @@ aspects of Kubernetes -- the Kubernetes website and documentation. ## Roles and responsibilities -- **Anyone** can contribute to Kubernetes documentation. To contribute, you must [sign the CLA](/docs/contribute/start#sign-the-cla) and have a GitHub account. +- **Anyone** can contribute to Kubernetes documentation. To contribute, you must [sign the CLA](/docs/contribute/new-content/overview/#sign-the-cla) and have a GitHub account. - **Members** of the Kubernetes organization are contributors who have spent time and effort on the Kubernetes project, usually by opening pull requests with accepted changes. 
See [Community membership](https://github.com/kubernetes/community/blob/master/community-membership.md) for membership criteria. - A SIG Docs **Reviewer** is a member of the Kubernetes organization who has expressed interest in reviewing documentation pull requests, and has been @@ -61,7 +62,7 @@ Anyone can do the following: If you are not a member of the Kubernetes organization, using `/lgtm` has no effect on automated systems. {{< /note >}} -After [signing the CLA](/docs/contribute/start#sign-the-cla), anyone can also: +After [signing the CLA](/docs/contribute/new-content/overview/#sign-the-cla), anyone can also: - Open a pull request to improve existing content, add new content, or write a blog post or case study. ## Members @@ -222,7 +223,7 @@ Approvers improve the documentation by reviewing and merging pull requests into - Visit the Netlify page preview for a PR to make sure things look good before approving. -- Participate in the [PR Wrangler rotation scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers) for weekly rotations. SIG Docs expects all approvers to participate in this +- Participate in the [PR Wrangler rotation schedule](https://github.com/kubernetes/website/wiki/PR-Wranglers) for weekly rotations. SIG Docs expects all approvers to participate in this rotation. See [Be the PR Wrangler for a week](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) for more details. @@ -298,7 +299,7 @@ SIG Docs approvers. Here's how it works. - Any Kubernetes member can add the `lgtm` label by adding a `/lgtm` comment. - Only SIG Docs approvers can merge a pull request by adding an `/approve` comment. Some approvers also perform additional - specific roles, such as [PR Wrangler](#pr-wrangler) or + specific roles, such as [PR Wrangler](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) or [SIG Docs chairperson](#sig-docs-chairperson). {{% /capture %}} @@ -307,7 +308,8 @@ SIG Docs approvers. Here's how it works. For more information about contributing to the Kubernetes documentation, see: -- [Start contributing](/docs/contribute/start/) -- [Documentation style](/docs/contribute/style/) +- [Contributing new content](/docs/contribute/overview/) +- [Reviewing content](/docs/contribute/review/reviewing-prs) +- [Documentation style guide](/docs/contribute/style/) {{% /capture %}} diff --git a/content/en/docs/contribute/review/_index.md b/content/en/docs/contribute/review/_index.md new file mode 100644 index 0000000000000..bc70e3c6f1b62 --- /dev/null +++ b/content/en/docs/contribute/review/_index.md @@ -0,0 +1,14 @@ +--- +title: Reviewing changes +weight: 30 +--- + +{{% capture overview %}} + +This section describes how to review content. + +{{% /capture %}} + +{{% capture body %}} + +{{% /capture %}} diff --git a/content/en/docs/contribute/review/for-approvers.md b/content/en/docs/contribute/review/for-approvers.md new file mode 100644 index 0000000000000..f2be84f10d322 --- /dev/null +++ b/content/en/docs/contribute/review/for-approvers.md @@ -0,0 +1,228 @@ +--- +title: Reviewing for approvers and reviewers +linktitle: For approvers and reviewers +slug: for-approvers +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +SIG Docs [Reviewers](/docs/contribute/participating/#reviewers) and [Approvers](/docs/contribute/participating/#approvers) do a few extra things when reviewing a change. + +Every week a specific docs approver volunteers to triage +and review pull requests. This +person is the "PR Wrangler" for the week. 
See the +[PR Wrangler scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers) for more information. To become a PR Wrangler, attend the weekly SIG Docs meeting and volunteer. Even if you are not on the schedule for the current week, you can still review pull +requests (PRs) that are not already under active review. + +In addition to the rotation, a bot assigns reviewers and approvers +for the PR based on the owners for the affected files. + +{{% /capture %}} + + +{{% capture body %}} + +## Reviewing a PR + +Kubernetes documentation follows the [Kubernetes code review process](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process). + +Everything described in [Reviewing a pull request](/docs/contribute/review/reviewing-prs) applies, but Reviewers and Approvers should also do the following: + +- Using the `/assign` Prow command to assign a specific reviewer to a PR as needed. This is extra important +when it comes to requesting technical review from code contributors. + + {{< note >}} + Look at the `reviewers` field in the front-matter at the top of a Markdown file to see who can + provide technical review. + {{< /note >}} + +- Making sure the PR follows the [Content](/docs/contribute/style/content-guide/) and [Style](/docs/contribute/style/style-guide/) guides; link the author to the relevant part of the guide(s) if it doesn't. +- Using the GitHub **Request Changes** option when applicable to suggest changes to the PR author. +- Changing your review status in GitHub using the `/approve` or `/lgtm` Prow commands, if your suggestions are implemented. + +## Commit into another person's PR + +Leaving PR comments is helpful, but there might be times when you need to commit +into another person's PR instead. + +Do not "take over" for another person unless they explicitly ask +you to, or you want to resurrect a long-abandoned PR. While it may be faster +in the short term, it deprives the person of the chance to contribute. + +The process you use depends on whether you need to edit a file that is already +in the scope of the PR, or a file that the PR has not yet touched. + +You can't commit into someone else's PR if either of the following things is +true: + +- If the PR author pushed their branch directly to the + [https://github.com/kubernetes/website/](https://github.com/kubernetes/website/) + repository. Only a reviewer with push access can commit to another user's PR. + + {{< note >}} + Encourage the author to push their branch to their fork before + opening the PR next time. + {{< /note >}} + +- The PR author explicitly disallows edits from approvers. + +## Prow commands for reviewing + +[Prow](https://github.com/kubernetes/test-infra/blob/master/prow/README.md) is +the Kubernetes-based CI/CD system that runs jobs against pull requests (PRs). Prow +enables chatbot-style commands to handle GitHub actions across the Kubernetes +organization, like [adding and removing +labels](#add-and-remove-labels), closing issues, and assigning an approver. Enter Prow commands as GitHub comments using the `/` format. + +The most common prow commands reviewers and approvers use are: + +{{< table caption="Prow commands for reviewing" >}} +Prow Command | Role Restrictions | Description +:------------|:------------------|:----------- +`/lgtm` | Anyone, but triggers automation if a Reviewer or Approver uses it | Signals that you've finished reviewing a PR and are satisfied with the changes. +`/approve` | Approvers | Approves a PR for merging. 
+`/assign` | Reviewers or Approvers | Assigns a person to review or approve a PR +`/close` | Reviewers or Approvers | Closes an issue or PR. +`/hold` | Anyone | Adds the `do-not-merge/hold` label, indicating the PR cannot be automatically merged. +`/hold cancel` | Anyone | Removes the `do-not-merge/hold` label. +{{< /table >}} + +See [the Prow command reference](https://prow.k8s.io/command-help) to see the full list +of commands you can use in a PR. + +## Triage and categorize issues + + +In general, SIG Docs follows the [Kubernetes issue triage](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md) process and uses the same labels. + + +This GitHub Issue [filter](https://github.com/kubernetes/website/issues?q=is%3Aissue+is%3Aopen+-label%3Apriority%2Fbacklog+-label%3Apriority%2Fimportant-longterm+-label%3Apriority%2Fimportant-soon+-label%3Atriage%2Fneeds-information+-label%3Atriage%2Fsupport+sort%3Acreated-asc) +finds issues that might need triage. + +### Triaging an issue + +1. Validate the issue + - Make sure the issue is about website documentation. Some issues can be closed quickly by + answering a question or pointing the reporter to a resource. See the + [Support requests or code bug reports](#support-requests-or-code-bug-reports) section for details. + - Assess whether the issue has merit. + - Add the `triage/needs-information` label if the issue doesn't have enough + detail to be actionable or the template is not filled out adequately. + - Close the issue if it has both the `lifecycle/stale` and `triage/needs-information` labels. + +2. Add a priority label (the + [Issue Triage Guidelines](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#define-priority) define priority labels in detail) + + {{< table caption="Issue labels" >}} + Label | Description + :------------|:------------------ + `priority/critical-urgent` | Do this right now. + `priority/important-soon` | Do this within 3 months. + `priority/important-longterm` | Do this within 6 months. + `priority/backlog` | Deferrable indefinitely. Do when resources are available. + `priority/awaiting-more-evidence` | Placeholder for a potentially good issue so it doesn't get lost. + `help` or `good first issue` | Suitable for someone with very little Kubernetes or SIG Docs experience. See [Help Wanted and Good First Issue Labels](https://github.com/kubernetes/community/blob/master/contributors/guide/help-wanted.md) for more information. + + {{< /table >}} + + At your discretion, take ownership of an issue and submit a PR for it + (especially if it's quick or relates to work you're already doing). + +If you have questions about triaging an issue, ask in `#sig-docs` on Slack or +the [kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). + +## Adding and removing issue labels + +To add a label, leave a comment in one of the following formats: + +- `/` (for example, `/good-first-issue`) +- `/ ` (for example, `/triage needs-information` or `/language ja`) + +To remove a label, leave a comment in one of the following formats: + +- `/remove-` (for example, `/remove-help`) +- `/remove- ` (for example, `/remove-triage needs-information`)` + +In both cases, the label must already exist. If you try to add a label that does not exist, the command is +silently ignored. + +For a list of all labels, see the [website repository's Labels section](https://github.com/kubernetes/website/labels). Not all labels are used by SIG Docs. 
+ +### Issue lifecycle labels + +Issues are generally opened and closed quickly. +However, sometimes an issue is inactive after its opened. +Other times, an issue may need to remain open for longer than 90 days. + +{{< table caption="Issue lifecycle labels" >}} +Label | Description +:------------|:------------------ +`lifecycle/stale` | After 90 days with no activity, an issue is automatically labeled as stale. The issue will be automatically closed if the lifecycle is not manually reverted using the `/remove-lifecycle stale` command. +`lifecycle/frozen` | An issue with this label will not become stale after 90 days of inactivity. A user manually adds this label to issues that need to remain open for much longer than 90 days, such as those with a `priority/important-longterm` label. +{{< /table >}} + +## Handling special issue types + +SIG Docs encounters the following types of issues often enough to document how +to handle them. + +### Duplicate issues + +If a single problem has one or more issues open for it, combine them into a single issue. +You should decide which issue to keep open (or +open a new issue), then move over all relevant information and link related issues. +Finally, label all other issues that describe the same problem with +`triage/duplicate` and close them. Only having a single issue to work on reduces confusion +and avoids duplicate work on the same problem. + +### Dead link issues + +If the dead link issue is in the API or `kubectl` documentation, assign them `/priority critical-urgent` until the problem is fully understood. Assign all other dead link issues `/priority important-longterm`, as they must be manually fixed. + +### Blog issues + +We expect [Kubernetes Blog](https://kubernetes.io/blog/) entries to become +outdated over time. Therefore, we only maintain blog entries less than a year old. +If an issue is related to a blog entry that is more than one year old, +close the issue without fixing. + +### Support requests or code bug reports + +Some docs issues are actually issues with the underlying code, or requests for +assistance when something, for example a tutorial, doesn't work. +For issues unrelated to docs, close the issue with the `triage/support` label and a comment +directing the requester to support venues (Slack, Stack Overflow) and, if +relevant, the repository to file an issue for bugs with features (`kubernetes/kubernetes` +is a great place to start). + +Sample response to a request for support: + +```none +This issue sounds more like a request for support and less +like an issue specifically for docs. I encourage you to bring +your question to the `#kubernetes-users` channel in +[Kubernetes slack](http://slack.k8s.io/). You can also search +resources like +[Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) +for answers to similar questions. + +You can also open issues for Kubernetes functionality in +https://github.com/kubernetes/kubernetes. + +If this is a documentation issue, please re-open this issue. +``` + +Sample code bug report response: + +```none +This sounds more like an issue with the code than an issue with +the documentation. Please open an issue at +https://github.com/kubernetes/kubernetes/issues. + +If this is a documentation issue, please re-open this issue. 
+``` + + +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/review/reviewing-prs.md b/content/en/docs/contribute/review/reviewing-prs.md new file mode 100644 index 0000000000000..cb432a97ba510 --- /dev/null +++ b/content/en/docs/contribute/review/reviewing-prs.md @@ -0,0 +1,98 @@ +--- +title: Reviewing pull requests +content_template: templates/concept +main_menu: true +weight: 10 +--- + +{{% capture overview %}} + +Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls) section in the Kubernetes website repository to see open pull requests. + +Reviewing documentation pull requests is a +great way to introduce yourself to the Kubernetes community. +It helps you learn the code base and build trust with other contributors. + +Before reviewing, it's a good idea to: + +- Read the [content guide](/docs/contribute/style/content-guide/) and +[style guide](/docs/contribute/style/style-guide/) so you can leave informed comments. +- Understand the different [roles and responsibilities](/docs/contribute/participating/#roles-and-responsibilities) in the Kubernetes documentation community. + +{{% /capture %}} + +{{% capture body %}} + +## Before you begin + +Before you start a review: + +- Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) and ensure that you abide by it at all times. +- Be polite, considerate, and helpful. +- Comment on positive aspects of PRs as well as changes. +- Be empathetic and mindful of how your review may be received. +- Assume good intent and ask clarifying questions. +- Experienced contributors, consider pairing with new contributors whose work requires extensive changes. + +## Review process + +In general, review pull requests for content and style in English. + +1. Go to + [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). + You see a list of every open pull request against the Kubernetes website and + docs. + +2. Filter the open PRs using one or all of the following labels: + - `cncf-cla: yes` (Recommended): PRs submitted by contributors who have not signed the CLA cannot be merged. See [Sign the CLA](/docs/contribute/new-content/overview/#sign-the-cla) for more information. + - `language/en` (Recommended): Filters for english language PRs only. + - `size/`: filters for PRs of a certain size. If you're new, start with smaller PRs. + + Additionally, ensure the PR isn't marked as a work in progress. PRs using the `work in progress` label are not ready for review yet. + +3. Once you've selected a PR to review, understand the change by: + - Reading the PR description to understand the changes made, and read any linked issues + - Reading any comments by other reviewers + - Clicking the **Files changed** tab to see the files and lines changed + - Previewing the changes in the Netlify preview build by scrolling to the PR's build check section at the bottom of the **Conversation** tab and clicking the **deploy/netlify** line's **Details** link. + +4. Go to the **Files changed** tab to start your review. + 1. Click on the `+` symbol beside the line you want to comment on. + 2. Fill in any comments you have about the line and click either **Add single comment** (if you have only one comment to make) or **Start a review** (if you have multiple comments to make). + 3. When finished, click **Review changes** at the top of the page. 
Here, you can add + a summary of your review (and leave some positive comments for the contributor!), + approve the PR, comment or request changes as needed. New contributors should always + choose **Comment**. + +## Reviewing checklist + +When reviewing, use the following as a starting point. + +### Language and grammar + +- Are there any obvious errors in language or grammar? Is there a better way to phrase something? +- Are there any complicated or archaic words which could be replaced with a simpler word? +- Are there any words, terms or phrases in use which could be replaced with a non-discriminatory alternative? +- Do the word choice and its capitalization follow the [style guide](/docs/contribute/style/style-guide/)? +- Are there long sentences which could be shorter or less complex? +- Are there any long paragraphs which might work better as a list or table? + +### Content + +- Does similar content exist elsewhere on the Kubernetes site? +- Does the content excessively link to off-site, individual vendor or non-open source documentation? + +### Website + +- Did this PR change or remove a page title, slug/alias or anchor link? If so, are there broken links as a result of this PR? Is there another option, like changing the page title without changing the slug? +- Does the PR introduce a new page? If so: + - Is the page using the right [page template](/docs/contribute/style/page-templates/) and associated Hugo shortcodes? + - Does the page appear correctly in the section's side navigation (or at all)? + - Should the page appear on the [Docs Home](/docs/home/) listing? +- Do the changes show up in the Netlify preview? Be particularly vigilant about lists, code blocks, tables, notes and images. + +### Other + +For small issues with a PR, like typos or whitespace, prefix your comments with `nit:`. This lets the author know the issue is non-critical. + +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/contribute/start.md b/content/en/docs/contribute/start.md deleted file mode 100644 index 181e35968229e..0000000000000 --- a/content/en/docs/contribute/start.md +++ /dev/null @@ -1,421 +0,0 @@ ---- -title: Start contributing -slug: start -content_template: templates/concept -weight: 10 -card: - name: contribute - weight: 10 ---- - -{{% capture overview %}} - -If you want to get started contributing to the Kubernetes documentation, this -page and its linked topics can help you get started. You don't need to be a -developer or a technical writer to make a big impact on the Kubernetes -documentation and user experience! All you need for the topics on this page is -a [GitHub account](https://github.com/join) and a web browser. - -If you're looking for information on how to start contributing to Kubernetes -code repositories, refer to -[the Kubernetes community guidelines](https://github.com/kubernetes/community/blob/master/governance.md). - -{{% /capture %}} - - -{{% capture body %}} - -## The basics about our docs - -The Kubernetes documentation is written in Markdown and processed and deployed using Hugo. The source is in GitHub at [https://github.com/kubernetes/website](https://github.com/kubernetes/website). Most of the documentation source is stored in `/content/en/docs/`. Some of the reference documentation is automatically generated from scripts in the `update-imported-docs/` directory. - -You can file issues, edit content, and review changes from others, all from the -GitHub website. You can also use GitHub's embedded history and search tools.
- -Not all tasks can be done in the GitHub UI, but these are discussed in the -[intermediate](/docs/contribute/intermediate/) and -[advanced](/docs/contribute/advanced/) docs contribution guides. - -### Participating in SIG Docs - -The Kubernetes documentation is maintained by a -{{< glossary_tooltip text="Special Interest Group" term_id="sig" >}} (SIG) -called SIG Docs. We [communicate](#participate-in-sig-docs-discussions) using a Slack channel, a mailing list, and -weekly video meetings. New participants are welcome. For more information, see -[Participating in SIG Docs](/docs/contribute/participating/). - -### Content guidelines - -The SIG Docs community created guidelines about what kind of content is allowed -in the Kubernetes documentation. Look over the [Documentation Content -Guide](/docs/contribute/style/content-guide/) to determine if the content -contribution you want to make is allowed. You can ask questions about allowed -content in the [#sig-docs](#participate-in-sig-docs-discussions) Slack -channel. - -### Style guidelines - -We maintain a [style guide](/docs/contribute/style/style-guide/) with information -about choices the SIG Docs community has made about grammar, syntax, source -formatting, and typographic conventions. Look over the style guide before you -make your first contribution, and use it when you have questions. - -Changes to the style guide are made by SIG Docs as a group. To propose a change -or addition, [add it to the agenda](https://docs.google.com/document/d/1ddHwLK3kUMX1wVFIwlksjTk0MsqitBnWPe1LRa1Rx5A/edit) for an upcoming SIG Docs meeting, and attend the meeting to participate in the -discussion. See the [advanced contribution](/docs/contribute/advanced/) topic for more -information. - -### Page templates - -We use page templates to control the presentation of our documentation pages. -Be sure to understand how these templates work by reviewing -[Using page templates](/docs/contribute/style/page-templates/). - -### Hugo shortcodes - -The Kubernetes documentation is transformed from Markdown to HTML using Hugo. -We make use of the standard Hugo shortcodes, as well as a few that are custom to -the Kubernetes documentation. See [Custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/) for -information about how to use them. - -### Multiple languages - -Documentation source is available in multiple languages in `/content/`. Each language has its own folder with a two-letter code determined by the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php). For example, English documentation source is stored in `/content/en/docs/`. - -For more information about contributing to documentation in multiple languages, see ["Localize content"](/docs/contribute/intermediate#localize-content) in the intermediate contributing guide. - -If you're interested in starting a new localization, see ["Localization"](/docs/contribute/localization/). - -## File actionable issues - -Anyone with a GitHub account can file an issue (bug report) against the -Kubernetes documentation. If you see something wrong, even if you have no idea -how to fix it, [file an issue](#how-to-file-an-issue). The exception to this -rule is a tiny bug like a typo that you intend to fix yourself. In that case, -you can instead [fix it](#improve-existing-content) without filing a bug first. 
- -### How to file an issue - -- **On an existing page** - - If you see a problem in an existing page in the [Kubernetes docs](/docs/), - go to the bottom of the page and click the **Create an Issue** button. If - you are not currently logged in to GitHub, log in. A GitHub issue form - appears with some pre-populated content. - - Using Markdown, fill in as many details as you can. In places where you see - empty square brackets (`[ ]`), put an `x` between the set of brackets that - represents the appropriate choice. If you have a proposed solution to fix - the issue, add it. - -- **Request a new page** - - If you think content should exist, but you aren't sure where it should go or - you don't think it fits within the pages that currently exist, you can - still file an issue. You can either choose an existing page near where you think the - new content should go and file the issue from that page, or go straight to - [https://github.com/kubernetes/website/issues/new/](https://github.com/kubernetes/website/issues/new/) - and file the issue from there. - -### How to file great issues - -To ensure that we understand your issue and can act on it, keep these guidelines -in mind: - -- Use the issue template, and fill out as many details as you can. -- Clearly explain the specific impact the issue has on users. -- Limit the scope of a given issue to a reasonable unit of work. For problems - with a large scope, break them down into smaller issues. - - For instance, "Fix the security docs" is not an actionable issue, but "Add - details to the 'Restricting network access' topic" might be. -- If the issue relates to another issue or pull request, you can refer to it - either by its full URL or by the issue or pull request number prefixed - with a `#` character. For instance, `Introduced by #987654`. -- Be respectful and avoid venting. For instance, "The docs about X suck" is not - helpful or actionable feedback. The - [Code of Conduct](/community/code-of-conduct/) also applies to interactions on - Kubernetes GitHub repositories. - -## Participate in SIG Docs discussions - -The SIG Docs team communicates using the following mechanisms: - -- [Join the Kubernetes Slack instance](http://slack.k8s.io/), then join the - `#sig-docs` channel, where we discuss docs issues in real-time. Be sure to - introduce yourself! -- [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs), - where broader discussions take place and official decisions are recorded. -- Participate in the [weekly SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) video meeting, which is announced on the Slack channel and the mailing list. Currently, these meetings take place on Zoom, so you'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone. - -{{< note >}} -You can also check the SIG Docs weekly meeting on the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). -{{< /note >}} - -## Improve existing content - -To improve existing content, you file a _pull request (PR)_ after creating a -_fork_. Those two terms are [specific to GitHub](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/). -For the purposes of this topic, you don't need to know everything about them, -because you can do everything using your web browser. 
When you continue to the -[intermediate docs contributor guide](/docs/contribute/intermediate/), you will -need more background in Git terminology. - -{{< note >}} -**Kubernetes code developers**: If you are documenting a new feature for an -upcoming Kubernetes release, your process is a bit different. See -[Document a feature](/docs/contribute/intermediate/#sig-members-documenting-new-features) for -process guidelines and information about deadlines. -{{< /note >}} - -### Sign the CNCF CLA {#sign-the-cla} - -Before you can contribute code or documentation to Kubernetes, you **must** read -the [Contributor guide](https://github.com/kubernetes/community/blob/master/contributors/guide/README.md) and -[sign the Contributor License Agreement (CLA)](https://github.com/kubernetes/community/blob/master/CLA.md). -Don't worry -- this doesn't take long! - -### Find something to work on - -If you see something you want to fix right away, just follow the instructions -below. You don't need to [file an issue](#file-actionable-issues) (although you -certainly can). - -If you want to start by finding an existing issue to work on, go to -[https://github.com/kubernetes/website/issues](https://github.com/kubernetes/website/issues) -and look for issues with the label `good first issue` (you can use -[this](https://github.com/kubernetes/website/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) shortcut). Read through the comments and make sure there is not an open pull -request against the issue and that nobody has left a comment saying they are -working on the issue recently (3 days is a good rule). Leave a comment saying -that you would like to work on the issue. - -### Choose which Git branch to use - -The most important aspect of submitting pull requests is choosing which branch -to base your work on. Use these guidelines to make the decision: - -- Use `master` for fixing problems in content that is already published, or - making improvements to content that already exists. -- Use `master` to document something that is already part of the current - Kubernetes release, but isn't yet documented. You should write this content - in English first, and then localization teams will pick that change up as a - localization task. -- If you're working on a localization, you should follow the convention for - that particular localization. To find this out, you can look at other - pull requests (tip: search for `is:pr is:merged label:language/xx`) - {{< comment >}}Localization note: when localizing that tip, replace `xx` - with the actual ISO3166 two-letter code for your target locale.{{< /comment >}} - - Some localization teams work with PRs that target `master` - - Some localization teams work with a series of long-lived branches, and - periodically merge these to `master`. This kind of branch has a name like - dev-\-\.\; for example: - `dev-{{< latest-semver >}}-ja.1` -- If you're writing or updating documentation for a feature change release, - then you need to know the major and minor version of Kubernetes that - the change will first appear in. - - For example, if the feature gate JustAnExample is going to move from alpha - to beta in the next minor version, you need to know what the next minor - version number is. - - Find the release branch named for that version. For example, features that - changed in the {{< latest-version >}} release got documented in the branch - named `dev-{{< latest-semver >}}`. 
- -If you're still not sure which branch to choose, ask in `#sig-docs` on Slack or -attend a weekly SIG Docs meeting to get clarity. - -{{< note >}} -If you already submitted your pull request and you know that the Base Branch -was wrong, you (and only you, the submitter) can change it. -{{< /note >}} - -### Submit a pull request - -Follow these steps to submit a pull request to improve the Kubernetes -documentation. - -1. On the page where you see the issue, click the pencil icon at the top right. - A new GitHub page appears, with some help text. -2. If you have never created a fork of the Kubernetes documentation - repository, you are prompted to do so. Create the fork under your GitHub - username, rather than another organization you may be a member of. The - fork usually has a URL such as `https://github.com//website`, - unless you already have a repository with a conflicting name. - - The reason you are prompted to create a fork is that you do not have - access to push a branch directly to the definitive Kubernetes repository. - -3. The GitHub Markdown editor appears with the source Markdown file loaded. - Make your changes. Below the editor, fill in the **Propose file change** - form. The first field is the summary of your commit message and should be - no more than 50 characters long. The second field is optional, but can - include more detail if appropriate. - - {{< note >}} -Do not include references to other GitHub issues or pull -requests in your commit message. You can add those to the pull request -description later. -{{< /note >}} - - Click **Propose file change**. The change is saved as a commit in a - new branch in your fork, which is automatically named something like - `patch-1`. - -4. The next screen summarizes the changes you made, by comparing your new - branch (the **head fork** and **compare** selection boxes) to the current - state of the **base fork** and **base** branch (`master` on the - `kubernetes/website` repository by default). You can change any of the - selection boxes, but don't do that now. Have a look at the difference - viewer on the bottom of the screen, and if everything looks right, click - **Create pull request**. - - {{< note >}} -If you don't want to create the pull request now, you can do it -later, by browsing to the main URL of the Kubernetes website repository or -your fork's repository. The GitHub website will prompt you to create the -pull request if it detects that you pushed a new branch to your fork. -{{< /note >}} - -5. The **Open a pull request** screen appears. The subject of the pull request - is the same as the commit summary, but you can change it if needed. The - body is populated by your extended commit message (if present) and some - template text. Read the template text and fill out the details it asks for, - then delete the extra template text. If you add to the description `fixes #<000000>` - or `closes #<000000>`, where `#<000000>` is the number of an associated issue, - GitHub will automatically close the issue when the PR merges. - Leave the **Allow edits from maintainers** checkbox selected. Click - **Create pull request**. - - Congratulations! Your pull request is available in - [Pull requests](https://github.com/kubernetes/website/pulls). - - After a few minutes, you can preview the website with your PR's changes - applied. Go to the **Conversation** tab of your PR and click the **Details** - link for the `deploy/netlify` test, near the bottom of the page. It opens in - the same browser window by default. 
- - {{< note >}} - Please limit pull requests to one language per PR. For example, if you need to make an identical change to the same code sample in multiple languages, open a separate PR for each language. - {{< /note >}} - -6. Wait for review. Generally, reviewers are suggested by the `k8s-ci-robot`. - If a reviewer asks you to make changes, you can go to the **Files changed** - tab and click the pencil icon on any files that have been changed by the - pull request. When you save the changed file, a new commit is created in - the branch being monitored by the pull request. If you are waiting on a - reviewer to review the changes, proactively reach out to the reviewer - once every 7 days. You can also drop into #sig-docs Slack channel, - which is a good place to ask for help regarding PR reviews. - -7. If your change is accepted, a reviewer merges your pull request, and the - change is live on the Kubernetes website a few minutes later. - -This is only one way to submit a pull request. If you are already a Git and -GitHub advanced user, you can use a local GUI or command-line Git client -instead of using the GitHub UI. Some basics about using the command-line Git -client are discussed in the [intermediate](/docs/contribute/intermediate/) docs -contribution guide. - -## Review docs pull requests - -People who are new to documentation can still review pull requests. You can -learn the code base and build trust with your fellow contributors. English docs -are the authoritative source for content. We communicate in English during -weekly meetings and in community announcements. Contributors' English skills -vary, so use simple and direct language in your reviews. Effective reviews focus -on both small details and a change's potential impact. - -The reviews are not considered "binding", which means that your review alone -won't cause a pull request to be merged. However, it can still be helpful. Even -if you don't leave any review comments, you can get a sense of pull request -conventions and etiquette and get used to the workflow. Familiarize yourself with the -[content guide](/docs/contribute/style/content-guide/) and -[style guide](/docs/contribute/style/style-guide/) before reviewing so you -get an idea of what the content should contain and how it should look. - -### Best practices - -- Be polite, considerate, and helpful -- Comment on positive aspects of PRs as well -- Be empathetic and mindful of how your review may be received -- Assume good intent and ask clarifying questions -- Experienced contributors, consider pairing with new contributors whose work requires extensive changes - -### How to find and review a pull request - -1. Go to - [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). - You see a list of every open pull request against the Kubernetes website and - docs. - -2. By default, the only filter that is applied is `open`, so you don't see - pull requests that have already been closed or merged. It's a good idea to - apply the `cncf-cla: yes` filter, and for your first review, it's a good - idea to add `size/S` or `size/XS`. The `size` label is applied automatically - based on how many lines of code the PR modifies. You can apply filters using - the selection boxes at the top of the page, or use - [this shortcut](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+yes%22+label%3Asize%2FS) for only small PRs. 
All filters are `AND`ed together, so - you can't search for both `size/XS` and `size/S` in the same query. - -3. Go to the **Files changed** tab. Look through the changes introduced in the - PR, and if applicable, also look at any linked issues. If you see a problem - or room for improvement, hover over the line and click the `+` symbol that - appears. - - You can type a comment, and either choose **Add single comment** or **Start - a review**. Typically, starting a review is better because it allows you to - leave multiple comments and notifies the PR owner only when you have - completed the review, rather than a separate notification for each comment. - -4. When finished, click **Review changes** at the top of the page. You can - summarize your review, and you can choose to comment, approve, or request - changes. New contributors should always choose **Comment**. - -Thanks for reviewing a pull request! When you are new to the project, it's a -good idea to ask for feedback on your pull request reviews. The `#sig-docs` -Slack channel is a great place to do this. - -## Write a blog post - -Anyone can write a blog post and submit it for review. Blog posts should not be -commercial in nature and should consist of content that will apply broadly to -the Kubernetes community. - -To submit a blog post, you can either submit it using the -[Kubernetes blog submission form](https://docs.google.com/forms/d/e/1FAIpQLSdMpMoSIrhte5omZbTE7nB84qcGBy8XnnXhDFoW0h7p2zwXrw/viewform), -or follow the steps below. - -1. [Sign the CLA](#sign-the-cla) if you have not yet done so. -2. Have a look at the Markdown format for existing blog posts in the - [website repository](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts). -3. Write out your blog post in a text editor of your choice. -4. On the same link from step 2, click the **Create new file** button. Paste - your content into the editor. Name the file to match the proposed title of - the blog post, but don't put the date in the file name. The blog reviewers - will work with you on the final file name and the date the blog will be - published. -5. When you save the file, GitHub will walk you through the pull request - process. -6. A blog post reviewer will review your submission and work with you on - feedback and final details. When the blog post is approved, the blog will be - scheduled for publication. - -## Submit a case study - -Case studies highlight how organizations are using Kubernetes to solve -real-world problems. They are written in collaboration with the Kubernetes -marketing team, which is handled by the {{< glossary_tooltip text="CNCF" term_id="cncf" >}}. - -Have a look at the source for the -[existing case studies](https://github.com/kubernetes/website/tree/master/content/en/case-studies). -Use the [Kubernetes case study submission form](https://www.cncf.io/people/end-user-community/) -to submit your proposal. - -{{% /capture %}} - -{{% capture whatsnext %}} - -When you are comfortable with all of the tasks discussed in this topic and you -want to engage with the Kubernetes docs team in deeper ways, read the -[intermediate docs contribution guide](/docs/contribute/intermediate/). 
- -{{% /capture %}} diff --git a/content/en/docs/contribute/style/content-guide.md b/content/en/docs/contribute/style/content-guide.md index b96f28eac7100..bc4b5be64a711 100644 --- a/content/en/docs/contribute/style/content-guide.md +++ b/content/en/docs/contribute/style/content-guide.md @@ -3,10 +3,6 @@ title: Documentation Content Guide linktitle: Content guide content_template: templates/concept weight: 10 -card: - name: contribute - weight: 20 - title: Documentation Content Guide --- {{% capture overview %}} @@ -39,7 +35,7 @@ project](https://github.com/kubernetes/kubernetes). Kubernetes docs permit only some kinds of content. ### Third party content -Kubernetes documentation includes applied examples of projects in the Kubernetes project&emdash;projects that live in the [kubernetes](https://github.com/kubernetes) and +Kubernetes documentation includes applied examples of projects in the Kubernetes project—projects that live in the [kubernetes](https://github.com/kubernetes) and [kubernetes-sigs](https://github.com/kubernetes-sigs) GitHub organizations. Links to active content in the Kubernetes project are always allowed. diff --git a/content/en/docs/contribute/style/hugo-shortcodes/index.md b/content/en/docs/contribute/style/hugo-shortcodes/index.md index 8cd3ed1e0feeb..60479c7fec3eb 100644 --- a/content/en/docs/contribute/style/hugo-shortcodes/index.md +++ b/content/en/docs/contribute/style/hugo-shortcodes/index.md @@ -243,4 +243,4 @@ Renders to: * Learn about [using page templates](/docs/home/contribute/page-templates/). * Learn about [staging your changes](/docs/home/contribute/stage-documentation-changes/) * Learn about [creating a pull request](/docs/home/contribute/create-pull-request/). -{{% /capture %}} \ No newline at end of file +{{% /capture %}} diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 26722e607f18e..34dce6adaca38 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -3,10 +3,6 @@ title: Documentation Style Guide linktitle: Style guide content_template: templates/concept weight: 10 -card: - name: contribute - weight: 20 - title: Documentation Style Guide --- {{% capture overview %}} @@ -15,10 +11,12 @@ These are guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request. For additional information on creating new content for the Kubernetes -documentation, read the [Documentation Content -Guide](/docs/contribute/style/content-guide/) and follow the instructions on -[using page templates](/docs/contribute/style/page-templates/) and [creating a -documentation pull request](/docs/contribute/start/#improve-existing-content). +documentation, read the [Documentation Content Guide](/docs/contribute/style/content-guide/) and follow the instructions on +[using page templates](/docs/contribute/style/page-templates/) and [creating a documentation pull request](/docs/contribute/new-content/open-a-pr). + +Changes to the style guide are made by SIG Docs as a group. To propose a change +or addition, [add it to the agenda](https://docs.google.com/document/d/1ddHwLK3kUMX1wVFIwlksjTk0MsqitBnWPe1LRa1Rx5A/edit) for an upcoming SIG Docs meeting, and attend the meeting to participate in the +discussion. 
{{% /capture %}} diff --git a/content/en/docs/contribute/style/write-new-topic.md b/content/en/docs/contribute/style/write-new-topic.md index 50e49734a3b57..65dca22f1aef1 100644 --- a/content/en/docs/contribute/style/write-new-topic.md +++ b/content/en/docs/contribute/style/write-new-topic.md @@ -10,7 +10,7 @@ This page shows how to create a new topic for the Kubernetes docs. {{% capture prerequisites %}} Create a fork of the Kubernetes documentation repository as described in -[Start contributing](/docs/contribute/start/). +[Open a PR](/docs/contribute/new-content/open-a-pr/). {{% /capture %}} {{% capture steps %}} @@ -24,8 +24,8 @@ Type | Description :--- | :---------- Concept | A concept page explains some aspect of Kubernetes. For example, a concept page might describe the Kubernetes Deployment object and explain the role it plays as an application while it is deployed, scaled, and updated. Typically, concept pages don't include sequences of steps, but instead provide links to tasks or tutorials. For an example of a concept topic, see Nodes. Task | A task page shows how to do a single thing. The idea is to give readers a sequence of steps that they can actually do as they read the page. A task page can be short or long, provided it stays focused on one area. In a task page, it is OK to blend brief explanations with the steps to be performed, but if you need to provide a lengthy explanation, you should do that in a concept topic. Related task and concept topics should link to each other. For an example of a short task page, see Configure a Pod to Use a Volume for Storage. For an example of a longer task page, see Configure Liveness and Readiness Probes -Tutorial | A tutorial page shows how to accomplish a goal that ties together several Kubernetes features. A tutorial might provide several sequences of steps that readers can actually do as they read the page. Or it might provide explanations of related pieces of code. For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied together, but should link to related concept topics for deep explanations of individual features. -{{< /table >}} +Tutorial | A tutorial page shows how to accomplish a goal that ties together several Kubernetes features. A tutorial might provide several sequences of steps that readers can actually do as they read the page. Or it might provide explanations of related pieces of code. For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied together, but should link to related concept topics for deep explanations of individual features. +{{< /table >}} Use a template for each new page. Each page type has a [template](/docs/contribute/style/page-templates/) @@ -162,7 +162,6 @@ image format is SVG. {{% /capture %}} {{% capture whatsnext %}} -* Learn about [using page templates](/docs/home/contribute/page-templates/). -* Learn about [staging your changes](/docs/home/contribute/stage-documentation-changes/). -* Learn about [creating a pull request](/docs/home/contribute/create-pull-request/). +* Learn about [using page templates](/docs/contribute/style/page-templates/). +* Learn about [creating a pull request](/docs/contribute/new-content/open-a-pr/).
{{% /capture %}} diff --git a/content/en/docs/contribute/suggesting-improvements.md b/content/en/docs/contribute/suggesting-improvements.md new file mode 100644 index 0000000000000..19133f379bd08 --- /dev/null +++ b/content/en/docs/contribute/suggesting-improvements.md @@ -0,0 +1,65 @@ +--- +title: Suggesting content improvements +slug: suggest-improvements +content_template: templates/concept +weight: 10 +card: + name: contribute + weight: 20 +--- + +{{% capture overview %}} + +If you notice an issue with Kubernetes documentation, or have an idea for new content, then open an issue. All you need is a [GitHub account](https://github.com/join) and a web browser. + +In most cases, new work on Kubernetes documentation begins with an issue in GitHub. Kubernetes contributors +then review, categorize and tag issues as needed. Next, you or another member +of the Kubernetes community open a pull request with changes to resolve the issue. + +{{% /capture %}} + +{{% capture body %}} + +## Opening an issue + +If you want to suggest improvements to existing content, or notice an error, then open an issue. + +1. Go to the bottom of the page and click the **Create an Issue** button. This redirects you + to a GitHub issue page pre-populated with some headers. +2. Describe the issue or suggestion for improvement. Provide as many details as you can. +3. Click **Submit new issue**. + +After submitting, check in on your issue occasionally or turn on GitHub notifications. +Reviewers and other community members might ask questions before +they can take action on your issue. + +## Suggesting new content + +If you have an idea for new content, but you aren't sure where it should go, you can +still file an issue. Either: + +- Choose an existing page in the section you think the content belongs in and click **Create an issue**. +- Go to [GitHub](https://github.com/kubernetes/website/issues/new/) and file the issue directly. + +## How to file great issues + + +Keep the following in mind when filing an issue: + +- Provide a clear issue description. Describe what specifically is missing, out of date, + wrong, or needs improvement. +- Explain the specific impact the issue has on users. +- Limit the scope of a given issue to a reasonable unit of work. For problems + with a large scope, break them down into smaller issues. For example, "Fix the security docs" + is too broad, but "Add details to the 'Restricting network access' topic" is specific enough + to be actionable. +- Search the existing issues to see if there's anything related or similar to the + new issue. +- If the new issue relates to another issue or pull request, refer to it + either by its full URL or by the issue or pull request number prefixed + with a `#` character. For example, `Introduced by #987654`. +- Follow the [Code of Conduct](/community/code-of-conduct/). Respect your +fellow contributors. For example, "The docs are terrible" is not + helpful or polite feedback. 
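For example, an issue that follows these guidelines might look like the sketch below. The topic name and details are illustrative placeholders, not a real report:

```none
Title: State the minimum Kubernetes version in the "Restricting network access" topic

The topic explains how to restrict network access, but it doesn't say which
Kubernetes version the steps require, so readers on older clusters can't tell
whether the instructions apply to them.

Suggested fix: add the minimum Kubernetes version near the top of the page and
link to the related concept page.
```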
+ +{{% /capture %}} diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index dfd9e69428f89..a19d492d54c08 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -89,10 +89,10 @@ To see which admission plugins are enabled: kube-apiserver -h | grep enable-admission-plugins ``` -In 1.16, they are: +In 1.18, they are: ```shell -NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, RuntimeClass, ResourceQuota +NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota ``` ## What does each admission controller do? @@ -544,7 +544,7 @@ region and/or zone. If the admission controller doesn't support automatic labelling your PersistentVolumes, you may need to add the labels manually to prevent pods from mounting volumes from a different zone. PersistentVolumeLabel is DEPRECATED and labeling persistent volumes has been taken over by -[cloud controller manager](/docs/tasks/administer-cluster/running-cloud-controller/). +the {{< glossary_tooltip text="cloud-controller-manager" term_id="cloud-controller-manager" >}}. Starting from 1.11, this admission controller is disabled by default. ### PodNodeSelector {#podnodeselector} diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index 4ba7ddf1263e5..8ab609a0cc6f1 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -299,7 +299,7 @@ To enable the plugin, configure the following flags on the API server: | `--oidc-issuer-url` | URL of the provider which allows the API server to discover public signing keys. Only URLs which use the `https://` scheme are accepted. This is typically the provider's discovery URL without a path, for example "https://accounts.google.com" or "https://login.salesforce.com". This URL should point to the level below .well-known/openid-configuration | If the discovery URL is `https://accounts.google.com/.well-known/openid-configuration`, the value should be `https://accounts.google.com` | Yes | | `--oidc-client-id` | A client id that all tokens must be issued for. | kubernetes | Yes | | `--oidc-username-claim` | JWT claim to use as the user name. By default `sub`, which is expected to be a unique identifier of the end user. Admins can choose other claims, such as `email` or `name`, depending on their provider. However, claims other than `email` will be prefixed with the issuer URL to prevent naming clashes with other plugins. | sub | No | -| `--oidc-username-prefix` | Prefix prepended to username claims to prevent clashes with existing names (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. 
If this flag isn't provided and `--oidc-user-claim` is a value other than `email` the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `--oidc-issuer-url`. The value `-` can be used to disable all prefixing. | `oidc:` | No | +| `--oidc-username-prefix` | Prefix prepended to username claims to prevent clashes with existing names (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. If this flag isn't provided and `--oidc-username-claim` is a value other than `email` the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `--oidc-issuer-url`. The value `-` can be used to disable all prefixing. | `oidc:` | No | | `--oidc-groups-claim` | JWT claim to use as the user's group. If the claim is present it must be an array of strings. | groups | No | | `--oidc-groups-prefix` | Prefix prepended to group claims to prevent clashes with existing names (such as `system:` groups). For example, the value `oidc:` will create group names like `oidc:engineering` and `oidc:infra`. | `oidc:` | No | | `--oidc-required-claim` | A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims. | `claim=value` | No | diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index bdb1bfbb9c27e..3e81215dd818b 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -292,10 +292,10 @@ status: type: Denied ``` -It's usual to set `status.condtions.reason` to a machine-friendly reason +It's usual to set `status.conditions.reason` to a machine-friendly reason code using TitleCase; this is a convention but you can set it to anything you like. If you want to add a note just for human consumption, use the -`status.condtions.message` field. +`status.conditions.message` field. ## Signing diff --git a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md index c57bcdaf341fe..c99ff2b35c299 100644 --- a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md @@ -22,10 +22,10 @@ This page describes how to build, configure, use, and monitor admission webhooks Admission webhooks are HTTP callbacks that receive admission requests and do something with them. You can define two types of admission webhooks, -[validating admission Webhook](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook) +[validating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook) and [mutating admission webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook). -Mutating admission Webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults. +Mutating admission webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults. 
After all object modifications are complete, and after the incoming object is validated by the API server, validating admission webhooks are invoked and can reject requests to enforce custom policies. diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 1ca0b98b7c393..eab71e4560bfa 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -1182,7 +1182,7 @@ allowed by *either* the RBAC or ABAC policies is allowed. When the kube-apiserver is run with a log level of 5 or higher for the RBAC component (`--vmodule=rbac*=5` or `--v=5`), you can see RBAC denials in the API server log -(prefixed with `RBAC DENY:`). +(prefixed with `RBAC`). You can use that information to determine which roles need to be granted to which users, groups, or service accounts. Once you have [granted roles to service accounts](#service-account-permissions) and workloads diff --git a/content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md b/content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md index 17d1bee09cfb0..ee8d7f1ed14b4 100644 --- a/content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md +++ b/content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md @@ -1,7 +1,7 @@ --- title: cloud-controller-manager content_template: templates/tool-reference -weight: 28 +weight: 30 --- {{% capture synopsis %}} @@ -18,518 +18,518 @@ cloud-controller-manager [flags] {{% capture options %}}
--add-dir-header
If true, adds the file directory to the header
--allocate-node-cidrs
Should CIDRs for Pods be allocated and set on the cloud provider.
--alsologtostderr
log to standard error as well as files
--authentication-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
--authentication-token-webhook-cache-ttl duration     Default: 10s
The duration to cache responses from the webhook token authenticator.
--authentication-tolerate-lookup-failure
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
--authorization-always-allow-paths stringSlice     Default: [/healthz]
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
--authorization-webhook-cache-authorized-ttl duration     Default: 10s
The duration to cache 'authorized' responses from the webhook authorizer.
--authorization-webhook-cache-unauthorized-ttl duration     Default: 10s
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--cidr-allocator-type string     Default: "RangeAllocator"
Type of CIDR allocator to use
--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
The provider for cloud services. Empty string for no provider.
--cloud-provider-gce-l7lb-src-cidrs cidrs     Default: 130.211.0.0/22,35.191.0.0/16
CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks
--cloud-provider-gce-lb-src-cidrs cidrs     Default: 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
CIDRs opened in GCE firewall for L4 LB traffic proxy & health checks
--cluster-cidr string
CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
--cluster-name string     Default: "kubernetes"
The instance prefix for the cluster.
--concurrent-service-syncs int32     Default: 1
The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load
--configure-cloud-routes     Default: true
Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.
--contention-profiling
Enable lock contention profiling, if profiling is enabled
--controller-start-interval duration
Interval between starting controller managers.
--controllers stringSlice     Default: [*]
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
All controllers: cloud-node, cloud-node-lifecycle, route, service
Disabled-by-default controllers:
--external-cloud-volume-plugin string
The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers.
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
-h, --help
help for cloud-controller-manager
--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kube-api-burst int32     Default: 30
Burst to use while talking with kubernetes apiserver.
--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
Content type of requests sent to apiserver.
--kube-api-qps float32     Default: 20
QPS to use while talking with kubernetes apiserver.
--kubeconfig string
Path to kubeconfig file with authorization and master location information.
--leader-elect     Default: true
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration     Default: 15s
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration     Default: 10s
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
--leader-elect-resource-lock endpoints     Default: "endpointsleases"
The type of resource object that is used for locking during leader election. Supported options are `endpoints` (default) and `configmaps`.
--leader-elect-resource-name string     Default: "cloud-controller-manager"
The name of resource object that is used for locking during leader election.
--leader-elect-resource-namespace string     Default: "kube-system"
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration     Default: 2s
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
--log-backtrace-at traceLocation     Default: :0
when logging hits line file:N, emit a stack trace
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
log to standard error instead of files
--master string
The address of the Kubernetes API server (overrides any value in kubeconfig).
--min-resync-period duration     Default: 12h0m0s
The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.
--node-monitor-period duration     Default: 5s
The period for syncing NodeStatus in NodeController.
--node-status-update-frequency duration     Default: 5m0s
Specifies how often the controller updates nodes' status.
--profiling     Default: true
Enable profiling via web interface host:port/debug/pprof/
--requestheader-allowed-names stringSlice
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix stringSlice     Default: [x-remote-extra-]
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-group-headers stringSlice     Default: [x-remote-group]
List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-username-headers stringSlice     Default: [x-remote-user]
List of request headers to inspect for usernames. X-Remote-User is common.
--route-reconciliation-period duration     Default: 10s
The period for reconciling routes created for Nodes by cloud provider.
--secure-port int     Default: 10258
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
logs at or above this threshold go to stderr
--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites stringSlice
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey     Default: []
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--use-service-account-credentials
If true, use individual service account credentials for each controller.
-v, --v Level
number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
comma-separated list of pattern=N settings for file-filtered logging
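To put the leader-election and credential flags above in context, here is a minimal, hedged sketch of a cloud-controller-manager invocation. Every flag name comes from this reference; the provider name and kubeconfig path are placeholders, and the timing values simply restate the documented defaults.

```shell
# Sketch only: `my-provider` and the kubeconfig path are placeholders.
# The leader-election timing flags restate their documented defaults so the
# knobs are visible; omitting them gives the same behavior.
cloud-controller-manager \
  --cloud-provider=my-provider \
  --kubeconfig=/etc/kubernetes/cloud-controller-manager.kubeconfig \
  --leader-elect=true \
  --leader-elect-lease-duration=15s \
  --leader-elect-renew-deadline=10s \
  --leader-elect-retry-period=2s \
  --use-service-account-credentials=true
```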
--add-dir-header
If true, adds the file directory to the header
--allocate-node-cidrs
Should CIDRs for Pods be allocated and set on the cloud provider.
--alsologtostderr
log to standard error as well as files
--authentication-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
If false, the authentication-kubeconfig will be used to look up missing authentication configuration from the cluster.
--authentication-token-webhook-cache-ttl duration     Default: 10s
The duration to cache responses from the webhook token authenticator.
--authentication-tolerate-lookup-failure
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
--authorization-always-allow-paths stringSlice     Default: [/healthz]
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
--authorization-webhook-cache-authorized-ttl duration     Default: 10s
The duration to cache 'authorized' responses from the webhook authorizer.
--authorization-webhook-cache-unauthorized-ttl duration     Default: 10s
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--cidr-allocator-type string     Default: "RangeAllocator"
Type of CIDR allocator to use
--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
The provider for cloud services. Empty string for no provider.
--cloud-provider-gce-l7lb-src-cidrs cidrs     Default: 130.211.0.0/22,35.191.0.0/16
CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks
--cloud-provider-gce-lb-src-cidrs cidrs     Default: 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
CIDRs opened in GCE firewall for L4 LB traffic proxy & health checks
--cluster-cidr string
CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
--cluster-name string     Default: "kubernetes"
The instance prefix for the cluster.
--concurrent-service-syncs int32     Default: 1
The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load
--configure-cloud-routes     Default: true
Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.
--contention-profiling
Enable lock contention profiling, if profiling is enabled
--controller-start-interval duration
Interval between starting controller managers.
--controllers stringSlice     Default: [*]
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
All controllers: cloud-node, cloud-node-lifecycle, route, service
Disabled-by-default controllers:
--external-cloud-volume-plugin string
The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers.
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
-h, --help
help for cloud-controller-manager
--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kube-api-burst int32     Default: 30
Burst to use while talking with kubernetes apiserver.
--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
Content type of requests sent to apiserver.
--kube-api-qps float32     Default: 20
QPS to use while talking with kubernetes apiserver.
--kubeconfig string
Path to kubeconfig file with authorization and master location information.
--leader-elect     Default: true
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration     Default: 15s
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration     Default: 10s
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
--leader-elect-resource-lock endpoints     Default: "endpointsleases"
The type of resource object that is used for locking during leader election. Supported options are `endpoints` (default) and `configmaps`.
--leader-elect-resource-name string     Default: "cloud-controller-manager"
The name of resource object that is used for locking during leader election.
--leader-elect-resource-namespace string     Default: "kube-system"
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration     Default: 2s
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
--log-backtrace-at traceLocation     Default: :0
when logging hits line file:N, emit a stack trace
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
log to standard error instead of files
--master string
The address of the Kubernetes API server (overrides any value in kubeconfig).
--min-resync-period duration     Default: 12h0m0s
The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.
--node-monitor-period duration     Default: 5s
The period for syncing NodeStatus in NodeController.
--node-status-update-frequency duration     Default: 5m0s
Specifies how often the controller updates nodes' status.
--profiling     Default: true
Enable profiling via web interface host:port/debug/pprof/
--requestheader-allowed-names stringSlice
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix stringSlice     Default: [x-remote-extra-]
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-group-headers stringSlice     Default: [x-remote-group]
List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-username-headers stringSlice     Default: [x-remote-user]
List of request headers to inspect for usernames. X-Remote-User is common.
--route-reconciliation-period duration     Default: 10s
The period for reconciling routes created for Nodes by cloud provider.
--secure-port int     Default: 10258
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
logs at or above this threshold go to stderr
--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites stringSlice
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey     Default: []
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--use-service-account-credentials
If true, use individual service account credentials for each controller.
-v, --v Level
number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
comma-separated list of pattern=N settings for file-filtered logging
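The `--controllers` and `--feature-gates` entries above use a compact syntax; the following hedged sketch shows how they combine. The controller and gate chosen here are purely illustrative picks from the lists above.

```shell
# Sketch only: values are illustrative. '*' keeps all on-by-default
# controllers and '-route' then disables the route controller; the feature
# gate is one ALPHA gate taken from the list above.
cloud-controller-manager \
  --cloud-provider=my-provider \
  --kubeconfig=/etc/kubernetes/cloud-controller-manager.kubeconfig \
  --controllers='*,-route' \
  --configure-cloud-routes=false \
  --feature-gates=IPv6DualStack=true
```

Pairing a disabled route controller with `--configure-cloud-routes=false` is an assumption to verify for your provider, not a requirement stated by this reference.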
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
index e952a53ce392b..6e9454dc492ae 100644
--- a/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
+++ b/content/en/docs/reference/command-line-tools-reference/kube-apiserver.md
@@ -1,7 +1,7 @@
 ---
 title: kube-apiserver
 content_template: templates/tool-reference
-weight: 28
+weight: 30
 ---
 {{% capture synopsis %}}
@@ -20,1064 +20,1064 @@ kube-apiserver [flags]
 {{% capture options %}}
--add-dir-header
If true, adds the file directory to the header
--admission-control-config-file string
File with admission control configuration.
--advertise-address ip
The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
--allow-privileged
If true, allow privileged containers. [default=false]
--alsologtostderr
log to standard error as well as files
--anonymous-auth     Default: true
Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.
--api-audiences stringSlice
Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.
--apiserver-count int     Default: 1
The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.)
--audit-dynamic-configuration
Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
--audit-log-batch-buffer-size int     Default: 10000
The size of the buffer to store events before batching and writing. Only used in batch mode.
--audit-log-batch-max-size int     Default: 1
The maximum size of a batch. Only used in batch mode.
--audit-log-batch-max-wait duration
The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
--audit-log-batch-throttle-burst int
Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
--audit-log-batch-throttle-enable
Whether batching throttling is enabled. Only used in batch mode.
--audit-log-batch-throttle-qps float32
Maximum average number of batches per second. Only used in batch mode.
--audit-log-format string     Default: "json"
Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json.
--audit-log-maxage int
The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
--audit-log-maxbackup int
The maximum number of old audit log files to retain.
--audit-log-maxsize int
The maximum size in megabytes of the audit log file before it gets rotated.
--audit-log-mode string     Default: "blocking"
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.
--audit-log-path string
If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
--audit-log-truncate-enabled
Whether event and batch truncating is enabled.
--audit-log-truncate-max-batch-size int     Default: 10485760
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.
--audit-log-truncate-max-event-size int     Default: 102400
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
--audit-log-version string     Default: "audit.k8s.io/v1"
API group and version used for serializing audit events written to log.
--audit-policy-file string
Path to the file that defines the audit policy configuration.
--audit-webhook-batch-buffer-size int     Default: 10000
The size of the buffer to store events before batching and writing. Only used in batch mode.
--audit-webhook-batch-max-size int     Default: 400
The maximum size of a batch. Only used in batch mode.
--audit-webhook-batch-max-wait duration     Default: 30s
The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
--audit-webhook-batch-throttle-burst int     Default: 15
Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
--audit-webhook-batch-throttle-enable     Default: true
Whether batching throttling is enabled. Only used in batch mode.
--audit-webhook-batch-throttle-qps float32     Default: 10
Maximum average number of batches per second. Only used in batch mode.
--audit-webhook-config-file string
Path to a kubeconfig formatted file that defines the audit webhook configuration.
--audit-webhook-initial-backoff duration     Default: 10s
The amount of time to wait before retrying the first failed request.
--audit-webhook-mode string     Default: "batch"
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.
--audit-webhook-truncate-enabled
Whether event and batch truncating is enabled.
--audit-webhook-truncate-max-batch-size int     Default: 10485760
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.
--audit-webhook-truncate-max-event-size int     Default: 102400
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
--audit-webhook-version string     Default: "audit.k8s.io/v1"
API group and version used for serializing audit events written to webhook.
--authentication-token-webhook-cache-ttl duration     Default: 2m0s
The duration to cache responses from the webhook token authenticator.
--authentication-token-webhook-config-file string
File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
--authentication-token-webhook-version string     Default: "v1beta1"
The API version of the authentication.k8s.io TokenReview to send to and expect from the webhook.
--authorization-mode stringSlice     Default: [AlwaysAllow]
Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node.
--authorization-policy-file string
File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
--authorization-webhook-cache-authorized-ttl duration     Default: 5m0s
The duration to cache 'authorized' responses from the webhook authorizer.
--authorization-webhook-cache-unauthorized-ttl duration     Default: 30s
The duration to cache 'unauthorized' responses from the webhook authorizer.
--authorization-webhook-config-file string
File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
--authorization-webhook-version string     Default: "v1beta1"
The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string     Default: "/var/run/kubernetes"
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
The provider for cloud services. Empty string for no provider.
--cloud-provider-gce-l7lb-src-cidrs cidrs     Default: 130.211.0.0/22,35.191.0.0/16
CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks
--contention-profiling
Enable lock contention profiling, if profiling is enabled
--cors-allowed-origins stringSlice
List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
--default-not-ready-toleration-seconds int     Default: 300
Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-unreachable-toleration-seconds int     Default: 300
Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-watch-cache-size int     Default: 100
Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set.
--delete-collection-workers int     Default: 1
Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup.
--disable-admission-plugins stringSlice
admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
--egress-selector-config-file string
File with apiserver egress selector configuration.
--enable-admission-plugins stringSlice
admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
--enable-aggregator-routing
Turns on aggregator routing requests to endpoints IP rather than cluster IP.
--enable-bootstrap-token-auth
Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
--enable-garbage-collector     Default: true
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager.
--enable-priority-and-fairness     Default: true
If true and the APIPriorityAndFairness feature gate is enabled, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
--encryption-provider-config string
The file containing configuration for encryption providers to be used for storing secrets in etcd
--endpoint-reconciler-type string     Default: "lease"
Use an endpoint reconciler (master-count, lease, none)
--etcd-cafile string
SSL Certificate Authority file used to secure etcd communication.
--etcd-certfile string
SSL certificate file used to secure etcd communication.
--etcd-compaction-interval duration     Default: 5m0s
The interval of compaction requests. If 0, the compaction request from apiserver is disabled.
--etcd-count-metric-poll-period duration     Default: 1m0s
Frequency of polling etcd for number of resources per type. 0 disables the metric collection.
--etcd-keyfile string
SSL key file used to secure etcd communication.
--etcd-prefix string     Default: "/registry"
The prefix to prepend to all resource paths in etcd.
--etcd-servers stringSlice
List of etcd servers to connect with (scheme://ip:port), comma separated.
--etcd-servers-overrides stringSlice
Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
--event-ttl duration     Default: 1h0m0s
Amount of time to retain events.
--external-hostname string
The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery).
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
--goaway-chance float
To prevent HTTP/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.
-h, --help
help for kube-apiserver
--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kubelet-certificate-authority string
Path to a cert file for the certificate authority.
--kubelet-client-certificate string
Path to a client cert file for TLS.
--kubelet-client-key string
Path to a client key file for TLS.
--kubelet-https     Default: true
Use https for kubelet connections.
--kubelet-preferred-address-types stringSlice     Default: [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]
List of the preferred NodeAddressTypes to use for kubelet connections.
--kubelet-timeout duration     Default: 5s
Timeout for kubelet operations.
--kubernetes-service-node-port int
If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
--livez-grace-period duration
This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, /livez will assume that unfinished post-start hooks will complete successfully and therefore return true.
--log-backtrace-at traceLocation     Default: :0
when logging hits line file:N, emit a stack trace
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
log to standard error instead of files
--master-service-namespace string     Default: "default"
DEPRECATED: the namespace from which the Kubernetes master services should be injected into pods.
--max-connection-bytes-per-sec int
If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
--max-mutating-requests-inflight int     Default: 200
The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
--max-requests-inflight int     Default: 400
The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
--min-request-timeout int     Default: 1800
An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
--oidc-ca-file string
If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
--oidc-client-id string
The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
--oidc-groups-claim string
If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
--oidc-groups-prefix string
If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
--oidc-issuer-url string
The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
--oidc-required-claim mapStringString
A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
--oidc-signing-algs stringSlice     Default: [RS256]
Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with an 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1.
--oidc-username-claim string     Default: "sub"
The OpenID claim to use as the user name. Note that claims other than the default ('sub') are not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details.
--oidc-username-prefix string
If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
--profiling     Default: true
Enable profiling via web interface host:port/debug/pprof/
--proxy-client-cert-file string
Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
--proxy-client-key-file string
Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
--request-timeout duration     Default: 1m0s
An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests.
--requestheader-allowed-names stringSlice
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix stringSlice
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-group-headers stringSlice
List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-username-headers stringSlice
List of request headers to inspect for usernames. X-Remote-User is common.
--runtime-config mapStringString
A set of key=value pairs that enable or disable built-in APIs. Supported options are:
v1=true|false for the core API group
<group>/<version>=true|false for a specific API group and version (e.g. apps/v1=true)
api/all=true|false controls all API versions
api/ga=true|false controls all API versions of the form v[0-9]+
api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+
api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+
api/legacy is deprecated, and will be removed in a future version
--secure-port int     Default: 6443
The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0.
--service-account-issuer {service-account-issuer}/.well-known/openid-configuration
Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI. If this option is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature will remain disabled, even if the feature gate is set to true. It is highly recommended that this value comply with the OpenID spec: https://openid.net/specs/openid-connect-discovery-1_0.html. In practice, this means that service-account-issuer must be an https URL. It is also highly recommended that this URL be capable of serving OpenID discovery documents at {service-account-issuer}/.well-known/openid-configuration.
--service-account-jwks-uri string
Overrides the URI for the JSON Web Key Set in the discovery doc served at /.well-known/openid-configuration. This flag is useful if the discovery doc and key set are served to relying parties from a URL other than the API server's external address (as auto-detected or overridden with --external-hostname). Only valid if the ServiceAccountIssuerDiscovery feature gate is enabled.
--service-account-key-file stringArray
File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided
--service-account-lookup     Default: true
If true, validate ServiceAccount tokens exist in etcd as part of authentication.
--service-account-max-token-expiration duration
The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
--service-account-signing-key-file string
Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
--service-cluster-ip-range string
A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
--service-node-port-range portRange     Default: 30000-32767
A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range.
--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--shutdown-delay-duration duration
Time to delay the termination. During that time the server keeps serving requests normally and /healthz returns success, but /readyz immediately returns failure. Graceful termination starts after this delay has elapsed. This can be used to allow a load balancer to stop sending traffic to this server.
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
logs at or above this threshold go to stderr
--storage-backend string
The storage backend for persistence. Options: 'etcd3' (default).
--storage-media-type string     Default: "application/vnd.kubernetes.protobuf"
The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting.
--target-ram-mb int
Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites stringSlice
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey     Default: []
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--token-auth-file string
If set, the file that will be used to secure the secure port of the API server via token authentication.
-v, --v Level
number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
comma-separated list of pattern=N settings for file-filtered logging
--watch-cache     Default: true
Enable watch caching in the apiserver
--watch-cache-sizes stringSlice
Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
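For orientation, a hedged sketch of how a few of the kube-apiserver flags documented above fit together on one command line. All file paths, the etcd endpoint, and the service CIDR are placeholders, and a real control plane needs considerably more configuration than this.

```shell
# Sketch only: every path, the etcd URL, and the CIDR are placeholders.
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --service-cluster-ip-range=10.96.0.0/12 \
  --secure-port=6443 \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction \
  --allow-privileged=true
```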
--add-dir-header
If true, adds the file directory to the header
--admission-control-config-file string
File with admission control configuration.
--advertise-address ip
The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
--allow-privileged
If true, allow privileged containers. [default=false]
--alsologtostderr
log to standard error as well as files
--anonymous-auth     Default: true
Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.
--api-audiences stringSlice
Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.
--apiserver-count int     Default: 1
The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.)
--audit-dynamic-configuration
Enables dynamic audit configuration. This feature also requires the DynamicAuditing feature flag
--audit-log-batch-buffer-size int     Default: 10000
The size of the buffer to store events before batching and writing. Only used in batch mode.
--audit-log-batch-max-size int     Default: 1
The maximum size of a batch. Only used in batch mode.
--audit-log-batch-max-wait duration
The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
--audit-log-batch-throttle-burst int
Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
--audit-log-batch-throttle-enable
Whether batching throttling is enabled. Only used in batch mode.
--audit-log-batch-throttle-qps float32
Maximum average number of batches per second. Only used in batch mode.
--audit-log-format string     Default: "json"
Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json.
--audit-log-maxage int
The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
--audit-log-maxbackup int
The maximum number of old audit log files to retain.
--audit-log-maxsize int
The maximum size in megabytes of the audit log file before it gets rotated.
--audit-log-mode string     Default: "blocking"
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.
--audit-log-path string
If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
--audit-log-truncate-enabled
Whether event and batch truncating is enabled.
--audit-log-truncate-max-batch-size int     Default: 10485760
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.
--audit-log-truncate-max-event-size int     Default: 102400
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
--audit-log-version string     Default: "audit.k8s.io/v1"
API group and version used for serializing audit events written to log.
--audit-policy-file string
Path to the file that defines the audit policy configuration.
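As an illustration of how the audit log flags fit together, here is a minimal sketch that writes a policy file and starts the API server with the log backend. The paths, retention values, and the single Metadata-level rule are hypothetical placeholders, not recommended settings.
```shell
# Hypothetical paths and values, for illustration only.
cat <<'EOF' > /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record request metadata (user, verb, resource) for all requests.
  - level: Metadata
EOF

kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=7 \
  --audit-log-maxbackup=4 \
  --audit-log-maxsize=100
```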
--audit-webhook-batch-buffer-size int     Default: 10000
The size of the buffer to store events before batching and writing. Only used in batch mode.
--audit-webhook-batch-max-size int     Default: 400
The maximum size of a batch. Only used in batch mode.
--audit-webhook-batch-max-wait duration     Default: 30s
The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
--audit-webhook-batch-throttle-burst int     Default: 15
Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
--audit-webhook-batch-throttle-enable     Default: true
Whether batching throttling is enabled. Only used in batch mode.
--audit-webhook-batch-throttle-qps float32     Default: 10
Maximum average number of batches per second. Only used in batch mode.
--audit-webhook-config-file string
Path to a kubeconfig formatted file that defines the audit webhook configuration.
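The webhook configuration file uses the standard kubeconfig layout to describe the remote audit backend. A minimal sketch follows; the endpoint URL and certificate paths are hypothetical.
```shell
# Hypothetical endpoint and certificate paths, for illustration only.
cat <<'EOF' > /etc/kubernetes/audit-webhook.kubeconfig
apiVersion: v1
kind: Config
clusters:
  - name: audit-webhook
    cluster:
      server: https://audit.example.com/events
      certificate-authority: /etc/kubernetes/pki/audit-webhook-ca.crt
users:
  - name: kube-apiserver
    user:
      client-certificate: /etc/kubernetes/pki/audit-client.crt
      client-key: /etc/kubernetes/pki/audit-client.key
contexts:
  - name: default
    context:
      cluster: audit-webhook
      user: kube-apiserver
current-context: default
EOF

kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --audit-webhook-config-file=/etc/kubernetes/audit-webhook.kubeconfig \
  --audit-webhook-mode=batch
```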
--audit-webhook-initial-backoff duration     Default: 10s
The amount of time to wait before retrying the first failed request.
--audit-webhook-mode string     Default: "batch"
Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.
--audit-webhook-truncate-enabled
Whether event and batch truncating is enabled.
--audit-webhook-truncate-max-batch-size int     Default: 10485760
Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.
--audit-webhook-truncate-max-event-size int     Default: 102400
Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.
--audit-webhook-version string     Default: "audit.k8s.io/v1"
API group and version used for serializing audit events written to webhook.
--authentication-token-webhook-cache-ttl duration     Default: 2m0s
The duration to cache responses from the webhook token authenticator.
--authentication-token-webhook-config-file string
File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
--authentication-token-webhook-version string     Default: "v1beta1"
The API version of the authentication.k8s.io TokenReview to send to and expect from the webhook.
--authorization-mode stringSlice     Default: [AlwaysAllow]
Ordered list of plug-ins to do authorization on secure port. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node.
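For example, a cluster that relies on kubelet authorization and RBAC might chain the Node and RBAC authorizers:
```shell
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --authorization-mode=Node,RBAC
```
Modes are evaluated in the order listed; the first authorizer to return an explicit allow or deny decides the request.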
--authorization-policy-file string
File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.
--authorization-webhook-cache-authorized-ttl duration     Default: 5m0s
The duration to cache 'authorized' responses from the webhook authorizer.
--authorization-webhook-cache-unauthorized-ttl duration     Default: 30s
The duration to cache 'unauthorized' responses from the webhook authorizer.
--authorization-webhook-config-file string
File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. The API server will query the remote service to determine access on the API server's secure port.
--authorization-webhook-version string     Default: "v1beta1"
The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string     Default: "/var/run/kubernetes"
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
The provider for cloud services. Empty string for no provider.
--cloud-provider-gce-l7lb-src-cidrs cidrs     Default: 130.211.0.0/22,35.191.0.0/16
CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks
--contention-profiling
Enable lock contention profiling, if profiling is enabled
--cors-allowed-origins stringSlice
List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled.
--default-not-ready-toleration-seconds int     Default: 300
Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-unreachable-toleration-seconds int     Default: 300
Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-watch-cache-size int     Default: 100
Default watch cache size. If zero, watch cache will be disabled for resources that do not have a default watch size set.
--delete-collection-workers int     Default: 1
Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup.
--disable-admission-plugins stringSlice
admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
--egress-selector-config-file string
File with apiserver egress selector configuration.
--enable-admission-plugins stringSlice
admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
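For example, to add plugins on top of the defaults and switch one default plugin off (plugin names taken from the list above):
```shell
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --enable-admission-plugins=NodeRestriction,AlwaysPullImages \
  --disable-admission-plugins=DefaultStorageClass
```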
--enable-aggregator-routing
Turns on aggregator routing requests to endpoints IP rather than cluster IP.
--enable-bootstrap-token-auth
Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.
--enable-garbage-collector     Default: true
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager.
--enable-priority-and-fairness     Default: true
If true and the APIPriorityAndFairness feature gate is enabled, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness
--encryption-provider-config string
The file containing configuration for encryption providers to be used for storing secrets in etcd
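A minimal sketch of such a configuration, encrypting Secrets with an AES-CBC key while keeping the identity provider as a fallback for reads. The key, file path, and provider choice are illustrative assumptions, not recommendations.
```shell
# Hypothetical path and freshly generated key, for illustration only.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat <<EOF > /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
```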
--endpoint-reconciler-type string     Default: "lease"
Use an endpoint reconciler (master-count, lease, none)
--etcd-cafile string
SSL Certificate Authority file used to secure etcd communication.
--etcd-certfile string
SSL certification file used to secure etcd communication.
--etcd-compaction-interval duration     Default: 5m0s
The interval of compaction requests. If 0, the compaction request from apiserver is disabled.
--etcd-count-metric-poll-period duration     Default: 1m0s
Frequency of polling etcd for number of resources per type. 0 disables the metric collection.
--etcd-keyfile string
SSL key file used to secure etcd communication.
--etcd-prefix string     Default: "/registry"
The prefix to prepend to all resource paths in etcd.
--etcd-servers stringSlice
List of etcd servers to connect with (scheme://ip:port), comma separated.
--etcd-servers-overrides stringSlice
Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated.
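For example, to keep high-churn Event objects in a dedicated etcd cluster (the hostnames are placeholders; the quoting keeps the shell from interpreting the semicolon):
```shell
kube-apiserver \
  --etcd-servers=https://etcd-main-0:2379,https://etcd-main-1:2379 \
  --etcd-servers-overrides="/events#https://etcd-events-0:2379;https://etcd-events-1:2379"
```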
--event-ttl duration     Default: 1h0m0s
Amount of time to retain events.
--external-hostname string
The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery).
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
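Gates are passed as a single comma-separated list. For example, to opt into two gates that are off by default at this release (names taken from the list above):
```shell
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --feature-gates=APIPriorityAndFairness=true,ServiceAccountIssuerDiscovery=true
```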
--goaway-chance float
To prevent HTTP/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.
-h, --help
help for kube-apiserver
--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kubelet-certificate-authority string
Path to a cert file for the certificate authority.
--kubelet-client-certificate string
Path to a client cert file for TLS.
--kubelet-client-key string
Path to a client key file for TLS.
--kubelet-https     Default: true
Use https for kubelet connections.
--kubelet-preferred-address-types stringSlice     Default: [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]
List of the preferred NodeAddressTypes to use for kubelet connections.
--kubelet-timeout duration     Default: 5s
Timeout for kubelet operations.
--kubernetes-service-node-port int
If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
--livez-grace-period duration
This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, /livez will assume that unfinished post-start hooks will complete successfully and therefore return true.
--log-backtrace-at traceLocation     Default: :0
when logging hits line file:N, emit a stack trace
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
log to standard error instead of files
--master-service-namespace string     Default: "default"
DEPRECATED: the namespace from which the Kubernetes master services should be injected into pods.
--max-connection-bytes-per-sec int
If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
--max-mutating-requests-inflight int     Default: 200
The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
--max-requests-inflight int     Default: 400
The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
--min-request-timeout int     Default: 1800
An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
--oidc-ca-file string
If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file, otherwise the host's root CA set will be used.
--oidc-client-id string
The client ID for the OpenID Connect client, must be set if oidc-issuer-url is set.
--oidc-groups-claim string
If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental, please see the authentication documentation for further details.
--oidc-groups-prefix string
If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.
--oidc-issuer-url string
The URL of the OpenID issuer, only HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).
--oidc-required-claim mapStringString
A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.
--oidc-signing-algs stringSlice     Default: [RS256]
Comma-separated list of allowed JOSE asymmetric signing algorithms. JWTs with a 'alg' header value not in this list will be rejected. Values are defined by RFC 7518 https://tools.ietf.org/html/rfc7518#section-3.1.
--oidc-username-claim string     Default: "sub"
The OpenID claim to use as the user name. Note that claims other than the default ('sub') are not guaranteed to be unique and immutable. This flag is experimental, please see the authentication documentation for further details.
--oidc-username-prefix string
If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.
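A sketch of the OIDC flags working together; the issuer URL, client ID, and claim names are hypothetical and depend entirely on your identity provider:
```shell
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-username-prefix="oidc:" \
  --oidc-groups-claim=groups \
  --oidc-groups-prefix="oidc:"
```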
--profiling     Default: true
Enable profiling via web interface host:port/debug/pprof/
--proxy-client-cert-file string
Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
--proxy-client-key-file string
Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
--request-timeout duration     Default: 1m0s
An optional field indicating the duration a handler must keep a request open before timing it out. This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests.
--requestheader-allowed-names stringSlice
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix stringSlice
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-group-headers stringSlice
List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-username-headers stringSlice
List of request headers to inspect for usernames. X-Remote-User is common.
--runtime-config mapStringString
A set of key=value pairs that enable or disable built-in APIs. Supported options are:
v1=true|false for the core API group
<group>/<version>=true|false for a specific API group and version (e.g. apps/v1=true)
api/all=true|false controls all API versions
api/ga=true|false controls all API versions of the form v[0-9]+
api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+
api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+
api/legacy is deprecated, and will be removed in a future version
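For example, built-in APIs can be toggled individually using the <group>/<version> form described above; the group/version shown is an illustrative alpha API:
```shell
# Hypothetical example: enable one alpha API group/version.
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --runtime-config=settings.k8s.io/v1alpha1=true
```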
--secure-port int     Default: 6443
The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0.
--service-account-issuer {service-account-issuer}/.well-known/openid-configuration
Identifier of the service account token issuer. The issuer will assert this identifier in "iss" claim of issued tokens. This value is a string or URI. If this option is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature will remain disabled, even if the feature gate is set to true. It is highly recommended that this value comply with the OpenID spec: https://openid.net/specs/openid-connect-discovery-1_0.html. In practice, this means that service-account-issuer must be an https URL. It is also highly recommended that this URL be capable of serving OpenID discovery documents at {service-account-issuer}/.well-known/openid-configuration.
--service-account-jwks-uri string
Overrides the URI for the JSON Web Key Set in the discovery doc served at /.well-known/openid-configuration. This flag is useful if the discovery doc and key set are served to relying parties from a URL other than the API server's external address (as auto-detected or overridden with --external-hostname). Only valid if the ServiceAccountIssuerDiscovery feature gate is enabled.
--service-account-key-file stringArray
File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key-file is provided.
--service-account-lookup     Default: true
If true, validate ServiceAccount tokens exist in etcd as part of authentication.
--service-account-max-token-expiration duration
The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.
--service-account-signing-key-file string
Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
--service-cluster-ip-range string
A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
--service-node-port-range portRange     Default: 30000-32767
A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range.
--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--shutdown-delay-duration duration
Time to delay the termination. During that time the server keeps serving requests normally and /healthz returns success, but /readyz immediately returns failure. Graceful termination starts after this delay has elapsed. This can be used to allow a load balancer to stop sending traffic to this server.
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
logs at or above this threshold go to stderr
--storage-backend string
The storage backend for persistence. Options: 'etcd3' (default).
--storage-media-type string     Default: "application/vnd.kubernetes.protobuf"
The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting.
--target-ram-mb int
Memory limit for apiserver in MB (used to configure sizes of caches, etc.)
--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites stringSlice
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey     Default: []
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--token-auth-file string
If set, the file that will be used to authenticate requests to the secure port of the API server via static token authentication.
-v, --v Level
number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
comma-separated list of pattern=N settings for file-filtered logging
--watch-cache     Default: true
Enable watch caching in the apiserver
--watch-cache-sizes stringSlice
Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size
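For example, following the resource[.group]#size format (the sizes are arbitrary illustrative numbers):
```shell
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --default-watch-cache-size=200 \
  --watch-cache-sizes=configmaps#1000,deployments.apps#500
```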
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md b/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md index 99595543a9cc1..69be42b17e3fb 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md @@ -1,7 +1,7 @@ --- title: kube-controller-manager content_template: templates/tool-reference -weight: 28 +weight: 30 --- {{% capture synopsis %}} @@ -24,875 +24,875 @@ kube-controller-manager [flags] {{% capture options %}}
--add-dir-header
If true, adds the file directory to the header
--allocate-node-cidrs
Should CIDRs for Pods be allocated and set on the cloud provider.
--alsologtostderr
log to standard error as well as files
--attach-detach-reconcile-sync-period duration     Default: 1m0s
The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.
--authentication-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
If false, the authentication-kubeconfig will be used to look up missing authentication configuration from the cluster.
--authentication-token-webhook-cache-ttl duration     Default: 10s
The duration to cache responses from the webhook token authenticator.
--authentication-tolerate-lookup-failure
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
--authorization-always-allow-paths stringSlice     Default: [/healthz]
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
--authorization-webhook-cache-authorized-ttl duration     Default: 10s
The duration to cache 'authorized' responses from the webhook authorizer.
--authorization-webhook-cache-unauthorized-ttl duration     Default: 10s
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--cidr-allocator-type string     Default: "RangeAllocator"
Type of CIDR allocator to use
--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
The provider for cloud services. Empty string for no provider.
--cluster-cidr string
CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
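A sketch of the node CIDR allocation flags used together; the kubeconfig path and CIDR ranges are placeholders:
```shell
kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --node-cidr-mask-size=24 \
  --service-cluster-ip-range=10.96.0.0/12
```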
--cluster-name string     Default: "kubernetes"
The instance prefix for the cluster.
--cluster-signing-cert-file string     Default: "/etc/kubernetes/ca/ca.pem"
Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates
--cluster-signing-key-file string     Default: "/etc/kubernetes/ca/ca.key"
Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates
--concurrent-deployment-syncs int32     Default: 5
The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load
--concurrent-endpoint-syncs int32     Default: 5
The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load
--concurrent-gc-syncs int32     Default: 20
The number of garbage collector workers that are allowed to sync concurrently.
--concurrent-namespace-syncs int32     Default: 10
The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load
--concurrent-replicaset-syncs int32     Default: 5
The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--concurrent-resource-quota-syncs int32     Default: 5
The number of resource quotas that are allowed to sync concurrently. Larger number = more responsive quota management, but more CPU (and network) load
--concurrent-service-endpoint-syncs int32     Default: 5
The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
--concurrent-service-syncs int32     Default: 1
The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load
--concurrent-serviceaccount-token-syncs int32     Default: 5
The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load
--concurrent-statefulset-syncs int32     Default: 5
The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load
--concurrent-ttl-after-finished-syncs int32     Default: 5
The number of TTL-after-finished controller workers that are allowed to sync concurrently.
--concurrent_rc_syncs int32     Default: 5
The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--configure-cloud-routes     Default: true
Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.
--contention-profiling
Enable lock contention profiling, if profiling is enabled
--controller-start-interval duration
Interval between starting controller managers.
--controllers stringSlice     Default: [*]
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished
Disabled-by-default controllers: bootstrapsigner, tokencleaner
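For example, to run all on-by-default controllers, additionally enable the two disabled-by-default ones, and switch off the ttl controller (the quoting keeps the shell from expanding the asterisk; the kubeconfig path is a placeholder):
```shell
kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --controllers='*,bootstrapsigner,tokencleaner,-ttl'
```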
--deployment-controller-sync-period duration     Default: 30s
Period for syncing the deployments.
--disable-attach-detach-reconcile-sync
Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. Use wisely.
--enable-dynamic-provisioning     Default: true
Enable dynamic provisioning for environments that support it.
--enable-garbage-collector     Default: true
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.
--enable-hostpath-provisioner
Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.
--enable-taint-manager     Default: true
WARNING: Beta feature. If set to true, enables NoExecute taints and will evict all non-tolerating Pods running on Nodes tainted with this kind of taint.
--endpoint-updates-batch-period duration
The length of endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoint revisions generated
--endpointslice-updates-batch-period duration
The length of endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoint revisions generated
--experimental-cluster-signing-duration duration     Default: 8760h0m0s
The duration for which signed certificates will be valid.
--external-cloud-volume-plugin string
The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node and volume controllers to work for in tree cloud providers.
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
--flex-volume-plugin-dir string     Default: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Full path of the directory in which the flex volume plugin should search for additional third party volume plugins.
-h, --help
help for kube-controller-manager
--horizontal-pod-autoscaler-cpu-initialization-period duration     Default: 5m0s
The period after pod start when CPU samples might be skipped.
--horizontal-pod-autoscaler-downscale-stabilization duration     Default: 5m0s
The period for which autoscaler will look backwards and not scale down below any recommendation it made during that period.
--horizontal-pod-autoscaler-initial-readiness-delay duration     Default: 30s
The period after pod start during which readiness changes will be treated as initial readiness.
--horizontal-pod-autoscaler-sync-period duration     Default: 15s
The period for syncing the number of pods in horizontal pod autoscaler.
--horizontal-pod-autoscaler-tolerance float     Default: 0.1
The minimum change (from 1.0) in the desired-to-actual metrics ratio for the horizontal pod autoscaler to consider scaling.
--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kube-api-burst int32     Default: 30
Burst to use while talking with kubernetes apiserver.
--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
Content type of requests sent to apiserver.
--kube-api-qps float32     Default: 20
QPS to use while talking with kubernetes apiserver.
--kubeconfig string
Path to kubeconfig file with authorization and master location information.
--large-cluster-size-threshold int32     Default: 50
Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller.
--leader-elect     Default: true
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration     Default: 15s
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration     Default: 10s
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
--leader-elect-resource-lock endpoints     Default: "endpointsleases"
The type of resource object that is used for locking during leader election. Supported options are `endpoints` and `configmaps`.
--leader-elect-resource-name string     Default: "kube-controller-manager"
The name of resource object that is used for locking during leader election.
--leader-elect-resource-namespace string     Default: "kube-system"
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration     Default: 2s
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
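A sketch of the leader-election flags tuned together for a replicated deployment; the particular durations and the kubeconfig path are illustrative, but note that the renew deadline must not exceed the lease duration:
```shell
kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --leader-elect=true \
  --leader-elect-lease-duration=30s \
  --leader-elect-renew-deadline=20s \
  --leader-elect-retry-period=5s \
  --leader-elect-resource-lock=endpointsleases
```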
--log-backtrace-at traceLocation     Default: :0
when logging hits line file:N, emit a stack trace
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
log to standard error instead of files
--master string
The address of the Kubernetes API server (overrides any value in kubeconfig).
--max-endpoints-per-slice int32     Default: 100
The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in fewer endpoint slices, but larger resources. Defaults to 100.
--min-resync-period duration     Default: 12h0m0s
The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.
--namespace-sync-period duration     Default: 5m0s
The period for syncing namespace life-cycle updates
--node-cidr-mask-size int32
Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6.
--node-cidr-mask-size-ipv4 int32
Mask size for IPv4 node cidr in dual-stack cluster. Default is 24.
--node-cidr-mask-size-ipv6 int32
Mask size for IPv6 node cidr in dual-stack cluster. Default is 64.
--node-eviction-rate float32     Default: 0.1
Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters.
--node-monitor-grace-period duration     Default: 40s
Amount of time which we allow a running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.
--node-monitor-period duration     Default: 5s
The period for syncing NodeStatus in NodeController.
--node-startup-grace-period duration     Default: 1m0s
Amount of time which we allow a starting Node to be unresponsive before marking it unhealthy.
--pod-eviction-timeout duration     Default: 5m0s
The grace period for deleting pods on failed nodes.
--profiling     Default: true
Enable profiling via web interface host:port/debug/pprof/
--pv-recycler-increment-timeout-nfs int32     Default: 30
the increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod
--pv-recycler-minimum-timeout-hostpath int32     Default: 60
The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster.
--pv-recycler-minimum-timeout-nfs int32     Default: 300
The minimum ActiveDeadlineSeconds to use for an NFS Recycler pod
--pv-recycler-pod-template-filepath-hostpath string
The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.
--pv-recycler-pod-template-filepath-nfs string
The file path to a pod definition used as a template for NFS persistent volume recycling
--pv-recycler-timeout-increment-hostpath int32     Default: 30
The increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster.
--pvclaimbinder-sync-period duration     Default: 15s
The period for syncing persistent volumes and persistent volume claims
--requestheader-allowed-names stringSlice
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix stringSlice     Default: [x-remote-extra-]
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-group-headers stringSlice     Default: [x-remote-group]
List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-username-headers stringSlice     Default: [x-remote-user]
List of request headers to inspect for usernames. X-Remote-User is common.
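A hedged example of how these request-header flags are commonly combined for front-proxy authentication; the certificate path and common name follow a kubeadm-style layout and are assumptions, not requirements:

```
# Hypothetical kubeadm-style paths and names, shown only for illustration.
kube-controller-manager \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra-
```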
--resource-quota-sync-period duration     Default: 5m0s
The period for syncing quota usage status in the system
--root-ca-file string
If set, this root certificate authority will be included in service account's token secret. This must be a valid PEM-encoded CA bundle.
--route-reconciliation-period duration     Default: 10s
The period for reconciling routes created for Nodes by cloud provider.
--secondary-node-eviction-rate float32     Default: 0.01
Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.
--secure-port int     Default: 10257
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--service-account-private-key-file string
Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.
--service-cluster-ip-range string
CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true
--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
Logs at or above this threshold go to stderr
--terminated-pod-gc-threshold int32     Default: 12500
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites stringSlice
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
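For illustration, a sketch that raises the minimum TLS version and restricts serving to two suites taken from the list above; the selection is an example, not a security recommendation:

```
# Example only: TLS 1.2 minimum with two ECDHE suites from the supported list.
kube-controller-manager \
  --tls-min-version=VersionTLS12 \
  --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
```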
--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey     Default: []
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches take precedence over wildcard matches, and explicit domain patterns take precedence over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key flag multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--unhealthy-zone-threshold float32     Default: 0.55
Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for zone to be treated as unhealthy.
--use-service-account-credentials
If true, use individual service account credentials for each controller.
-v, --v Level
Number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
Comma-separated list of pattern=N settings for file-filtered logging
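A short, illustrative logging invocation that combines the verbosity flags above; the module name given to --vmodule is hypothetical:

```
# Illustrative: global verbosity 2, extra detail for one (hypothetical) source file.
kube-controller-manager \
  --v=2 \
  --vmodule=garbagecollector=4 \
  --logtostderr=true
```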
--add-dir-header
If true, adds the file directory to the header
--allocate-node-cidrs
Whether CIDRs for Pods should be allocated and set on the cloud provider.
--alsologtostderr
Log to standard error as well as files
--attach-detach-reconcile-sync-period duration     Default: 1m0s
The reconciler sync wait time between volume attach detach. This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.
--authentication-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
If false, the authentication-kubeconfig will be used to look up missing authentication configuration from the cluster.
--authentication-token-webhook-cache-ttl duration     Default: 10s
The duration to cache responses from the webhook token authenticator.
--authentication-tolerate-lookup-failure
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
--authorization-always-allow-paths stringSlice     Default: [/healthz]
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
--authorization-webhook-cache-authorized-ttl duration     Default: 10s
The duration to cache 'authorized' responses from the webhook authorizer.
--authorization-webhook-cache-unauthorized-ttl duration     Default: 10s
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--cidr-allocator-type string     Default: "RangeAllocator"
Type of CIDR allocator to use
--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config string
The path to the cloud provider configuration file. Empty string for no configuration file.
--cloud-provider string
The provider for cloud services. Empty string for no provider.
--cluster-cidr string
CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true
--cluster-name string     Default: "kubernetes"
The instance prefix for the cluster.
--cluster-signing-cert-file string     Default: "/etc/kubernetes/ca/ca.pem"
Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates
--cluster-signing-key-file string     Default: "/etc/kubernetes/ca/ca.key"
Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates
--concurrent-deployment-syncs int32     Default: 5
The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load
--concurrent-endpoint-syncs int32     Default: 5
The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load
--concurrent-gc-syncs int32     Default: 20
The number of garbage collector workers that are allowed to sync concurrently.
--concurrent-namespace-syncs int32     Default: 10
The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load
--concurrent-replicaset-syncs int32     Default: 5
The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--concurrent-resource-quota-syncs int32     Default: 5
The number of resource quotas that are allowed to sync concurrently. Larger number = more responsive quota management, but more CPU (and network) load
--concurrent-service-endpoint-syncs int32     Default: 5
The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.
--concurrent-service-syncs int32     Default: 1
The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load
--concurrent-serviceaccount-token-syncs int32     Default: 5
The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load
--concurrent-statefulset-syncs int32     Default: 5
The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load
--concurrent-ttl-after-finished-syncs int32     Default: 5
The number of TTL-after-finished controller workers that are allowed to sync concurrently.
--concurrent_rc_syncs int32     Default: 5
The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
--configure-cloud-routes     Default: true
Whether CIDRs allocated by --allocate-node-cidrs should be configured on the cloud provider.
--contention-profiling
Enable lock contention profiling, if profiling is enabled
--controller-start-interval duration
Interval between starting controller managers.
--controllers stringSlice     Default: [*]
A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
All controllers: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished
Disabled-by-default controllers: bootstrapsigner, tokencleaner
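A hedged example of this selector syntax, turning on the two disabled-by-default controllers and switching one default controller off:

```
# Illustrative: all defaults, plus bootstrapsigner and tokencleaner, minus ttl.
kube-controller-manager \
  --controllers=*,bootstrapsigner,tokencleaner,-ttl
```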
--deployment-controller-sync-period duration     Default: 30s
Period for syncing the deployments.
--disable-attach-detach-reconcile-sync
Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. Use wisely.
--enable-dynamic-provisioning     Default: true
Enable dynamic provisioning for environments that support it.
--enable-garbage-collector     Default: true
Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.
--enable-hostpath-provisioner
Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.
--enable-taint-manager     Default: true
WARNING: Beta feature. If set to true, enables NoExecute Taints and will evict all non-tolerating Pods running on Nodes tainted with this kind of Taint.
--endpoint-updates-batch-period duration
The length of the endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoint updates. Larger number = higher endpoint programming latency, but a lower number of endpoint revisions generated
--endpointslice-updates-batch-period duration
The length of the endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoint updates. Larger number = higher endpoint programming latency, but a lower number of endpoint revisions generated
--experimental-cluster-signing-duration duration     Default: 8760h0m0s
The duration for which signed certificates will be valid.
--external-cloud-volume-plugin string
The plugin to use when the cloud provider is set to external. Can be empty; should only be set when --cloud-provider is external. Currently used to allow node and volume controllers to work for in-tree cloud providers.
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
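For illustration, a sketch that enables two gates from the list above; the gate choices are arbitrary examples, and gates generally need to be set consistently on every component that uses them:

```
# Example gates only; pick gates that match your cluster's Kubernetes version.
kube-controller-manager \
  --feature-gates=IPv6DualStack=true,TTLAfterFinished=true
```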
--flex-volume-plugin-dir string     Default: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Full path of the directory in which the flex volume plugin should search for additional third party volume plugins.
-h, --help
help for kube-controller-manager
--horizontal-pod-autoscaler-cpu-initialization-period duration     Default: 5m0s
The period after pod start when CPU samples might be skipped.
--horizontal-pod-autoscaler-downscale-stabilization duration     Default: 5m0s
The period for which the autoscaler will look backwards and not scale down below any recommendation it made during that period.
--horizontal-pod-autoscaler-initial-readiness-delay duration     Default: 30s
The period after pod start during which readiness changes will be treated as initial readiness.
--horizontal-pod-autoscaler-sync-period duration     Default: 15s
The period for syncing the number of pods in horizontal pod autoscaler.
--horizontal-pod-autoscaler-tolerance float     Default: 0.1
The minimum change (from 1.0) in the desired-to-actual metrics ratio for the horizontal pod autoscaler to consider scaling.
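To make the tolerance concrete: with the default of 0.1, a desired-to-actual ratio anywhere between 0.9 and 1.1 does not trigger scaling. A hedged example that widens that band and slows the sync loop (values are illustrative):

```
# Illustrative: ignore ratios between 0.8 and 1.2, re-evaluate every 30s.
kube-controller-manager \
  --horizontal-pod-autoscaler-tolerance=0.2 \
  --horizontal-pod-autoscaler-sync-period=30s
```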
--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kube-api-burst int32     Default: 30
Burst to use while talking with kubernetes apiserver.
--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
Content type of requests sent to apiserver.
--kube-api-qps float32     Default: 20
QPS to use while talking with kubernetes apiserver.
--kubeconfig string
Path to kubeconfig file with authorization and master location information.
--large-cluster-size-threshold int32     Default: 50
Number of nodes from which NodeController treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller.
--leader-elect     Default: true
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration     Default: 15s
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration     Default: 10s
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
--leader-elect-resource-lock endpoints     Default: "endpointsleases"
The type of resource object that is used for locking during leader election. Supported options are `endpoints` (default) and `configmaps`.
--leader-elect-resource-name string     Default: "kube-controller-manager"
The name of resource object that is used for locking during leader election.
--leader-elect-resource-namespace string     Default: "kube-system"
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration     Default: 2s
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
--log-backtrace-at traceLocation     Default: :0
When logging hits line file:N, emit a stack trace
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
Log to standard error instead of files
--master string
The address of the Kubernetes API server (overrides any value in kubeconfig).
--max-endpoints-per-slice int32     Default: 100
The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in fewer EndpointSlices, but larger resources. Defaults to 100.
--min-resync-period duration     Default: 12h0m0s
The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.
--namespace-sync-period duration     Default: 5m0s
The period for syncing namespace life-cycle updates
--node-cidr-mask-size int32
Mask size for node CIDR in cluster. Default is 24 for IPv4 and 64 for IPv6.
--node-cidr-mask-size-ipv4 int32
Mask size for IPv4 node CIDR in dual-stack cluster. Default is 24.
--node-cidr-mask-size-ipv6 int32
Mask size for IPv6 node CIDR in dual-stack cluster. Default is 64.
--node-eviction-rate float32     Default: 0.1
Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters.
--node-monitor-grace-period duration     Default: 40s
Amount of time that we allow a running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N is the number of retries allowed for the kubelet to post node status.
--node-monitor-period duration     Default: 5s
The period for syncing NodeStatus in NodeController.
--node-startup-grace-period duration     Default: 1m0s
Amount of time that we allow a starting Node to be unresponsive before marking it unhealthy.
--pod-eviction-timeout duration     Default: 5m0s
The grace period for deleting pods on failed nodes.
--profiling     Default: true
Enable profiling via web interface host:port/debug/pprof/
--pv-recycler-increment-timeout-nfs int32     Default: 30
The increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod
--pv-recycler-minimum-timeout-hostpath int32     Default: 60
The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod. This is for development and testing only and will not work in a multi-node cluster.
--pv-recycler-minimum-timeout-nfs int32     Default: 300
The minimum ActiveDeadlineSeconds to use for an NFS Recycler pod
--pv-recycler-pod-template-filepath-hostpath string
The file path to a pod definition used as a template for HostPath persistent volume recycling. This is for development and testing only and will not work in a multi-node cluster.
--pv-recycler-pod-template-filepath-nfs string
The file path to a pod definition used as a template for NFS persistent volume recycling
--pv-recycler-timeout-increment-hostpath int32     Default: 30
The increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod. This is for development and testing only and will not work in a multi-node cluster.
--pvclaimbinder-sync-period duration     Default: 15s
The period for syncing persistent volumes and persistent volume claims
--requestheader-allowed-names stringSlice
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix stringSlice     Default: [x-remote-extra-]
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-group-headers stringSlice     Default: [x-remote-group]
List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-username-headers stringSlice     Default: [x-remote-user]
List of request headers to inspect for usernames. X-Remote-User is common.
--resource-quota-sync-period duration     Default: 5m0s
The period for syncing quota usage status in the system
--root-ca-file string
If set, this root certificate authority will be included in service account's token secret. This must be a valid PEM-encoded CA bundle.
--route-reconciliation-period duration     Default: 10s
The period for reconciling routes created for Nodes by cloud provider.
--secondary-node-eviction-rate float32     Default: 0.01
Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.
--secure-port int     Default: 10257
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--service-account-private-key-file string
Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.
--service-cluster-ip-range string
CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true
--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
Logs at or above this threshold go to stderr
--terminated-pod-gc-threshold int32     Default: 12500
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites stringSlice
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey     Default: []
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches take precedence over wildcard matches, and explicit domain patterns take precedence over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key flag multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--unhealthy-zone-threshold float32     Default: 0.55
Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for zone to be treated as unhealthy.
--use-service-account-credentials
If true, use individual service account credentials for each controller.
-v, --v Level
Number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
Comma-separated list of pattern=N settings for file-filtered logging
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md
index c8780d4626270..1ad3f6ee15568 100644
--- a/content/en/docs/reference/command-line-tools-reference/kube-proxy.md
+++ b/content/en/docs/reference/command-line-tools-reference/kube-proxy.md
@@ -1,7 +1,7 @@
 ---
 title: kube-proxy
 content_template: templates/tool-reference
-weight: 28
+weight: 30
 ---
 
 {{% capture synopsis %}}
@@ -23,315 +23,315 @@ kube-proxy [flags]
 {{% capture options %}}
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces)
--cleanup
If true, clean up iptables and ipvs rules and exit.
--cluster-cidr string
The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead
--config string
The path to the configuration file.
--config-sync-period duration     Default: 15m0s
How often configuration from the apiserver is refreshed. Must be greater than 0.
--conntrack-max-per-core int32     Default: 32768
Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).
--conntrack-min int32     Default: 131072
Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).
--conntrack-tcp-timeout-close-wait duration     Default: 1h0m0s
NAT timeout for TCP connections in the CLOSE_WAIT state
--conntrack-tcp-timeout-established duration     Default: 24h0m0s
Idle timeout for established TCP connections (0 to leave as-is)
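A hedged example of tuning these conntrack settings for a connection-heavy node; the numbers are illustrative only:

```
# Example values: more tracked connections per core, shorter established timeout.
kube-proxy \
  --conntrack-max-per-core=65536 \
  --conntrack-min=262144 \
  --conntrack-tcp-timeout-established=4h
```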
--detect-local-mode LocalMode
Mode to use to detect local traffic
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
--healthz-bind-address ipport     Default: 0.0.0.0:10256
The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable.
-h, --help
help for kube-proxy
--hostname-override string
If non-empty, will use this string as identification instead of the actual hostname.
--iptables-masquerade-bit int32     Default: 14
If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].
--iptables-min-sync-period duration
The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
--iptables-sync-period duration     Default: 30s
The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
--ipvs-exclude-cidrs stringSlice
A comma-separated list of CIDRs which the ipvs proxier should not touch when cleaning up IPVS rules.
--ipvs-min-sync-period duration
The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
--ipvs-scheduler string
The ipvs scheduler type when proxy mode is ipvs
--ipvs-strict-arp
Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2
--ipvs-sync-period duration     Default: 30s
The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
--ipvs-tcp-timeout duration
The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--ipvs-tcpfin-timeout duration
The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--ipvs-udp-timeout duration
The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--kube-api-burst int32     Default: 10
Burst to use while talking with kubernetes apiserver
--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
Content type of requests sent to apiserver.
--kube-api-qps float32     Default: 5
QPS to use while talking with kubernetes apiserver
--kubeconfig string
Path to kubeconfig file with authorization information (the master location is set by the master flag).
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--masquerade-all
If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs (this is not commonly needed)
--master string
The address of the Kubernetes API server (overrides any value in kubeconfig)
--metrics-bind-address ipport     Default: 127.0.0.1:10249
The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable.
--nodeport-addresses stringSlice
A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
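For example (the address blocks are illustrative), restricting NodePorts to two private ranges:

```
# Illustrative: serve NodePorts only on addresses within these blocks.
kube-proxy --nodeport-addresses=10.0.0.0/8,192.168.0.0/16
```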
--oom-score-adj int32     Default: -999
The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]
--profiling
If true, enables profiling via web interface on /debug/pprof handler.
--proxy-mode ProxyMode
Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs'. If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected (regardless of how) but the system's kernel or iptables version is insufficient, this always falls back to the userspace proxy.
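A hedged sketch of selecting the ipvs mode with a round-robin scheduler; the cluster CIDR is an assumed example, and the node must have the required IPVS kernel modules available:

```
# Illustrative ipvs setup; 'rr' is the round-robin IPVS scheduler.
kube-proxy \
  --proxy-mode=ipvs \
  --ipvs-scheduler=rr \
  --cluster-cidr=10.244.0.0/16
```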
--proxy-port-range port-range
Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.
--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--udp-timeout duration     Default: 250ms
How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
--version version[=true]
Print version information and quit
--write-config-to string
If set, write the default configuration values to this file and exit.
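A possible two-step workflow that pairs this flag with --config; the file path is hypothetical:

```
# 1. Dump the built-in defaults to a file (then edit as needed).
kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml
# 2. Run kube-proxy from that configuration file.
kube-proxy --config=/tmp/kube-proxy-defaults.yaml
```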
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces)
--cleanup
If true, clean up iptables and ipvs rules and exit.
--cluster-cidr string
The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead
--config string
The path to the configuration file.
--config-sync-period duration     Default: 15m0s
How often configuration from the apiserver is refreshed. Must be greater than 0.
--conntrack-max-per-core int32     Default: 32768
Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).
--conntrack-min int32     Default: 131072
Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).
--conntrack-tcp-timeout-close-wait duration     Default: 1h0m0s
NAT timeout for TCP connections in the CLOSE_WAIT state
--conntrack-tcp-timeout-established duration     Default: 24h0m0s
Idle timeout for established TCP connections (0 to leave as-is)
--detect-local-mode LocalMode
Mode to use to detect local traffic
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
--healthz-bind-address ipport     Default: 0.0.0.0:10256
The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable.
-h, --help
help for kube-proxy
--hostname-override string
If non-empty, will use this string as identification instead of the actual hostname.
--iptables-masquerade-bit int32     Default: 14
If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].
--iptables-min-sync-period duration
The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
--iptables-sync-period duration     Default: 30s
The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
--ipvs-exclude-cidrs stringSlice
A comma-separated list of CIDRs which the ipvs proxier should not touch when cleaning up IPVS rules.
--ipvs-min-sync-period duration
The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').
--ipvs-scheduler string
The ipvs scheduler type when proxy mode is ipvs
--ipvs-strict-arp
Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2
--ipvs-sync-period duration     Default: 30s
The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.
--ipvs-tcp-timeout duration
The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--ipvs-tcpfin-timeout duration
The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--ipvs-udp-timeout duration
The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').
--kube-api-burst int32     Default: 10
Burst to use while talking with kubernetes apiserver
--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
Content type of requests sent to apiserver.
--kube-api-qps float32     Default: 5
QPS to use while talking with kubernetes apiserver
--kubeconfig string
Path to kubeconfig file with authorization information (the master location is set by the master flag).
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--masquerade-all
If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs (this is not commonly needed)
--master string
The address of the Kubernetes API server (overrides any value in kubeconfig)
--metrics-bind-address ipport     Default: 127.0.0.1:10249
The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable.
--nodeport-addresses stringSlice
A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses.
--oom-score-adj int32     Default: -999
The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]
--profiling
If true, enables profiling via web interface on /debug/pprof handler.
--proxy-mode ProxyMode
Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs'. If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected (regardless of how) but the system's kernel or iptables version is insufficient, this always falls back to the userspace proxy.
--proxy-port-range port-range
Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.
--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--udp-timeout duration     Default: 250ms
How long an idle UDP connection will be kept open (e.g. '250ms', '2s'). Must be greater than 0. Only applicable for proxy-mode=userspace
--version version[=true]
Print version information and quit
--write-config-to string
If set, write the default configuration values to this file and exit.
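One workflow that ties several of these flags together is to dump the defaults with --write-config-to, edit the resulting file, and then start kube-proxy from it. The --config flag and the paths below are assumptions for illustration only:
```bash
# Sketch: capture defaults, edit, then run from the file (flag and paths are assumptions)
kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml
# ...edit /tmp/kube-proxy-defaults.yaml as needed...
kube-proxy --config=/tmp/kube-proxy-defaults.yaml --hostname-override="$(hostname)"
```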
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md index 1a4b0882addc9..f807c5d0241b3 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -1,7 +1,7 @@ --- title: kube-scheduler content_template: templates/tool-reference -weight: 28 +weight: 30 --- {{% capture synopsis %}} @@ -13,7 +13,7 @@ and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed -through the API as necessary. See [scheduling](https://kubernetes.io/docs/concepts/scheduling/) +through the API as necessary. See [scheduling](https://kubernetes.io/docs/concepts/scheduling-eviction/) for more information about scheduling and the kube-scheduler component. ``` @@ -24,490 +24,490 @@ kube-scheduler [flags] {{% capture options %}} - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--add-dir-header
If true, adds the file directory to the header
--address string     Default: "0.0.0.0"
DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead.
--algorithm-provider string
DEPRECATED: the scheduling algorithm provider to use, one of: ClusterAutoscalerProvider | DefaultProvider
--alsologtostderr
log to standard error as well as files
--authentication-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
If false, the authentication-kubeconfig will be used to look up missing authentication configuration from the cluster.
--authentication-token-webhook-cache-ttl duration     Default: 10s
The duration to cache responses from the webhook token authenticator.
--authentication-tolerate-lookup-failure     Default: true
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
--authorization-always-allow-paths stringSlice     Default: [/healthz]
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
--authorization-webhook-cache-authorized-ttl duration     Default: 10s
The duration to cache 'authorized' responses from the webhook authorizer.
--authorization-webhook-cache-unauthorized-ttl duration     Default: 10s
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--config string
The path to the configuration file. Flags override values in this file.
--contention-profiling     Default: true
DEPRECATED: enable lock contention profiling, if profiling is enabled
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
--hard-pod-affinity-symmetric-weight int32     Default: 1
DEPRECATED: RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule corresponding to every RequiredDuringScheduling affinity rule. --hard-pod-affinity-symmetric-weight represents the weight of the implicit PreferredDuringScheduling affinity rule. Must be in the range 0-100. This option was moved to the policy configuration file.
-h, --help
help for kube-scheduler
--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kube-api-burst int32     Default: 100
DEPRECATED: burst to use while talking with kubernetes apiserver
--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
DEPRECATED: content type of requests sent to apiserver.
--kube-api-qps float32     Default: 50
DEPRECATED: QPS to use while talking with kubernetes apiserver
--kubeconfig string
DEPRECATED: path to kubeconfig file with authorization and master location information.
--leader-elect     Default: true
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration     Default: 15s
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration     Default: 10s
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
--leader-elect-resource-lock endpoints     Default: "endpointsleases"
The type of resource object that is used for locking during leader election. Supported options are `endpoints` (default) and `configmaps`.
--leader-elect-resource-name string     Default: "kube-scheduler"
The name of resource object that is used for locking during leader election.
--leader-elect-resource-namespace string     Default: "kube-system"
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration     Default: 2s
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
--lock-object-name string     Default: "kube-scheduler"
DEPRECATED: define the name of the lock object. Will be removed in favor of leader-elect-resource-name
--lock-object-namespace string     Default: "kube-system"
DEPRECATED: define the namespace of the lock object. Will be removed in favor of leader-elect-resource-namespace.
--log-backtrace-at traceLocation     Default: :0
when logging hits line file:N, emit a stack trace
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
log to standard error instead of files
--master string
The address of the Kubernetes API server (overrides any value in kubeconfig)
--policy-config-file string
DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true
--policy-configmap string
DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='policy.cfg'
--policy-configmap-namespace string     Default: "kube-system"
DEPRECATED: the namespace where policy ConfigMap is located. The kube-system namespace will be used if this is not provided or is empty.
--port int     Default: 10251
DEPRECATED: the port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve plain HTTP at all. See --secure-port instead.
--profiling     Default: true
DEPRECATED: enable profiling via web interface host:port/debug/pprof/
--requestheader-allowed-names stringSlice
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix stringSlice     Default: [x-remote-extra-]
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-group-headers stringSlice     Default: [x-remote-group]
List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-username-headers stringSlice     Default: [x-remote-user]
List of request headers to inspect for usernames. X-Remote-User is common.
--scheduler-name string     Default: "default-scheduler"
DEPRECATED: name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's "spec.schedulerName".
--secure-port int     Default: 10259
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The accepted format of the version is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
logs at or above this threshold go to stderr
--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites stringSlice
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey     Default: []
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches take precedence over wildcard matches; explicit domain patterns take precedence over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key flag multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
--use-legacy-policy-config
DEPRECATED: when set to true, the scheduler ignores the policy ConfigMap and uses the policy config file.
-v, --v Level
number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
comma-separated list of pattern=N settings for file-filtered logging
--write-config-to string
If set, write the configuration values to this file and exit.
++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
--add-dir-header
If true, adds the file directory to the header
--address string     Default: "0.0.0.0"
DEPRECATED: the IP address on which to listen for the --port port (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). See --bind-address instead.
--algorithm-provider string
DEPRECATED: the scheduling algorithm provider to use, one of: ClusterAutoscalerProvider | DefaultProvider
--alsologtostderr
log to standard error as well as files
--authentication-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.
--authentication-skip-lookup
If false, the authentication-kubeconfig will be used to look up missing authentication configuration from the cluster.
--authentication-token-webhook-cache-ttl duration     Default: 10s
The duration to cache responses from the webhook token authenticator.
--authentication-tolerate-lookup-failure     Default: true
If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
--authorization-always-allow-paths stringSlice     Default: [/healthz]
A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.
--authorization-kubeconfig string
kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.
--authorization-webhook-cache-authorized-ttl duration     Default: 10s
The duration to cache 'authorized' responses from the webhook authorizer.
--authorization-webhook-cache-unauthorized-ttl duration     Default: 10s
The duration to cache 'unauthorized' responses from the webhook authorizer.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--bind-address ip     Default: 0.0.0.0
The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.
--cert-dir string
The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.
--client-ca-file string
If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--config string
The path to the configuration file. Flags override values in this file.
--contention-profiling     Default: true
DEPRECATED: enable lock contention profiling, if profiling is enabled
--feature-gates mapStringBool
A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (ALPHA - default=false)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
AllowInsecureBackendProxy=true|false (BETA - default=true)
AnyVolumeDataSource=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
CSIMigrationAWS=true|false (BETA - default=false)
CSIMigrationAWSComplete=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (BETA - default=false)
CSIMigrationGCEComplete=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (BETA - default=false)
CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultIngressClass=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (BETA - default=true)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HugePageStorageMediumSize=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
ImmutableEphemeralVolumes=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodDisruptionBudget=true|false (BETA - default=true)
PodOverhead=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
QOSReserved=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
SelectorIndex=true|false (ALPHA - default=false)
ServerSideApply=true|false (BETA - default=true)
ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
ServiceAppProtocol=true|false (ALPHA - default=false)
ServiceNodeExclusion=true|false (ALPHA - default=false)
ServiceTopology=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
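As a sketch of the expected syntax, the invocation below enables two of the gates listed above; any combination of the listed gates can be passed the same way:
```bash
# Sketch: enabling selected feature gates (gate names taken from the list above)
kube-scheduler --feature-gates=EvenPodsSpread=true,TTLAfterFinished=true
```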
--hard-pod-affinity-symmetric-weight int32     Default: 1
DEPRECATED: RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule corresponding to every RequiredDuringScheduling affinity rule. --hard-pod-affinity-symmetric-weight represents the weight of the implicit PreferredDuringScheduling affinity rule. Must be in the range 0-100. This option was moved to the policy configuration file.
-h, --help
help for kube-scheduler
--http2-max-streams-per-connection int
The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--kube-api-burst int32     Default: 100
DEPRECATED: burst to use while talking with kubernetes apiserver
--kube-api-content-type string     Default: "application/vnd.kubernetes.protobuf"
DEPRECATED: content type of requests sent to apiserver.
--kube-api-qps float32     Default: 50
DEPRECATED: QPS to use while talking with kubernetes apiserver
--kubeconfig string
DEPRECATED: path to kubeconfig file with authorization and master location information.
--leader-elect     Default: true
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration     Default: 15s
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration     Default: 10s
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
--leader-elect-resource-lock endpoints     Default: "endpointsleases"
The type of resource object that is used for locking during leader election. Supported options are `endpoints` (default) and `configmaps`.
--leader-elect-resource-name string     Default: "kube-scheduler"
The name of resource object that is used for locking during leader election.
--leader-elect-resource-namespace string     Default: "kube-system"
The namespace of resource object that is used for locking during leader election.
--leader-elect-retry-period duration     Default: 2s
The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
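When running replicated schedulers, the leader-election flags are usually configured as a group. The sketch below simply spells out the documented defaults to show the expected value formats:
```bash
# Sketch: explicit leader-election settings (values match the documented defaults)
kube-scheduler \
  --leader-elect=true \
  --leader-elect-lease-duration=15s \
  --leader-elect-renew-deadline=10s \
  --leader-elect-retry-period=2s \
  --leader-elect-resource-name=kube-scheduler \
  --leader-elect-resource-namespace=kube-system
```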
--lock-object-name string     Default: "kube-scheduler"
DEPRECATED: define the name of the lock object. Will be removed in favor of leader-elect-resource-name
--lock-object-namespace string     Default: "kube-system"
DEPRECATED: define the namespace of the lock object. Will be removed in favor of leader-elect-resource-namespace.
--log-backtrace-at traceLocation     Default: :0
when logging hits line file:N, emit a stack trace
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
log to standard error instead of files
--master string
The address of the Kubernetes API server (overrides any value in kubeconfig)
--policy-config-file string
DEPRECATED: file with scheduler policy configuration. This file is used if policy ConfigMap is not provided or --use-legacy-policy-config=true
--policy-configmap string
DEPRECATED: name of the ConfigMap object that contains scheduler's policy configuration. It must exist in the system namespace before scheduler initialization if --use-legacy-policy-config=false. The config must be provided as the value of an element in 'Data' map with the key='policy.cfg'
--policy-configmap-namespace string     Default: "kube-system"
DEPRECATED: the namespace where policy ConfigMap is located. The kube-system namespace will be used if this is not provided or is empty.
--port int     Default: 10251
DEPRECATED: the port on which to serve HTTP insecurely without authentication and authorization. If 0, don't serve plain HTTP at all. See --secure-port instead.
--profiling     Default: true
DEPRECATED: enable profiling via web interface host:port/debug/pprof/
--requestheader-allowed-names stringSlice
List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string
Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix stringSlice     Default: [x-remote-extra-]
List of request header prefixes to inspect. X-Remote-Extra- is suggested.
--requestheader-group-headers stringSlice     Default: [x-remote-group]
List of request headers to inspect for groups. X-Remote-Group is suggested.
--requestheader-username-headers stringSlice     Default: [x-remote-user]
List of request headers to inspect for usernames. X-Remote-User is common.
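The authentication, authorization, and request-header flags above are normally wired up together so the scheduler's secure endpoints can verify callers. The kubeconfig paths, CA file path, and allowed client name below are illustrative assumptions, not required values:
```bash
# Sketch: authn/authz wiring for the secure port (paths and the allowed name are placeholders)
kube-scheduler \
  --authentication-kubeconfig=/etc/kubernetes/scheduler.conf \
  --authorization-kubeconfig=/etc/kubernetes/scheduler.conf \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra-
```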
--scheduler-name string     Default: "default-scheduler"
DEPRECATED: name of the scheduler, used to select which pods will be processed by this scheduler, based on pod's "spec.schedulerName".
--secure-port int     Default: 10259
The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.
--show-hidden-metrics-for-version string
The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The accepted format of the version is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
logs at or above this threshold go to stderr
--tls-cert-file string
File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites stringSlice
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
--tls-min-version string
Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string
File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey     Default: []
A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches take precedence over wildcard matches; explicit domain patterns take precedence over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key flag multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".
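For HTTPS serving, the TLS flags are typically combined as in the sketch below; the certificate paths and domain patterns are placeholders:
```bash
# Sketch: HTTPS serving with one extra SNI certificate (paths and domains are placeholders)
kube-scheduler \
  --bind-address=0.0.0.0 \
  --secure-port=10259 \
  --tls-cert-file=/var/lib/kube-scheduler/server.crt \
  --tls-private-key-file=/var/lib/kube-scheduler/server.key \
  --tls-min-version=VersionTLS12 \
  --tls-sni-cert-key="example.crt,example.key:*.example.com,example.com"
```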
--use-legacy-policy-config
DEPRECATED: when set to true, the scheduler ignores the policy ConfigMap and uses the policy config file.
-v, --v Level
number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
comma-separated list of pattern=N settings for file-filtered logging
--write-config-to string
If set, write the configuration values to this file and exit.
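As with kube-proxy, --write-config-to pairs naturally with --config: capture the current settings to a file, adjust it, and restart the scheduler from that file (paths are illustrative):
```bash
# Sketch: round-tripping the scheduler configuration through a file (illustrative paths)
kube-scheduler --write-config-to=/tmp/kube-scheduler.yaml
# ...edit /tmp/kube-scheduler.yaml as needed...
kube-scheduler --config=/tmp/kube-scheduler.yaml
```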
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index ef98ab814f2f7..660463c64d57f 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -12,7 +12,7 @@ node. It can register the node with the apiserver using one of: the hostname; a The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms (primarily through the apiserver) and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes. -Other than from an PodSpec from the apiserver, there are three ways that a container manifest can be provided to the Kubelet. +Other than from a PodSpec from the apiserver, there are three ways that a container manifest can be provided to the Kubelet. File: Path passed as a flag on the command line. Files under this path will be monitored periodically for updates. The monitoring period is 20s by default and is configurable via a flag. @@ -390,7 +390,7 @@ kubelet [flags] --enable-cadvisor-json-endpoints - Enable cAdvisor json /spec and /stats/* endpoints. (default true) + Enable cAdvisor json /spec and /stats/* endpoints. (default false) (DEPRECATED: will be removed in a future version) @@ -916,7 +916,7 @@ kubelet [flags] --pod-infra-container-image string - The image whose network/ipc namespaces containers in each pod will use. This docker-specific flag only works when container-runtime is set to docker. (default "k8s.gcr.io/pause:3.1") + The image whose network/ipc namespaces containers in each pod will use. This docker-specific flag only works when container-runtime is set to docker. (default "k8s.gcr.io/pause:3.2") diff --git a/content/en/docs/reference/glossary/cloud-controller-manager.md b/content/en/docs/reference/glossary/cloud-controller-manager.md index 974b8539dfa68..c78bf393cb215 100755 --- a/content/en/docs/reference/glossary/cloud-controller-manager.md +++ b/content/en/docs/reference/glossary/cloud-controller-manager.md @@ -2,18 +2,23 @@ title: Cloud Controller Manager id: cloud-controller-manager date: 2018-04-12 -full_link: /docs/tasks/administer-cluster/running-cloud-controller/ +full_link: /docs/concepts/architecture/cloud-controller/ short_description: > - Cloud Controller Manager is an alpha feature in 1.8. In upcoming releases it will be the preferred way to integrate Kubernetes with any cloud. - + Control plane component that integrates Kubernetes with third-party cloud providers. aka: tags: - core-object - architecture - operation --- - Cloud Controller Manager is an alpha feature in 1.8. In upcoming releases it will be the preferred way to integrate Kubernetes with any cloud. + A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component +that embeds cloud-specific control logic. The cloud controller manager lets you link your +cluster into your cloud provider's API, and separates out the components that interact +with that cloud platform from components that just interact with your cluster. + + - +By decoupling the interoperability logic between Kubernetes and the underlying cloud +infrastructure, the cloud-controller-manager component enables cloud providers to release +features at a different pace compared to the main Kubernetes project. 
-Kubernetes v1.6 contains a new binary called cloud-controller-manager. cloud-controller-manager is a daemon that embeds cloud-specific control loops. These cloud-specific control loops were originally in the kube-controller-manager. Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the cloud-controller-manager binary allows cloud vendors to evolve independently from the core Kubernetes code. diff --git a/content/en/docs/reference/glossary/configmap.md b/content/en/docs/reference/glossary/configmap.md index b541a78762cdd..169372c95470c 100755 --- a/content/en/docs/reference/glossary/configmap.md +++ b/content/en/docs/reference/glossary/configmap.md @@ -2,16 +2,19 @@ title: ConfigMap id: configmap date: 2018-04-12 -full_link: /docs/tasks/configure-pod-container/configure-pod-configmap/ +full_link: /docs/concepts/configuration/configmap/ short_description: > - An API object used to store non-confidential data in key-value pairs. Can be consumed as environment variables, command-line arguments, or config files in a volume. + An API object used to store non-confidential data in key-value pairs. Can be consumed as environment variables, command-line arguments, or configuraton files in a volume. aka: tags: - core-object --- - An API object used to store non-confidential data in key-value pairs. Can be consumed as environment variables, command-line arguments, or config files in a {{< glossary_tooltip text="volume" term_id="volume" >}}. + An API object used to store non-confidential data in key-value pairs. +{{< glossary_tooltip text="Pods" term_id="pod" >}} can consume ConfigMaps as +environment variables, command-line arguments, or as configuration files in a +{{< glossary_tooltip text="volume" term_id="volume" >}}. -Allows you to decouple environment-specific configuration from your {{< glossary_tooltip text="container images" term_id="container" >}}, so that your applications are easily portable. When storing confidential data use a [Secret](/docs/concepts/configuration/secret/). +A ConfigMap allows you to decouple environment-specific configuration from your {{< glossary_tooltip text="container images" term_id="image" >}}, so that your applications are easily portable. diff --git a/content/en/docs/reference/glossary/container-runtime.md b/content/en/docs/reference/glossary/container-runtime.md index c45bed0f7b303..89bd4ae7ad2d5 100644 --- a/content/en/docs/reference/glossary/container-runtime.md +++ b/content/en/docs/reference/glossary/container-runtime.md @@ -2,7 +2,7 @@ title: Container Runtime id: container-runtime date: 2019-06-05 -full_link: /docs/reference/generated/container-runtime +full_link: /docs/setup/production-environment/container-runtimes short_description: > The container runtime is the software that is responsible for running containers. diff --git a/content/en/docs/reference/glossary/control-plane.md b/content/en/docs/reference/glossary/control-plane.md index c314906de321f..8dadd22037d5e 100644 --- a/content/en/docs/reference/glossary/control-plane.md +++ b/content/en/docs/reference/glossary/control-plane.md @@ -11,3 +11,15 @@ tags: - fundamental --- The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. 
+ + + + This layer is composed by many different components, such as (but not restricted to): + + * {{< glossary_tooltip text="etcd" term_id="etcd" >}} + * {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} + * {{< glossary_tooltip text="Scheduler" term_id="kube-scheduler" >}} + * {{< glossary_tooltip text="Controller Manager" term_id="kube-controller-manager" >}} + * {{< glossary_tooltip text="Cloud Controller Manager" term_id="cloud-controller-manager" >}} + + These components can be run as traditional operating system services (daemons) or as containers. The hosts running these components were historically called {{< glossary_tooltip text="masters" term_id="master" >}}. \ No newline at end of file diff --git a/content/en/docs/reference/glossary/cronjob.md b/content/en/docs/reference/glossary/cronjob.md index d09dc8e0d4263..abfdfe3705d39 100755 --- a/content/en/docs/reference/glossary/cronjob.md +++ b/content/en/docs/reference/glossary/cronjob.md @@ -4,7 +4,7 @@ id: cronjob date: 2018-04-12 full_link: /docs/concepts/workloads/controllers/cron-jobs/ short_description: > - Manages a [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) that runs on a periodic schedule. + A repeating task (a Job) that runs on a regular schedule. aka: tags: diff --git a/content/en/docs/reference/glossary/deployment.md b/content/en/docs/reference/glossary/deployment.md index b1ea465746151..fd3c3ad710cab 100755 --- a/content/en/docs/reference/glossary/deployment.md +++ b/content/en/docs/reference/glossary/deployment.md @@ -4,7 +4,7 @@ id: deployment date: 2018-04-12 full_link: /docs/concepts/workloads/controllers/deployment/ short_description: > - An API object that manages a replicated application. + Manages a replicated application on your cluster. aka: tags: @@ -12,9 +12,10 @@ tags: - core-object - workload --- - An API object that manages a replicated application. + An API object that manages a replicated application, typically by running Pods with no local state. -Each replica is represented by a {{< glossary_tooltip term_id="pod" >}}, and the Pods are distributed among the {{< glossary_tooltip text="nodes" term_id="node" >}} of a cluster. - +Each replica is represented by a {{< glossary_tooltip term_id="pod" >}}, and the Pods are distributed among the +{{< glossary_tooltip text="nodes" term_id="node" >}} of a cluster. +For workloads that do require local state, consider using a {{< glossary_tooltip term_id="StatefulSet" >}}. diff --git a/content/en/docs/reference/glossary/host-aliases.md b/content/en/docs/reference/glossary/host-aliases.md index 47bd22d433d11..dc848960c35d4 100644 --- a/content/en/docs/reference/glossary/host-aliases.md +++ b/content/en/docs/reference/glossary/host-aliases.md @@ -2,7 +2,7 @@ title: HostAliases id: HostAliases date: 2019-01-31 -full_link: /docs/reference/generated/kubernetes-api/v1.13/#hostalias-v1-core +full_link: /docs/reference/generated/kubernetes-api/{{< param "version" >}}/#hostalias-v1-core short_description: > A HostAliases is a mapping between the IP address and hostname to be injected into a Pod's hosts file. @@ -14,4 +14,4 @@ tags: -[HostAliases](/docs/reference/generated/kubernetes-api/v1.13/#hostalias-v1-corev) is an optional list of hostnames and IP addresses that will be injected into the Pod's hosts file if specified. This is only valid for non-hostNetwork Pods. 
+[HostAliases](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#hostalias-v1-core) is an optional list of hostnames and IP addresses that will be injected into the Pod's hosts file if specified. This is only valid for non-hostNetwork Pods. diff --git a/content/en/docs/reference/glossary/master.md b/content/en/docs/reference/glossary/master.md new file mode 100644 index 0000000000000..64e809d1d9154 --- /dev/null +++ b/content/en/docs/reference/glossary/master.md @@ -0,0 +1,15 @@ +--- +title: Master +id: master +date: 2020-04-16 +short_description: > + Legacy term, used as synonym for nodes running the control plane. + +aka: +tags: +- fundamental +--- + Legacy term, used as synonym for {{< glossary_tooltip text="nodes" term_id="node" >}} hosting the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}. + + +The term is still being used by some provisioning tools, such as {{< glossary_tooltip text="kubeadm" term_id="kubeadm" >}}, and managed services, to {{< glossary_tooltip text="label" term_id="label" >}} {{< glossary_tooltip text="nodes" term_id="node" >}} with `kubernetes.io/role` and control placement of {{< glossary_tooltip text="control plane" term_id="control-plane" >}} {{< glossary_tooltip text="pods" term_id="pod" >}}. \ No newline at end of file diff --git a/content/en/docs/reference/glossary/pod.md b/content/en/docs/reference/glossary/pod.md index fcd62f44239b2..f14393072c559 100755 --- a/content/en/docs/reference/glossary/pod.md +++ b/content/en/docs/reference/glossary/pod.md @@ -4,7 +4,7 @@ id: pod date: 2018-04-12 full_link: /docs/concepts/workloads/pods/pod-overview/ short_description: > - The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster. + A Pod represents a set of running containers in your cluster. aka: tags: diff --git a/content/en/docs/reference/glossary/service-account.md b/content/en/docs/reference/glossary/service-account.md index aee24891c8b0f..7985f672fbacc 100755 --- a/content/en/docs/reference/glossary/service-account.md +++ b/content/en/docs/reference/glossary/service-account.md @@ -1,5 +1,5 @@ --- -title: Service Account +title: ServiceAccount id: service-account date: 2018-04-12 full_link: /docs/tasks/configure-pod-container/configure-service-account/ @@ -16,4 +16,3 @@ tags: When processes inside Pods access the cluster, they are authenticated by the API server as a particular service account, for example, `default`. When you create a Pod, if you do not specify a service account, it is automatically assigned the default service account in the same {{< glossary_tooltip text="Namespace" term_id="namespace" >}}. - diff --git a/content/en/docs/reference/glossary/statefulset.md b/content/en/docs/reference/glossary/statefulset.md index 6790d5fbb282d..be1b334cec3c1 100755 --- a/content/en/docs/reference/glossary/statefulset.md +++ b/content/en/docs/reference/glossary/statefulset.md @@ -4,7 +4,7 @@ id: statefulset date: 2018-04-12 full_link: /docs/concepts/workloads/controllers/statefulset/ short_description: > - Manages the deployment and scaling of a set of Pods, *and provides guarantees about the ordering and uniqueness* of these Pods. + Manages deployment and scaling of a set of Pods, with durable storage and persistent identifiers for each Pod. aka: tags: @@ -18,3 +18,5 @@ tags: Like a {{< glossary_tooltip term_id="deployment" >}}, a StatefulSet manages Pods that are based on an identical container spec. 
Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling. + +If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed. diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index 5e10e88ece1b6..dbf0e2dc63121 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -68,7 +68,7 @@ kubectl config get-contexts # display list of contexts kubectl config current-context # display the current-context kubectl config use-context my-cluster-name # set the default context to my-cluster-name -# add a new cluster to your kubeconf that supports basic auth +# add a new user to your kubeconf that supports basic auth kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword # permanently save the namespace for all subsequent kubectl commands in that context. @@ -351,6 +351,21 @@ Output format | Description `-o=wide` | Output in the plain-text format with any additional information, and for pods, the node name is included `-o=yaml` | Output a YAML formatted API object +Examples using `-o=custom-columns`: + +```bash +# All images running in a cluster +kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image' + + # All images excluding "k8s.gcr.io/coredns:1.6.2" +kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image' + +# All fields under metadata regardless of name +kubectl get pods -A -o=custom-columns='DATA:metadata.*' +``` + +More examples in the kubectl [reference documentation](/docs/reference/kubectl/overview/#custom-columns). + ### Kubectl output verbosity and debugging Kubectl verbosity is controlled with the `-v` or `--v` flags followed by an integer representing the log level. General Kubernetes logging conventions and the associated log levels are described [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md). diff --git a/content/en/docs/reference/kubectl/conventions.md b/content/en/docs/reference/kubectl/conventions.md index bfa40b3be2d43..c4bdd59ec5f3e 100644 --- a/content/en/docs/reference/kubectl/conventions.md +++ b/content/en/docs/reference/kubectl/conventions.md @@ -17,7 +17,6 @@ For a stable output in a script: * Request one of the machine-oriented output forms, such as `-o name`, `-o json`, `-o yaml`, `-o go-template`, or `-o jsonpath`. * Fully-qualify the version. For example, `jobs.v1.batch/myjob`. This will ensure that kubectl does not use its default version that can change over time. -* Specify the `--generator` flag to pin to a specific behavior when you use generator-based commands such as `kubectl run` or `kubectl expose`. * Don't rely on context, preferences, or other implicit states. ## Best Practices @@ -27,48 +26,34 @@ For a stable output in a script: For `kubectl run` to satisfy infrastructure as code: * Tag the image with a version-specific tag and don't move that tag to a new version. 
For example, use `:v1234`, `v1.2.3`, `r03062016-1-4`, rather than `:latest` (For more information, see [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images)). -* Capture the parameters in a checked-in script, or at least use `--record` to annotate the created objects with the command line for an image that is lightly parameterized. -* Pin to a specific [generator](#generators) version, such as `kubectl run --generator=run-pod/v1`. * Check in the script for an image that is heavily parameterized. * Switch to configuration files checked into source control for features that are needed, but not expressible via `kubectl run` flags. -#### Generators - -You can create the following resources using `kubectl run` with the `--generator` flag: - -{{< table caption="Resources you can create using kubectl run" >}} -| Resource | API group | kubectl command | -|--------------------------------------|--------------------|---------------------------------------------------| -| Pod | v1 | `kubectl run --generator=run-pod/v1` | -| ReplicationController _(deprecated)_ | v1 | `kubectl run --generator=run/v1` | -| Deployment _(deprecated)_ | extensions/v1beta1 | `kubectl run --generator=deployment/v1beta1` | -| Deployment _(deprecated)_ | apps/v1beta1 | `kubectl run --generator=deployment/apps.v1beta1` | -| Job _(deprecated)_ | batch/v1 | `kubectl run --generator=job/v1` | -| CronJob _(deprecated)_ | batch/v2alpha1 | `kubectl run --generator=cronjob/v2alpha1` | -| CronJob _(deprecated)_ | batch/v1beta1 | `kubectl run --generator=cronjob/v1beta1` | -{{< /table >}} +You can use the `--dry-run` flag to preview the object that would be sent to your cluster, without really submitting it. {{< note >}} -Generators other than `run-pod/v1` are deprecated. +All `kubectl` generators are deprecated. See the Kubernetes v1.17 documentation for a [list](https://v1-17.docs.kubernetes.io/docs/reference/kubectl/conventions/#generators) of generators and how they were used. {{< /note >}} -If you explicitly set `--generator`, kubectl uses the generator you specified. If you invoke `kubectl run` and don't specify a generator, kubectl automatically selects which generator to use based on the other flags you set. The following table lists flags and the generators that are activated if you didn't specify one yourself: - -{{< table caption="kubectl run flags and the resource they imply" >}} -| Flag | Generated Resource | -|-------------------------|-----------------------| -| `--schedule=` | CronJob | -| `--restart=Always` | Deployment | -| `--restart=OnFailure` | Job | -| `--restart=Never` | Pod | -{{< /table >}} - -If you don't specify a generator, kubectl pays attention to other flags in the following order: - -1. `--schedule` -1. `--restart` - -You can use the `--dry-run` flag to preview the object that would be sent to your cluster, without really submitting it. +#### Generators +You can generate the following resources with a kubectl command, `kubectl create --dry-run -o yaml`: +``` + clusterrole Create a ClusterRole. + clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole. + configmap Create a configmap from a local file, directory or literal value. + cronjob Create a cronjob with the specified name. + deployment Create a deployment with the specified name. + job Create a job with the specified name. + namespace Create a namespace with the specified name. + poddisruptionbudget Create a pod disruption budget with the specified name. 
+ priorityclass Create a priorityclass with the specified name. + quota Create a quota with the specified name. + role Create a role with single rule. + rolebinding Create a RoleBinding for a particular Role or ClusterRole. + secret Create a secret using specified subcommand. + service Create a service using specified subcommand. + serviceaccount Create a service account with the specified name. +``` ### `kubectl apply` diff --git a/content/en/docs/reference/kubectl/kubectl.md b/content/en/docs/reference/kubectl/kubectl.md index 75ddc04715d59..6342de0008d5c 100644 --- a/content/en/docs/reference/kubectl/kubectl.md +++ b/content/en/docs/reference/kubectl/kubectl.md @@ -1,7 +1,7 @@ --- title: kubectl content_template: templates/tool-reference -weight: 28 +weight: 30 --- {{% capture synopsis %}} @@ -19,504 +19,504 @@ kubectl [flags] {{% capture options %}} - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--add-dir-header
If true, adds the file directory to the header
--alsologtostderr
log to standard error as well as files
--application-metrics-count-limit int     Default: 100
Max number of application metrics to store (per container)
--as string
Username to impersonate for the operation
--as-group stringArray
Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--azure-container-registry-config string
Path to the file containing Azure container registry configuration information.
--boot-id-file string     Default: "/proc/sys/kernel/random/boot_id"
Comma-separated list of files to check for boot-id. Use the first one that exists.
--cache-dir string     Default: "$HOME/.kube/http-cache"
Default HTTP cache directory
--certificate-authority string
Path to a cert file for the certificate authority
--client-certificate string
Path to a client certificate file for TLS
--client-key string
Path to a client key file for TLS
--cloud-provider-gce-l7lb-src-cidrs cidrs     Default: 130.211.0.0/22,35.191.0.0/16
CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks
--cloud-provider-gce-lb-src-cidrs cidrs     Default: 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
CIDRs opened in GCE firewall for L4 LB traffic proxy & health checks
--cluster string
The name of the kubeconfig cluster to use
--container-hints string     Default: "/etc/cadvisor/container_hints.json"
location of the container hints file
--containerd string     Default: "/run/containerd/containerd.sock"
containerd endpoint
--containerd-namespace string     Default: "k8s.io"
containerd namespace
--context string
The name of the kubeconfig context to use
--default-not-ready-toleration-seconds int     Default: 300
Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-unreachable-toleration-seconds int     Default: 300
Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
--disable-root-cgroup-stats
Disable collecting root Cgroup stats
--docker string     Default: "unix:///var/run/docker.sock"
docker endpoint
--docker-env-metadata-whitelist string
a comma-separated list of environment variable keys that need to be collected for docker containers
--docker-only
Only report docker containers in addition to root stats
--docker-root string     Default: "/var/lib/docker"
DEPRECATED: docker root is read from docker info (this is a fallback, default: /var/lib/docker)
--docker-tls
use TLS to connect to docker
--docker-tls-ca string     Default: "ca.pem"
path to trusted CA
--docker-tls-cert string     Default: "cert.pem"
path to client certificate
--docker-tls-key string     Default: "key.pem"
path to private key
--enable-load-reader
Whether to enable cpu load reader
--event-storage-age-limit string     Default: "default=0"
Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is a duration. Default is applied to all non-specified event types
--event-storage-event-limit string     Default: "default=0"
Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is an integer. Default is applied to all non-specified event types
--global-housekeeping-interval duration     Default: 1m0s
Interval between global housekeepings
-h, --help
help for kubectl
--housekeeping-interval duration     Default: 10s
Interval between container housekeepings
--insecure-skip-tls-verify
If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kubeconfig string
Path to the kubeconfig file to use for CLI requests.
--log-backtrace-at traceLocation     Default: :0
when logging hits line file:N, emit a stack trace
--log-cadvisor-usage
Whether to log the usage of the cAdvisor container
--log-dir string
If non-empty, write log files in this directory
--log-file string
If non-empty, use this log file
--log-file-max-size uint     Default: 1800
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited.
--log-flush-frequency duration     Default: 5s
Maximum number of seconds between log flushes
--logtostderr     Default: true
log to standard error instead of files
--machine-id-file string     Default: "/etc/machine-id,/var/lib/dbus/machine-id"
Comma-separated list of files to check for machine-id. Use the first one that exists.
--match-server-version
Require server version to match client version
-n, --namespace string
If present, the namespace scope for this CLI request
--password string
Password for basic authentication to the API server
--profile string     Default: "none"
Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)
--profile-output string     Default: "profile.pprof"
Name of the file to write the profile to
--request-timeout string     Default: "0"
The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
-s, --server string
The address and port of the Kubernetes API server
--skip-headers
If true, avoid header prefixes in the log messages
--skip-log-headers
If true, avoid headers when opening log files
--stderrthreshold severity     Default: 2
logs at or above this threshold go to stderr
--storage-driver-buffer-duration duration     Default: 1m0s
Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction
--storage-driver-db string     Default: "cadvisor"
database name
--storage-driver-host string     Default: "localhost:8086"
database host:port
--storage-driver-password string     Default: "root"
database password
--storage-driver-secure
use secure connection with database
--storage-driver-table string     Default: "stats"
table name
--storage-driver-user string     Default: "root"
database username
--tls-server-name string
Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
--token string
Bearer token for authentication to the API server
--update-machine-info-interval duration     Default: 5m0s
Interval between machine info updates.
--user string
The name of the kubeconfig user to use
--username string
Username for basic authentication to the API server
-v, --v Level
number for the log level verbosity
--version version[=true]
Print version information and quit
--vmodule moduleSpec
comma-separated list of pattern=N settings for file-filtered logging
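These global flags can be combined on a single invocation. The sketch below is illustrative only; the kubeconfig path, context name, and namespace are placeholder values, and every flag used appears in the list above:

```bash
# Query pods in a specific namespace of a specific cluster, with an
# explicit kubeconfig, a client-side timeout, and extra log verbosity.
kubectl --kubeconfig="$HOME/.kube/config" \
        --context=my-cluster-name \
        --namespace=kube-system \
        --request-timeout=30s \
        -v=2 \
        get pods
```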
diff --git a/content/en/docs/reference/scheduling/policies.md b/content/en/docs/reference/scheduling/policies.md index 23d0bc915efc9..53e5981deae2f 100644 --- a/content/en/docs/reference/scheduling/policies.md +++ b/content/en/docs/reference/scheduling/policies.md @@ -8,7 +8,7 @@ weight: 10 A scheduling Policy can be used to specify the *predicates* and *priorities* that the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} -runs to [filter and score nodes](/docs/concepts/scheduling/kube-scheduler/#kube-scheduler-implementation), +runs to [filter and score nodes](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation), respectively. You can set a scheduling policy by running @@ -120,6 +120,6 @@ The following *priorities* implement scoring: {{% /capture %}} {{% capture whatsnext %}} -* Learn about [scheduling](/docs/concepts/scheduling/kube-scheduler/) +* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/) * Learn about [kube-scheduler profiles](/docs/reference/scheduling/profiles/) {{% /capture %}} diff --git a/content/en/docs/reference/scheduling/profiles.md b/content/en/docs/reference/scheduling/profiles.md index f5595f8480bdf..b8ab40c3fefa7 100644 --- a/content/en/docs/reference/scheduling/profiles.md +++ b/content/en/docs/reference/scheduling/profiles.md @@ -177,5 +177,5 @@ only has one pending pods queue. {{% /capture %}} {{% capture whatsnext %}} -* Learn about [scheduling](/docs/concepts/scheduling/kube-scheduler/) +* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/) {{% /capture %}} diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm.md index 65538640fd467..ed03bf49c45d2 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm.md @@ -36,28 +36,28 @@ Example usage: ### Options - - - - - - - - - - - - - - - - - - - - - - +
-h, --help
help for kubeadm
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha.md index d6cff46aa2706..95b034be1bb44 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha.md @@ -6,42 +6,42 @@ Kubeadm experimental sub-commands ### Options - - - - - - - - - - - - - - - +
-h, --help
help for alpha
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs.md index 8926adde83e02..fef772e702650 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs.md @@ -6,42 +6,42 @@ Commands related to handling kubernetes certificates ### Options - - - - - - - - - - - - - - - +
-h, --help
help for certs
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_certificate-key.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_certificate-key.md index da61320407965..534851d42ba7f 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_certificate-key.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_certificate-key.md @@ -16,42 +16,42 @@ kubeadm alpha certs certificate-key [flags] ### Options - - - - - - - - - - - - - - - +
-h, --help
help for certificate-key
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_check-expiration.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_check-expiration.md index ac11e52444d19..5f51836f25a2b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_check-expiration.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_check-expiration.md @@ -10,63 +10,63 @@ kubeadm alpha certs check-expiration [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
-h, --help
help for check-expiration
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
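For example, a minimal sketch of checking expiry with the defaults spelled out explicitly (both values match the defaults listed above):

```bash
# Show the expiration date of every certificate managed by kubeadm.
kubeadm alpha certs check-expiration \
  --cert-dir /etc/kubernetes/pki \
  --kubeconfig /etc/kubernetes/admin.conf
```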
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew.md index eaef41388b129..e0bfc54c5b4e9 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew.md @@ -10,42 +10,42 @@ kubeadm alpha certs renew [flags] ### Options - - - - - - - - - - - - - - - +
-h, --help
help for renew
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md index bed37769d03e0..2c342c8bc6716 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew admin.conf [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for admin.conf
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
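As an illustrative sketch of the CSR workflow these flags enable (the output directory is a placeholder, not a kubeadm default):

```bash
# Instead of renewing admin.conf directly, emit a CSR and private key
# so the certificate can be signed by an external CA.
kubeadm alpha certs renew admin.conf \
  --csr-only \
  --csr-dir /tmp/kubeadm-csrs
```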
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md index be586b8e4b2ab..979bd4f5bc5ca 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md @@ -10,77 +10,77 @@ kubeadm alpha certs renew all [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for all
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
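A hedged example of renewing everything in one pass and then verifying the result; both commands use only flags documented on these pages:

```bash
# Renew all kubeadm-managed certificates in place, then re-check expiry.
kubeadm alpha certs renew all --cert-dir /etc/kubernetes/pki
kubeadm alpha certs check-expiration
```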
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md index 33113474a3968..9414ea2087cb9 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew apiserver-etcd-client [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for apiserver-etcd-client
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md index 5123a9a0e10c8..f945da440e176 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew apiserver-kubelet-client [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for apiserver-kubelet-client
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md index 7dda656795560..afbb0f97c4c1c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew apiserver [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for apiserver
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md index 9e33b47bc45ba..24792208313bd 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew controller-manager.conf [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for controller-manager.conf
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md index 12c57913dcad2..6076f031d56e4 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew etcd-healthcheck-client [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for etcd-healthcheck-client
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md index 3fa0f3fd52946..c19189fc86e1c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew etcd-peer [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for etcd-peer
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md index 3484542725b4f..8ba3e0f4a80c6 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew etcd-server [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for etcd-server
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md index 1bfc2f1d312fb..c592d5ea91c19 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew front-proxy-client [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for front-proxy-client
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md index 77537a7452548..3f3b6ca76f6a6 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md @@ -16,77 +16,77 @@ kubeadm alpha certs renew scheduler.conf [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save the certificates
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for scheduler.conf
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig.md index 7d7922e037f98..67f30bc3f8390 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig.md @@ -8,42 +8,42 @@ Alpha Disclaimer: this command is currently alpha. ### Options - - - - - - - - - - - - - - - +
-h, --help
help for kubeconfig
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig_user.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig_user.md index f39fa93729580..bf4c53ed0e95e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig_user.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubeconfig_user.md @@ -19,84 +19,84 @@ kubeadm alpha kubeconfig user [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--apiserver-advertise-address string
The IP address the API server is accessible on
--apiserver-bind-port int32     Default: 6443
The port the API server is accessible on
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where certificates are stored
--client-name string
The name of the user. It will be used as the CN if client certificates are created
-h, --help
help for user
--org stringSlice
The organizations of the client certificate. It will be used as the O if client certificates are created
--token string
The token that should be used as the authentication mechanism for this kubeconfig, instead of client certificates
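For instance, a sketch that writes a kubeconfig for a hypothetical extra user; the user and organization names are placeholders:

```bash
# Generate a kubeconfig for user "johndoe"; the client certificate gets
# CN=johndoe (from --client-name) and O=developers (from --org).
kubeadm alpha kubeconfig user \
  --client-name johndoe \
  --org developers > johndoe.kubeconfig
```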
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet.md index 7335ba5f11b27..055c8ecac5ed7 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet.md @@ -6,42 +6,42 @@ This command is not meant to be run on its own. See list of available subcommand ### Options - - - - - - - - - - - - - - - +
-h, --help
help for kubelet
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config.md index 570e0064b6fdb..563d9fe2277da 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config.md @@ -6,42 +6,42 @@ This command is not meant to be run on its own. See list of available subcommand ### Options - - - - - - - - - - - - - - - +
-h, --help
help for config
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_download.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_download.md deleted file mode 100644 index a6e3b6f969391..0000000000000 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_download.md +++ /dev/null @@ -1,78 +0,0 @@ - -### Synopsis - - -Download the kubelet configuration from a ConfigMap of the form "kubelet-config-1.X" in the cluster, where X is the minor version of the kubelet. Either kubeadm autodetects the kubelet version by exec-ing "kubelet --version" or respects the --kubelet-version parameter. - -Alpha Disclaimer: this command is currently alpha. - -``` -kubeadm alpha kubelet config download [flags] -``` - -### Examples - -``` - # Download the kubelet configuration from the ConfigMap in the cluster. Autodetect the kubelet version. - kubeadm alpha phase kubelet config download - - # Download the kubelet configuration from the ConfigMap in the cluster. Use a specific desired kubelet version. - kubeadm alpha phase kubelet config download --kubelet-version 1.17.0 -``` - -### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-h, --help
help for download
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--kubelet-version string
The desired version for the kubelet. Defaults to being autodetected from 'kubelet --version'.
- - - -### Options inherited from parent commands - - - - - - - - - - - - - - - - -
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
- - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md index 88fb003f6833e..278def1dd336b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md @@ -24,63 +24,63 @@ kubeadm alpha kubelet config enable-dynamic [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-h, --help
help for enable-dynamic
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--kubelet-version string
The desired version for the kubelet
--node-name string
Name of the node that should enable the dynamic kubelet configuration
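A minimal sketch, assuming a node named `node-1` and a kubelet version that matches the control plane (both values are placeholders):

```bash
# Turn on dynamic kubelet configuration for a single node.
kubeadm alpha kubelet config enable-dynamic \
  --node-name node-1 \
  --kubelet-version v1.17.0 \
  --kubeconfig /etc/kubernetes/admin.conf
```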
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting.md index 714ab0317ea77..77646b064b1ab 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting.md @@ -6,42 +6,42 @@ This command is not meant to be run on its own. See list of available subcommand ### Options - - - - - - - - - - - - - - - +
-h, --help
help for selfhosting
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting_pivot.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting_pivot.md
index c38be7005ec97..554b8fe4c65fd 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting_pivot.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_selfhosting_pivot.md
@@ -22,77 +22,77 @@ kubeadm alpha selfhosting pivot [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where certificates are stored |
| `--config string` | Path to a kubeadm configuration file. |
| `-f, --force` | Pivot the cluster without prompting for confirmation |
| `-h, --help` | help for pivot |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `-s, --store-certs-in-secrets` | Enable storing certs in secrets |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

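As a rough sketch of combining the flags above, a pivot that stores certificates in Secrets and skips the confirmation prompt might be invoked like this (paths shown are the documented defaults):

```bash
# Illustrative sketch: pivot to a self-hosted control plane,
# keeping certificates in Secrets and skipping the confirmation prompt.
kubeadm alpha selfhosting pivot \
  --store-certs-in-secrets \
  --force \
  --cert-dir /etc/kubernetes/pki \
  --kubeconfig /etc/kubernetes/admin.conf
```
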
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_completion.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_completion.md
index ae1d0f13ff5de..f5a69d79fdacf 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_completion.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_completion.md
@@ -48,42 +48,42 @@ source <(kubeadm completion zsh)

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for completion |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

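To show the command in context, completion can be loaded for the current shell or persisted; the destination path below is only an example and varies by distribution:

```bash
# Load completion for the current bash session.
source <(kubeadm completion bash)

# Or persist it (example path; adjust for your system).
kubeadm completion bash > /etc/bash_completion.d/kubeadm
```
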
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config.md
index d05d751e33fa2..b39cdd7a0d24a 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config.md
@@ -15,49 +15,49 @@ kubeadm config [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for config |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images.md
index bd99ca50ab40f..436f3c3c7e303 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images.md
@@ -10,49 +10,49 @@ kubeadm config images [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for images |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md
index c0b924e5d9a04..18371394ac77d 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md
@@ -10,91 +10,91 @@ kubeadm config images list [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--allow-missing-template-keys` Default: true | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. |
| `--config string` | Path to a kubeadm configuration file. |
| `-o, --experimental-output string` Default: "text" | Output format. One of: text\|json\|yaml\|go-template\|go-template-file\|template\|templatefile\|jsonpath\|jsonpath-file. |
| `--feature-gates string` | A set of key=value pairs that describe feature gates for various features. Options are:<br>IPv6DualStack=true\|false (ALPHA - default=false)<br>PublicKeysECDSA=true\|false (ALPHA - default=false) |
| `-h, --help` | help for list |
| `--image-repository string` Default: "k8s.gcr.io" | Choose a container registry to pull control plane images from |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

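A quick sketch of combining the output flags above; the version shown is an example, not a default introduced here:

```bash
# Print the control-plane images for a specific Kubernetes version as JSON.
kubeadm config images list \
  --kubernetes-version v1.17.0 \
  -o json
```
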
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md
index 2a03893d45a40..d2f5961f85946 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md
@@ -10,84 +10,84 @@ kubeadm config images pull [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--config string` | Path to a kubeadm configuration file. |
| `--cri-socket string` | Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket. |
| `--feature-gates string` | A set of key=value pairs that describe feature gates for various features. Options are:<br>IPv6DualStack=true\|false (ALPHA - default=false)<br>PublicKeysECDSA=true\|false (ALPHA - default=false) |
| `-h, --help` | help for pull |
| `--image-repository string` Default: "k8s.gcr.io" | Choose a container registry to pull control plane images from |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

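For example, pre-pulling images from a private mirror could look roughly like this; the registry name and version are placeholders:

```bash
# Pre-pull control-plane images from a private mirror (placeholder registry).
kubeadm config images pull \
  --image-repository registry.example.com/kubernetes \
  --kubernetes-version v1.17.0
```
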
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_migrate.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_migrate.md
index 1d47a3e26cf64..d07ffe8677493 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_migrate.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_migrate.md
@@ -23,63 +23,63 @@ kubeadm config migrate [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for migrate |
| `--new-config string` | Path to the resulting equivalent kubeadm config file using the new API version. Optional, if not specified output will be sent to STDOUT. |
| `--old-config string` | Path to the kubeadm config file that is using an old API version and should be converted. This flag is mandatory. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

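A minimal sketch of a migration run, assuming old.yaml holds a configuration written against an older API version (both file names are placeholders):

```bash
# Convert an old kubeadm config to the currently supported API version.
# --old-config is mandatory; without --new-config the result goes to STDOUT.
kubeadm config migrate --old-config old.yaml --new-config new.yaml
```
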
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print.md
index d09d51cec6bdf..c6e1ea2173ed7 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print.md
@@ -12,49 +12,49 @@ kubeadm config print [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for print |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_init-defaults.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_init-defaults.md
index 0c0adab90f098..adc76ee41cb7c 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_init-defaults.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_init-defaults.md
@@ -15,56 +15,56 @@ kubeadm config print init-defaults [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--component-configs stringSlice` | A comma-separated list for component config API objects to print the default values for. Available values: [KubeProxyConfiguration KubeletConfiguration]. If this flag is not set, no component configs will be printed. |
| `-h, --help` | help for init-defaults |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

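For instance, the defaults plus the kubelet component config can be captured as a starting point for a cluster configuration file; the output file name is a placeholder:

```bash
# Print init defaults including the KubeletConfiguration component config
# and save them as a starting point for editing.
kubeadm config print init-defaults \
  --component-configs KubeletConfiguration > kubeadm-init.yaml
```
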
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md
index f2577cd74a1d5..b1c976c663fa6 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_print_join-defaults.md
@@ -15,56 +15,56 @@ kubeadm config print join-defaults [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--component-configs stringSlice` | A comma-separated list for component config API objects to print the default values for. Available values: [KubeProxyConfiguration KubeletConfiguration]. If this flag is not set, no component configs will be printed. |
| `-h, --help` | help for join-defaults |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_view.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_view.md
index c5e41d29bca97..c3a3137105dfc 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_view.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_view.md
@@ -14,49 +14,49 @@ kubeadm config view [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for view |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md
index d19bb01a99c18..508d4a816d277 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md
@@ -51,210 +51,210 @@ kubeadm init [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--apiserver-advertise-address string` | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used. |
| `--apiserver-bind-port int32` Default: 6443 | Port for the API Server to bind to. |
| `--apiserver-cert-extra-sans stringSlice` | Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names. |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--certificate-key string` | Key used to encrypt the control-plane certificates in the kubeadm-certs Secret. |
| `--config string` | Path to a kubeadm configuration file. |
| `--control-plane-endpoint string` | Specify a stable IP address or DNS name for the control plane. |
| `--cri-socket string` | Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket. |
| `--dry-run` | Don't apply any changes; just output what would be done. |
| `-k, --experimental-kustomize string` | The path where kustomize patches for static pod manifests are stored. |
| `--feature-gates string` | A set of key=value pairs that describe feature gates for various features. Options are:<br>IPv6DualStack=true\|false (ALPHA - default=false)<br>PublicKeysECDSA=true\|false (ALPHA - default=false) |
| `-h, --help` | help for init |
| `--ignore-preflight-errors stringSlice` | A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks. |
| `--image-repository string` Default: "k8s.gcr.io" | Choose a container registry to pull control plane images from |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |
| `--node-name string` | Specify the node name. |
| `--pod-network-cidr string` | Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node. |
| `--service-cidr string` Default: "10.96.0.0/12" | Use alternative range of IP address for service VIPs. |
| `--service-dns-domain string` Default: "cluster.local" | Use alternative domain for services, e.g. "myorg.internal". |
| `--skip-certificate-key-print` | Don't print the key used to encrypt the control-plane certificates. |
| `--skip-phases stringSlice` | List of phases to be skipped |
| `--skip-token-print` | Skip printing of the default bootstrap token generated by 'kubeadm init'. |
| `--token string` | The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef |
| `--token-ttl duration` Default: 24h0m0s | The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire |
| `--upload-certs` | Upload control-plane certificates to the kubeadm-certs Secret. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

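To show how several of these flags are typically combined, a hedged example of bootstrapping the first control-plane node; the endpoint and pod CIDR are placeholders chosen for illustration, not recommendations:

```bash
# Illustrative first control-plane node. The endpoint and pod CIDR are
# placeholders; pick values that match your environment and CNI plugin.
kubeadm init \
  --control-plane-endpoint k8s-api.example.com:6443 \
  --pod-network-cidr 192.168.0.0/16 \
  --upload-certs
```
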
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase.md
index 4930435d590a9..2db3ea5e54aee 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase.md
@@ -6,42 +6,42 @@ Use this command to invoke single phase of the init workflow

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for phase |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md
index 24046272878d5..67b9c3af7598a 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon.md
@@ -10,42 +10,42 @@ kubeadm init phase addon [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for addon |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md
index ff285596d5204..103dd7e7c5e74 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md
@@ -10,119 +10,119 @@ kubeadm init phase addon all [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--apiserver-advertise-address string` | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used. |
| `--apiserver-bind-port int32` Default: 6443 | Port for the API Server to bind to. |
| `--config string` | Path to a kubeadm configuration file. |
| `--control-plane-endpoint string` | Specify a stable IP address or DNS name for the control plane. |
| `--feature-gates string` | A set of key=value pairs that describe feature gates for various features. Options are:<br>IPv6DualStack=true\|false (ALPHA - default=false)<br>PublicKeysECDSA=true\|false (ALPHA - default=false) |
| `-h, --help` | help for all |
| `--image-repository string` Default: "k8s.gcr.io" | Choose a container registry to pull control plane images from |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |
| `--pod-network-cidr string` | Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node. |
| `--service-cidr string` Default: "10.96.0.0/12" | Use alternative range of IP address for service VIPs. |
| `--service-dns-domain string` Default: "cluster.local" | Use alternative domain for services, e.g. "myorg.internal". |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

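For example, re-running only the addon phase against an existing cluster might look like this; the config file name is a placeholder:

```bash
# Install or refresh both addons (CoreDNS and kube-proxy) from a saved config.
kubeadm init phase addon all --config kubeadm-config.yaml
```
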
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md
index 40bc2e8101724..3eebcb828bf60 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md
@@ -10,91 +10,91 @@ kubeadm init phase addon coredns [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--config string` | Path to a kubeadm configuration file. |
| `--feature-gates string` | A set of key=value pairs that describe feature gates for various features. Options are:<br>IPv6DualStack=true\|false (ALPHA - default=false)<br>PublicKeysECDSA=true\|false (ALPHA - default=false) |
| `-h, --help` | help for coredns |
| `--image-repository string` Default: "k8s.gcr.io" | Choose a container registry to pull control plane images from |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |
| `--service-cidr string` Default: "10.96.0.0/12" | Use alternative range of IP address for service VIPs. |
| `--service-dns-domain string` Default: "cluster.local" | Use alternative domain for services, e.g. "myorg.internal". |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md
index 564c20d4779f0..78140e94e80ec 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy.md
@@ -10,98 +10,98 @@ kubeadm init phase addon kube-proxy [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--apiserver-advertise-address string` | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used. |
| `--apiserver-bind-port int32` Default: 6443 | Port for the API Server to bind to. |
| `--config string` | Path to a kubeadm configuration file. |
| `--control-plane-endpoint string` | Specify a stable IP address or DNS name for the control plane. |
| `-h, --help` | help for kube-proxy |
| `--image-repository string` Default: "k8s.gcr.io" | Choose a container registry to pull control plane images from |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |
| `--pod-network-cidr string` | Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md
index f7d4332148b8c..123ab38fdc843 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_bootstrap-token.md
@@ -20,63 +20,63 @@ kubeadm init phase bootstrap-token [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--config string` | Path to a kubeadm configuration file. |
| `-h, --help` | help for bootstrap-token |
| `--kubeconfig string` Default: "/etc/kubernetes/admin.conf" | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--skip-token-print` | Skip printing of the default bootstrap token generated by 'kubeadm init'. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

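A short sketch of running just this phase without echoing the generated token to the terminal; the config file name is a placeholder:

```bash
# Set up bootstrap token configuration, but don't print the token itself.
kubeadm init phase bootstrap-token \
  --config kubeadm-config.yaml \
  --skip-token-print
```
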
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs.md
index a36b12da511b9..28f5acc3e3427 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs.md
@@ -10,42 +10,42 @@ kubeadm init phase certs [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `-h, --help` | help for certs |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md
index a294d58b2b522..7ac391c0784ff 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_all.md
@@ -10,98 +10,98 @@ kubeadm init phase certs all [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--apiserver-advertise-address string` | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used. |
| `--apiserver-cert-extra-sans stringSlice` | Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names. |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--config string` | Path to a kubeadm configuration file. |
| `--control-plane-endpoint string` | Specify a stable IP address or DNS name for the control plane. |
| `-h, --help` | help for all |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |
| `--service-cidr string` Default: "10.96.0.0/12" | Use alternative range of IP address for service VIPs. |
| `--service-dns-domain string` Default: "cluster.local" | Use alternative domain for services, e.g. "myorg.internal". |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

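As a sketch, generating the full certificate set with an extra SAN for a load-balanced API endpoint could look like this; the DNS name is a placeholder and the cert directory is the documented default:

```bash
# Generate all control-plane certificates, adding an extra SAN
# for a load-balanced API endpoint (placeholder DNS name).
kubeadm init phase certs all \
  --apiserver-cert-extra-sans k8s-api.example.com \
  --cert-dir /etc/kubernetes/pki
```
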
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md
index 891d90aff6e71..60f4e0a51c7bc 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-etcd-client.md
@@ -14,77 +14,77 @@ kubeadm init phase certs apiserver-etcd-client [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--config string` | Path to a kubeadm configuration file. |
| `--csr-dir string` | The path to output the CSRs and private keys to |
| `--csr-only` | Create CSRs instead of generating certificates |
| `-h, --help` | help for apiserver-etcd-client |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

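Where an external CA signs the certificates, the CSR-only mode above can be used roughly like this; the output directory is a placeholder:

```bash
# Emit a CSR and private key instead of a signed certificate,
# e.g. when an external CA will do the signing (placeholder directory).
kubeadm init phase certs apiserver-etcd-client \
  --csr-only \
  --csr-dir /etc/kubernetes/pki/csr
```
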
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md
index 3e7523695ace6..f4d1d823f4022 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver-kubelet-client.md
@@ -14,77 +14,77 @@ kubeadm init phase certs apiserver-kubelet-client [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--config string` | Path to a kubeadm configuration file. |
| `--csr-dir string` | The path to output the CSRs and private keys to |
| `--csr-only` | Create CSRs instead of generating certificates |
| `-h, --help` | help for apiserver-kubelet-client |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md
index 3732db3b41107..21ed36eaf25e2 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_apiserver.md
@@ -16,112 +16,112 @@ kubeadm init phase certs apiserver [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--apiserver-advertise-address string` | The IP address the API Server will advertise it's listening on. If not set the default network interface will be used. |
| `--apiserver-cert-extra-sans stringSlice` | Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names. |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--config string` | Path to a kubeadm configuration file. |
| `--control-plane-endpoint string` | Specify a stable IP address or DNS name for the control plane. |
| `--csr-dir string` | The path to output the CSRs and private keys to |
| `--csr-only` | Create CSRs instead of generating certificates |
| `-h, --help` | help for apiserver |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |
| `--service-cidr string` Default: "10.96.0.0/12" | Use alternative range of IP address for service VIPs. |
| `--service-dns-domain string` Default: "cluster.local" | Use alternative domain for services, e.g. "myorg.internal". |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md
index cabc44a009275..81ccc2cbc2e36 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_ca.md
@@ -14,63 +14,63 @@ kubeadm init phase certs ca [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--config string` | Path to a kubeadm configuration file. |
| `-h, --help` | help for ca |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md
index 99c6d6d00445d..17066413ddd10 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-ca.md
@@ -14,63 +14,63 @@ kubeadm init phase certs etcd-ca [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--config string` | Path to a kubeadm configuration file. |
| `-h, --help` | help for etcd-ca |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md
index 7cc480220c7a7..ba2fe8f80ad8e 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-healthcheck-client.md
@@ -14,77 +14,77 @@ kubeadm init phase certs etcd-healthcheck-client [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--config string` | Path to a kubeadm configuration file. |
| `--csr-dir string` | The path to output the CSRs and private keys to |
| `--csr-only` | Create CSRs instead of generating certificates |
| `-h, --help` | help for etcd-healthcheck-client |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md
index 16a6014fd3fc3..a79657acab533 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-peer.md
@@ -16,77 +16,77 @@ kubeadm init phase certs etcd-peer [flags]

### Options

| Option | Description |
| ------ | ----------- |
| `--cert-dir string` Default: "/etc/kubernetes/pki" | The path where to save and store the certificates. |
| `--config string` | Path to a kubeadm configuration file. |
| `--csr-dir string` | The path to output the CSRs and private keys to |
| `--csr-only` | Create CSRs instead of generating certificates |
| `-h, --help` | help for etcd-peer |
| `--kubernetes-version string` Default: "stable-1" | Choose a specific Kubernetes version for the control plane. |

### Options inherited from parent commands

| Option | Description |
| ------ | ----------- |
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md index f252af04d9272..620cbe90d0a63 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_etcd-server.md @@ -16,77 +16,77 @@ kubeadm init phase certs etcd-server [flags] ### Options - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for etcd-server
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md
index 74a46cb2de681..4a05b78d776b6 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-ca.md
@@ -14,63 +14,63 @@ kubeadm init phase certs front-proxy-ca [flags]

### Options

--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
-h, --help
help for front-proxy-ca
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md
index 1b43f2c57d85d..a0d518e5033bb 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_front-proxy-client.md
@@ -14,77 +14,77 @@ kubeadm init phase certs front-proxy-client [flags]

### Options

--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--csr-dir string
The path to output the CSRs and private keys to
--csr-only
Create CSRs instead of generating certificates
-h, --help
help for front-proxy-client
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md
index 767c6fdf5aa2d..8d36df6c52f0d 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_certs_sa.md
@@ -12,49 +12,49 @@ kubeadm init phase certs sa [flags]

### Options

--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
-h, --help
help for sa

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane.md
index 78716275e3e54..2bed8442d3364 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane.md
@@ -10,42 +10,42 @@ kubeadm init phase control-plane [flags]

### Options

-h, --help
help for control-plane

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md
index fa735c27effc9..d61004e75baea 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md
@@ -21,140 +21,140 @@ kubeadm init phase control-plane all [flags]

### Options

--apiserver-advertise-address string
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
Port for the API Server to bind to.
--apiserver-extra-args mapStringString
A set of extra flags to pass to the API Server or override default ones in form of <flagname>=<value>
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--control-plane-endpoint string
Specify a stable IP address or DNS name for the control plane.
--controller-manager-extra-args mapStringString
A set of extra flags to pass to the Controller Manager or override default ones in form of <flagname>=<value>
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
--feature-gates string
A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false)
-h, --help
help for all
--image-repository string     Default: "k8s.gcr.io"
Choose a container registry to pull control plane images from
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.
--pod-network-cidr string
Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
--scheduler-extra-args mapStringString
A set of extra flags to pass to the Scheduler or override default ones in form of <flagname>=<value>
--service-cidr string     Default: "10.96.0.0/12"
Use alternative range of IP address for service VIPs.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

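As a rough sketch of how these flags are typically combined (the address and CIDR values below are placeholders for your own environment, not values taken from this diff):

```bash
# Write static Pod manifests for the API server, controller manager, and scheduler.
kubeadm init phase control-plane all \
  --apiserver-advertise-address 192.0.2.10 \
  --pod-network-cidr 10.244.0.0/16 \
  --kubernetes-version stable-1
```
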
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md
index 06348123864a5..6ef3f23a169e7 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md
@@ -10,119 +10,119 @@ kubeadm init phase control-plane apiserver [flags]

### Options

--apiserver-advertise-address string
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
Port for the API Server to bind to.
--apiserver-extra-args mapStringString
A set of extra flags to pass to the API Server or override default ones in form of <flagname>=<value>
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--control-plane-endpoint string
Specify a stable IP address or DNS name for the control plane.
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
--feature-gates string
A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false)
-h, --help
help for apiserver
--image-repository string     Default: "k8s.gcr.io"
Choose a container registry to pull control plane images from
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.
--service-cidr string     Default: "10.96.0.0/12"
Use alternative range of IP address for service VIPs.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

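A minimal sketch of generating only the kube-apiserver manifest; the endpoint and the chosen feature gate are illustrative, not defaults:

```bash
# Generate only the kube-apiserver static Pod manifest, advertising a stable
# control-plane endpoint and enabling one of the listed alpha feature gates.
kubeadm init phase control-plane apiserver \
  --control-plane-endpoint cluster-endpoint.example.com:6443 \
  --feature-gates IPv6DualStack=true
```
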
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_controller-manager.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_controller-manager.md
index 974043ec9725c..bd5e4c10d2008 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_controller-manager.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_controller-manager.md
@@ -10,91 +10,91 @@ kubeadm init phase control-plane controller-manager [flags]

### Options

--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--controller-manager-extra-args mapStringString
A set of extra flags to pass to the Controller Manager or override default ones in form of <flagname>=<value>
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
-h, --help
help for controller-manager
--image-repository string     Default: "k8s.gcr.io"
Choose a container registry to pull control plane images from
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.
--pod-network-cidr string
Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_scheduler.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_scheduler.md
index 44f7e53eec706..1f001e5450a12 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_scheduler.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_scheduler.md
@@ -10,84 +10,84 @@ kubeadm init phase control-plane scheduler [flags]

### Options

--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
-h, --help
help for scheduler
--image-repository string     Default: "k8s.gcr.io"
Choose a container registry to pull control plane images from
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.
--scheduler-extra-args mapStringString
A set of extra flags to pass to the Scheduler or override default ones in form of <flagname>=<value>

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd.md
index 8ba117a2a1e1f..dc5227a34ca5f 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd.md
@@ -10,42 +10,42 @@ kubeadm init phase etcd [flags]

### Options

-h, --help
help for etcd

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md
index bffd9a0fb3724..e40e896ff1739 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_etcd_local.md
@@ -22,70 +22,70 @@ kubeadm init phase etcd local [flags]

### Options

--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
-h, --help
help for local
--image-repository string     Default: "k8s.gcr.io"
Choose a container registry to pull control plane images from

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

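For reference, a hypothetical invocation that pulls the etcd image from a private mirror rather than the default registry; the registry name is a placeholder:

```bash
# Write the static Pod manifest for a local, single-member etcd instance.
# registry.example.com/k8s is a placeholder for your own mirror.
kubeadm init phase etcd local --image-repository registry.example.com/k8s
```
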
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md
index 86ad05ba71b38..b3a200a22877b 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md
@@ -10,42 +10,42 @@ kubeadm init phase kubeconfig [flags]

### Options

-h, --help
help for kubeconfig

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md
index f138984c30c6c..85885559f761b 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md
@@ -10,91 +10,91 @@ kubeadm init phase kubeconfig admin [flags]

### Options

--apiserver-advertise-address string
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
Port for the API Server to bind to.
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--control-plane-endpoint string
Specify a stable IP address or DNS name for the control plane.
-h, --help
help for admin
--kubeconfig-dir string     Default: "/etc/kubernetes"
The path where to save the kubeconfig file.
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

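A short illustrative example; the endpoint is a placeholder and the directory shown is simply the documented default:

```bash
# Generate the administrator kubeconfig in the default /etc/kubernetes directory.
kubeadm init phase kubeconfig admin \
  --control-plane-endpoint cluster-endpoint.example.com:6443 \
  --kubeconfig-dir /etc/kubernetes
```
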
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md
index a76f5bc232e65..9296e84a199cd 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md
@@ -10,98 +10,98 @@ kubeadm init phase kubeconfig all [flags]

### Options

--apiserver-advertise-address string
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
Port for the API Server to bind to.
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--control-plane-endpoint string
Specify a stable IP address or DNS name for the control plane.
-h, --help
help for all
--kubeconfig-dir string     Default: "/etc/kubernetes"
The path where to save the kubeconfig file.
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.
--node-name string
Specify the node name.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md
index 331073d00937d..295d7e57dc819 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md
@@ -10,91 +10,91 @@ kubeadm init phase kubeconfig controller-manager [flags]

### Options

--apiserver-advertise-address string
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
Port for the API Server to bind to.
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--control-plane-endpoint string
Specify a stable IP address or DNS name for the control plane.
-h, --help
help for controller-manager
--kubeconfig-dir string     Default: "/etc/kubernetes"
The path where to save the kubeconfig file.
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md
index 4805672d744d5..9fd3145290273 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md
@@ -12,98 +12,98 @@ kubeadm init phase kubeconfig kubelet [flags]

### Options

--apiserver-advertise-address string
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
Port for the API Server to bind to.
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--control-plane-endpoint string
Specify a stable IP address or DNS name for the control plane.
-h, --help
help for kubelet
--kubeconfig-dir string     Default: "/etc/kubernetes"
The path where to save the kubeconfig file.
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.
--node-name string
Specify the node name.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

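A sketch of a typical use, assuming a node named node-01 and a documentation-range advertise address (both purely illustrative):

```bash
# Generate the kubeconfig used by the kubelet on one specific node.
kubeadm init phase kubeconfig kubelet \
  --node-name node-01 \
  --apiserver-advertise-address 192.0.2.10
```
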
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md
index e4ee1b3d97a29..c608732717d23 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md
@@ -10,91 +10,91 @@ kubeadm init phase kubeconfig scheduler [flags]

### Options

--apiserver-advertise-address string
The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
Port for the API Server to bind to.
--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
--control-plane-endpoint string
Specify a stable IP address or DNS name for the control plane.
-h, --help
help for scheduler
--kubeconfig-dir string     Default: "/etc/kubernetes"
The path where to save the kubeconfig file.
--kubernetes-version string     Default: "stable-1"
Choose a specific Kubernetes version for the control plane.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize.md
index a44fb0b9317b1..4e5febf638fdf 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize.md
@@ -17,42 +17,42 @@ kubeadm init phase kubelet-finalize [flags]

### Options

-h, --help
help for kubelet-finalize

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md
index 6fa3078aa275b..fce712fc45cf9 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_all.md
@@ -17,56 +17,56 @@ kubeadm init phase kubelet-finalize all [flags]

### Options

--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
-h, --help
help for all

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md
index 7e6273918986e..2ace62929bb8e 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md
@@ -10,56 +10,56 @@ kubeadm init phase kubelet-finalize experimental-cert-rotation [flags]

### Options

--cert-dir string     Default: "/etc/kubernetes/pki"
The path where to save and store the certificates.
--config string
Path to a kubeadm configuration file.
-h, --help
help for experimental-cert-rotation

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md
index 0bba93e4cc917..f9898b58e0efa 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubelet-start.md
@@ -17,63 +17,63 @@ kubeadm init phase kubelet-start [flags]

### Options

--config string
Path to a kubeadm configuration file.
--cri-socket string
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
-h, --help
help for kubelet-start
--node-name string
Specify the node name.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

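For context, an illustrative invocation on a host running containerd; the socket path and node name are assumptions, not values from this page:

```bash
# Write the kubelet settings for this node and start the kubelet,
# pointing kubeadm at an explicit CRI socket.
kubeadm init phase kubelet-start \
  --cri-socket /run/containerd/containerd.sock \
  --node-name node-01
```
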
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md
index 3d29211383ca7..453783db52c13 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_mark-control-plane.md
@@ -20,56 +20,56 @@ kubeadm init phase mark-control-plane [flags]

### Options

--config string
Path to a kubeadm configuration file.
-h, --help
help for mark-control-plane
--node-name string
Specify the node name.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

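A minimal, hypothetical example (the node name is a placeholder):

```bash
# Mark the named node as a control-plane node.
kubeadm init phase mark-control-plane --node-name control-plane-01
```
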
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md
index b18783c39a42a..06d47e861c323 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_preflight.md
@@ -17,56 +17,56 @@ kubeadm init phase preflight [flags]

### Options

--config string
Path to a kubeadm configuration file.
-h, --help
help for preflight
--ignore-preflight-errors stringSlice
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

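The check names below come straight from the flag's own example text; everything else is a plain invocation:

```bash
# Run the init preflight checks, treating failures of the privileged-user
# and swap checks as warnings instead of errors.
kubeadm init phase preflight --ignore-preflight-errors IsPrivilegedUser,Swap
```
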
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md
index 57a89fe136d99..5d35b363768bf 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-certs.md
@@ -10,70 +10,70 @@ kubeadm init phase upload-certs [flags]

### Options

--certificate-key string
Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
--config string
Path to a kubeadm configuration file.
-h, --help
help for upload-certs
--skip-certificate-key-print
Don't print the key used to encrypt the control-plane certificates.
--upload-certs
Upload control-plane certificates to the kubeadm-certs Secret.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

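An illustrative pairing of the two main flags; the certificate key below is a made-up placeholder, not a real secret:

```bash
# Upload the control-plane certificates to the kubeadm-certs Secret,
# encrypted with a key supplied by the operator.
kubeadm init phase upload-certs --upload-certs \
  --certificate-key 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```
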
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config.md
index 81c8781c67599..c1b5c960921ae 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config.md
@@ -10,42 +10,42 @@ kubeadm init phase upload-config [flags]

### Options

-h, --help
help for upload-config

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md
index e83c6bd5c6610..6370094df9b04 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_all.md
@@ -10,56 +10,56 @@ kubeadm init phase upload-config all [flags]

### Options

--config string
Path to a kubeadm configuration file.
-h, --help
help for all
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

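A straightforward example using the documented default kubeconfig path:

```bash
# Upload both the kubeadm and kubelet configuration to the cluster.
kubeadm init phase upload-config all --kubeconfig /etc/kubernetes/admin.conf
```
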
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md
index db9fba1a8d36d..030595466be3e 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubeadm.md
@@ -19,56 +19,56 @@ kubeadm init phase upload-config kubeadm [flags]

### Options

--config string
Path to a kubeadm configuration file.
-h, --help
help for kubeadm
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md
index fc56bce0aefee..bd334e091c35c 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_upload-config_kubelet.md
@@ -17,56 +17,56 @@ kubeadm init phase upload-config kubelet [flags]

### Options

--config string
Path to a kubeadm configuration file.
-h, --help
help for kubelet
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md
index 38e88f9514792..252bdfe024644 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join.md
@@ -67,154 +67,154 @@ kubeadm join [api-server-endpoint] [flags]

### Options

--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
If the node should host a new control plane instance, the port for the API Server to bind to.
--certificate-key string
Use this key to decrypt the certificate secrets uploaded by init.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
--cri-socket string
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
--discovery-file string
For file-based discovery, a file or URL from which to load cluster information.
--discovery-token string
For token-based discovery, the token used to validate cluster information fetched from the API server.
--discovery-token-ca-cert-hash stringSlice
For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
--discovery-token-unsafe-skip-ca-verification
For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
-h, --help
help for join
--ignore-preflight-errors stringSlice
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--node-name string
Specify the node name.
--skip-phases stringSlice
List of phases to be skipped
--tls-bootstrap-token string
Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node.
--token string
Use this token for both discovery-token and tls-bootstrap-token when those values are not provided.

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

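To make the flag groups above concrete, here is a hedged sketch of the two common join modes; the endpoint, token, hash, and certificate key are all placeholders:

```bash
# Join a worker node using token-based discovery.
kubeadm join cluster-endpoint.example.com:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

# Join an additional control-plane node, reusing certificates uploaded during init.
kubeadm join cluster-endpoint.example.com:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef \
  --control-plane \
  --certificate-key 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```
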
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase.md
index b05d7219a54db..873f64aa163f6 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase.md
@@ -6,42 +6,42 @@ Use this command to invoke single phase of the join workflow

### Options

-h, --help
help for phase

### Options inherited from parent commands

--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md
index 56fb6f241d67b..20170c783c7ae 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join.md
@@ -17,42 +17,42 @@
kubeadm join phase control-plane-join [flags]
### Options
-h, --help
help for control-plane-join
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md
index 280f18137284e..9515d0dfe7bbc 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_all.md
@@ -10,70 +10,70 @@
kubeadm join phase control-plane-join all [flags]
### Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
-h, --help
help for all
--node-name string
Specify the node name.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
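For illustration, a sketch of running this phase on its own against a kubeadm configuration file; the config path is a placeholder.

```bash
# Run all control-plane-join steps for this node (placeholder config path).
kubeadm join phase control-plane-join all --control-plane --config /tmp/kubeadm-join.yaml
```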
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md
index d00bc341a1fd3..0e23176f6ef36 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_etcd.md
@@ -10,77 +10,77 @@
kubeadm join phase control-plane-join etcd [flags]
### Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
-h, --help
help for etcd
--node-name string
Specify the node name.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
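A hedged example of invoking only this etcd step; the advertise address is a placeholder.

```bash
# Run just the etcd step of control-plane-join (placeholder address).
kubeadm join phase control-plane-join etcd --control-plane --apiserver-advertise-address 192.168.0.11
```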
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md
index cc78c23081ddb..37bd9675b8802 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_mark-control-plane.md
@@ -10,63 +10,63 @@
kubeadm join phase control-plane-join mark-control-plane [flags]
### Options
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
-h, --help
help for mark-control-plane
--node-name string
Specify the node name.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md
index e5a19fd838139..258210f3032a7 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-join_update-status.md
@@ -10,70 +10,70 @@
kubeadm join phase control-plane-join update-status [flags]
### Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
-h, --help
help for update-status
--node-name string
Specify the node name.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md
index d580e5fb98d72..81a88bdaa5a5a 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare.md
@@ -17,42 +17,42 @@
kubeadm join phase control-plane-prepare [flags]
### Options
-h, --help
help for control-plane-prepare
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md
index 6f6eda68b9117..7270024e49586 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_all.md
@@ -10,133 +10,133 @@
kubeadm join phase control-plane-prepare all [api-server-endpoint] [flags]
### Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
If the node should host a new control plane instance, the port for the API Server to bind to.
--certificate-key string
Use this key to decrypt the certificate secrets uploaded by init.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
--discovery-file string
For file-based discovery, a file or URL from which to load cluster information.
--discovery-token string
For token-based discovery, the token used to validate cluster information fetched from the API server.
--discovery-token-ca-cert-hash stringSlice
For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
--discovery-token-unsafe-skip-ca-verification
For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
-h, --help
help for all
--node-name string
Specify the node name.
--tls-bootstrap-token string
Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node.
--token string
Use this token for both discovery-token and tls-bootstrap-token when those values are not provided.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
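A sketch of running the full control-plane-prepare phase with token-based discovery; the endpoint, certificate key, token and hash are all placeholders.

```bash
# Prepare this node for a control-plane role (all values are placeholders).
kubeadm join phase control-plane-prepare all 192.168.0.10:6443 \
  --control-plane \
  --certificate-key <key-from-kubeadm-init-upload-certs> \
  --discovery-token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hex-encoded-ca-hash>
```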
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md
index cb717779731fc..c8d59d58eb032 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_certs.md
@@ -10,112 +10,112 @@
kubeadm join phase control-plane-prepare certs [api-server-endpoint] [flags]
### Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
--discovery-file string
For file-based discovery, a file or URL from which to load cluster information.
--discovery-token string
For token-based discovery, the token used to validate cluster information fetched from the API server.
--discovery-token-ca-cert-hash stringSlice
For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
--discovery-token-unsafe-skip-ca-verification
For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
-h, --help
help for certs
--node-name string
Specify the node name.
--tls-bootstrap-token string
Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node.
--token string
Use this token for both discovery-token and tls-bootstrap-token when those values are not provided.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
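For example, this step can be driven by file-based discovery instead of a token; the cluster-info file path below is a placeholder.

```bash
# Run the certs step using file-based discovery (placeholder discovery file).
kubeadm join phase control-plane-prepare certs --control-plane --discovery-file /tmp/cluster-info.yaml
```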
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md
index 35883924c326a..7e0354cfc786c 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_control-plane.md
@@ -10,77 +10,77 @@
kubeadm join phase control-plane-prepare control-plane [flags]
### Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
If the node should host a new control plane instance, the port for the API Server to bind to.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
-h, --help
help for control-plane
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md
index 8413cf27a5f0b..26e65cce87db4 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_download-certs.md
@@ -10,105 +10,105 @@
kubeadm join phase control-plane-prepare download-certs [api-server-endpoint] [flags]
### Options
--certificate-key string
Use this key to decrypt the certificate secrets uploaded by init.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
--discovery-file string
For file-based discovery, a file or URL from which to load cluster information.
--discovery-token string
For token-based discovery, the token used to validate cluster information fetched from the API server.
--discovery-token-ca-cert-hash stringSlice
For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
--discovery-token-unsafe-skip-ca-verification
For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
-h, --help
help for download-certs
--tls-bootstrap-token string
Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node.
--token string
Use this token for both discovery-token and tls-bootstrap-token when those values are not provided.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
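A hedged sketch of fetching the certificates shared by 'kubeadm init phase upload-certs', driven by a kubeadm config file; both the config path and the key are placeholders.

```bash
# Fetch the shared control-plane certificates (placeholder config path and key).
kubeadm join phase control-plane-prepare download-certs \
  --config /tmp/kubeadm-join.yaml \
  --certificate-key <key-from-kubeadm-init-upload-certs>
```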
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md
index bcb12ff455f13..722ec2263d9e5 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_control-plane-prepare_kubeconfig.md
@@ -10,105 +10,105 @@
kubeadm join phase control-plane-prepare kubeconfig [api-server-endpoint] [flags]
### Options
--certificate-key string
Use this key to decrypt the certificate secrets uploaded by init.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
--discovery-file string
For file-based discovery, a file or URL from which to load cluster information.
--discovery-token string
For token-based discovery, the token used to validate cluster information fetched from the API server.
--discovery-token-ca-cert-hash stringSlice
For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
--discovery-token-unsafe-skip-ca-verification
For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
-h, --help
help for kubeconfig
--tls-bootstrap-token string
Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node.
--token string
Use this token for both discovery-token and tls-bootstrap-token when those values are not provided.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md
index 98ec5be62aecd..719700b9a04ba 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_kubelet-start.md
@@ -10,105 +10,105 @@
kubeadm join phase kubelet-start [api-server-endpoint] [flags]
### Options
--config string
Path to kubeadm config file.
--cri-socket string
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
--discovery-file string
For file-based discovery, a file or URL from which to load cluster information.
--discovery-token string
For token-based discovery, the token used to validate cluster information fetched from the API server.
--discovery-token-ca-cert-hash stringSlice
For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
--discovery-token-unsafe-skip-ca-verification
For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
-h, --help
help for kubelet-start
--node-name string
Specify the node name.
--tls-bootstrap-token string
Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node.
--token string
Use this token for both discovery-token and tls-bootstrap-token when those values are not provided.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
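A minimal sketch of running only the kubelet bootstrap step; the endpoint, token, hash and node name are placeholders.

```bash
# Bootstrap the kubelet against the cluster (placeholder values).
kubeadm join phase kubelet-start 192.168.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hex-encoded-ca-hash> \
  --node-name worker-1
```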
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md
index 0174677e11f95..ca975f9d9204b 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_join_phase_preflight.md
@@ -17,140 +17,140 @@
kubeadm join phase preflight [api-server-endpoint] [flags]
### Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
--apiserver-bind-port int32     Default: 6443
If the node should host a new control plane instance, the port for the API Server to bind to.
--certificate-key string
Use this key to decrypt the certificate secrets uploaded by init.
--config string
Path to kubeadm config file.
--control-plane
Create a new control plane instance on this node
--cri-socket string
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
--discovery-file string
For file-based discovery, a file or URL from which to load cluster information.
--discovery-token string
For token-based discovery, the token used to validate cluster information fetched from the API server.
--discovery-token-ca-cert-hash stringSlice
For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
--discovery-token-unsafe-skip-ca-verification
For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
-h, --help
help for preflight
--ignore-preflight-errors stringSlice
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--node-name string
Specify the node name.
--tls-bootstrap-token string
Specify the token used to temporarily authenticate with the Kubernetes Control Plane while joining the node.
--token string
Use this token for both discovery-token and tls-bootstrap-token when those values are not provided.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
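For instance, the preflight checks can be run in isolation, downgrading selected checks to warnings as described above; the endpoint and credentials are placeholders.

```bash
# Run only the join preflight checks (placeholder endpoint and credentials).
kubeadm join phase preflight 192.168.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hex-encoded-ca-hash> \
  --ignore-preflight-errors=IsPrivilegedUser,Swap
```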
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md
index f143ad7f57cd7..4cfa48be372d6 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset.md
@@ -19,84 +19,84 @@
kubeadm reset [flags]
### Options
--cert-dir string     Default: "/etc/kubernetes/pki"
The path to the directory where the certificates are stored. If specified, clean this directory.
--cri-socket string
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
-f, --force
Reset the node without prompting for confirmation.
-h, --help
help for reset
--ignore-preflight-errors stringSlice
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--skip-phases stringSlice
List of phases to be skipped
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
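A short example of a non-interactive reset, using the documented 'all' value to ignore every preflight error.

```bash
# Revert kubeadm changes on this node without a confirmation prompt.
kubeadm reset --force --ignore-preflight-errors=all
```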
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase.md
index 632b7b64c1c0c..498621b95d8f6 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase.md
@@ -6,42 +6,42 @@
Use this command to invoke single phase of the reset workflow
### Options
-h, --help
help for phase
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md
index cf26d5d010a19..84376e67b2d1c 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_cleanup-node.md
@@ -10,56 +10,56 @@
kubeadm reset phase cleanup-node [flags]
### Options
--cert-dir string     Default: "/etc/kubernetes/pki"
The path to the directory where the certificates are stored. If specified, clean this directory.
--cri-socket string
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
-h, --help
help for cleanup-node
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
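For illustration, the cleanup-node step with an explicit CRI socket; the socket path is a placeholder for a non-standard setup.

```bash
# Clean up this node, pointing kubeadm at a specific CRI socket (placeholder path).
kubeadm reset phase cleanup-node --cri-socket /var/run/crio/crio.sock
```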
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md
index 6048fc49b69e9..8f3537bc7c347 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_preflight.md
@@ -10,56 +10,56 @@
kubeadm reset phase preflight [flags]
### Options
-f, --force
Reset the node without prompting for confirmation.
-h, --help
help for preflight
--ignore-preflight-errors stringSlice
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md
index faf689ee8ae52..c7350d27ca463 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_remove-etcd-member.md
@@ -10,49 +10,49 @@
kubeadm reset phase remove-etcd-member [flags]
### Options
-h, --help
help for remove-etcd-member
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
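A sketch of removing this node's etcd member, relying on the default admin kubeconfig path documented above.

```bash
# Remove this control-plane node's etcd member from the etcd cluster.
kubeadm reset phase remove-etcd-member --kubeconfig /etc/kubernetes/admin.conf
```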
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_update-cluster-status.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_update-cluster-status.md
index b82b5f19cae29..de4700032bf84 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_update-cluster-status.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_reset_phase_update-cluster-status.md
@@ -10,42 +10,42 @@
kubeadm reset phase update-cluster-status [flags]
### Options
-h, --help
help for update-cluster-status
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md
index 790174fef8235..2662497699d8c 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token.md
@@ -27,56 +27,56 @@
kubeadm token [flags]
### Options
--dry-run
Whether to enable dry-run mode or not
-h, --help
help for token
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md
index d426e93ce0586..b2212bba44dc5 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_create.md
@@ -17,105 +17,105 @@
kubeadm token create [token]
### Options
--certificate-key string
When used together with '--print-join-command', print the full 'kubeadm join' flag needed to join the cluster as a control-plane. To create a new certificate key you must use 'kubeadm init phase upload-certs --upload-certs'.
--config string
Path to a kubeadm configuration file.
--description string
A human friendly description of how this token is used.
--groups stringSlice     Default: [system:bootstrappers:kubeadm:default-node-token]
Extra groups that this token will authenticate as when used for authentication. Must match "\\Asystem:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]\\z"
-h, --help
help for create
--print-join-command
Instead of printing only the token, print the full 'kubeadm join' flag needed to join the cluster using the token.
--ttl duration     Default: 24h0m0s
The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire
--usages stringSlice     Default: [signing,authentication]
Describes the ways in which this token can be used. You can pass --usages multiple times or provide a comma separated list of options. Valid options: [signing,authentication]
### Options inherited from parent commands
--dry-run
Whether to enable dry-run mode or not
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
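As an example of combining the flags above, one might create a short-lived token and print the matching join command; the description text is arbitrary.

```bash
# Create a 2-hour token and print the full 'kubeadm join' command for it.
kubeadm token create --ttl 2h --description "short-lived token for adding worker nodes" --print-join-command
```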
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md
index 4806962079e8e..d1ddd8bd2c542 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_delete.md
@@ -15,56 +15,56 @@
kubeadm token delete [token-value] ...
### Options
-h, --help
help for delete
### Options inherited from parent commands
--dry-run
Whether to enable dry-run mode or not
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md
index 9129adb7ef491..72ca0220ee46d 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_generate.md
@@ -20,56 +20,56 @@
kubeadm token generate [flags]
### Options
-h, --help
help for generate
### Options inherited from parent commands
--dry-run
Whether to enable dry-run mode or not
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md
index 66f8e33cf1204..00d987eb78815 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_token_list.md
@@ -12,70 +12,70 @@
kubeadm token list [flags]
### Options
--allow-missing-template-keys     Default: true
If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
-o, --experimental-output string     Default: "text"
Output format. One of: text|json|yaml|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
-h, --help
help for list
### Options inherited from parent commands
--dry-run
Whether to enable dry-run mode or not
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
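For example, the structured output formats documented above make the token list easy to script against.

```bash
# List bootstrap tokens as JSON instead of the default text output.
kubeadm token list -o json
```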
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade.md
index 84c1c327aa763..b3fe44532beba 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade.md
@@ -10,42 +10,42 @@
kubeadm upgrade [flags]
### Options
-h, --help
help for upgrade
### Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md
index b6b9f6d261895..45d7b7055b96c 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md
@@ -10,140 +10,140 @@
kubeadm upgrade apply [version]
### Options
--allow-experimental-upgrades
Show unstable versions of Kubernetes as an upgrade alternative and allow upgrading to an alpha/beta/release candidate versions of Kubernetes.
--allow-release-candidate-upgrades
Show release candidate versions of Kubernetes as an upgrade alternative and allow upgrading to a release candidate versions of Kubernetes.
--certificate-renewal     Default: true
Perform the renewal of certificates used by component changed during upgrades.
--config string
Path to a kubeadm configuration file.
--dry-run
Do not change any state, just output what actions would be performed.
--etcd-upgrade     Default: true
Perform the upgrade of etcd.
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
--feature-gates string
A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false)
-f, --force
Force upgrading although some requirements might not be met. This also implies non-interactive mode.
-h, --help
help for apply
--ignore-preflight-errors stringSlice
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--image-pull-timeout duration     Default: 15m0s
The maximum amount of time to wait for the control plane pods to be downloaded.
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--print-config
Specifies whether the configuration file that will be used in the upgrade should be printed or not.
-y, --yes
Perform the upgrade and do not prompt for confirmation (non-interactive mode).
++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
--allow-experimental-upgrades
Show unstable versions of Kubernetes as an upgrade alternative and allow upgrading to an alpha/beta/release candidate versions of Kubernetes.
--allow-release-candidate-upgrades
Show release candidate versions of Kubernetes as an upgrade alternative and allow upgrading to a release candidate versions of Kubernetes.
--certificate-renewal     Default: true
Perform the renewal of certificates used by component changed during upgrades.
--config string
Path to a kubeadm configuration file.
--dry-run
Do not change any state, just output what actions would be performed.
--etcd-upgrade     Default: true
Perform the upgrade of etcd.
-k, --experimental-kustomize string
The path where kustomize patches for static pod manifests are stored.
--feature-gates string
A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false)
-f, --force
Force upgrading although some requirements might not be met. This also implies non-interactive mode.
-h, --help
help for apply
--ignore-preflight-errors stringSlice
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--image-pull-timeout duration     Default: 15m0s
The maximum amount of time to wait for the control plane pods to be downloaded.
--kubeconfig string     Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
--print-config
Specifies whether the configuration file that will be used in the upgrade should be printed or not.
-y, --yes
Perform the upgrade and do not prompt for confirmation (non-interactive mode).
### Options inherited from parent commands - - - - - - - - - - - - - - - +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
++++ + + + + + + + + + +
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
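A minimal usage sketch for the options above, run on the first control-plane node; the target version `v1.18.0` is only an illustrative placeholder:

```bash
# Preview the upgrade without changing any state
kubeadm upgrade apply v1.18.0 --dry-run

# Perform the upgrade non-interactively, keeping the default
# certificate renewal and etcd upgrade behaviour
kubeadm upgrade apply v1.18.0 --yes
```
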
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_diff.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_diff.md
index 3f5d5f443f82d..c15b1180752d5 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_diff.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_diff.md
@@ -10,84 +10,84 @@ kubeadm upgrade diff [version] [flags]

### Options

| Option | Description |
|--------|-------------|
| `--api-server-manifest string` (Default: `"/etc/kubernetes/manifests/kube-apiserver.yaml"`) | path to API server manifest |
| `--config string` | Path to a kubeadm configuration file. |
| `-c, --context-lines int` (Default: `3`) | How many lines of context in the diff |
| `--controller-manager-manifest string` (Default: `"/etc/kubernetes/manifests/kube-controller-manager.yaml"`) | path to controller manifest |
| `-h, --help` | help for diff |
| `--kubeconfig string` (Default: `"/etc/kubernetes/admin.conf"`) | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--scheduler-manifest string` (Default: `"/etc/kubernetes/manifests/kube-scheduler.yaml"`) | path to scheduler manifest |

### Options inherited from parent commands

| Option | Description |
|--------|-------------|
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

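A short sketch of how this subcommand is typically used to preview manifest changes before an upgrade; the version is illustrative:

```bash
# Show what the static Pod manifests would look like after upgrading,
# with more surrounding context than the default of 3 lines
kubeadm upgrade diff v1.18.0 --context-lines 5
```
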
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md
index 7ec3ff6bbc719..4ae784ff12ae1 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md
@@ -17,84 +17,84 @@ kubeadm upgrade node [flags]

### Options

| Option | Description |
|--------|-------------|
| `--certificate-renewal` (Default: `true`) | Perform the renewal of certificates used by components changed during upgrades. |
| `--dry-run` | Do not change any state, just output the actions that would be performed. |
| `--etcd-upgrade` (Default: `true`) | Perform the upgrade of etcd. |
| `-k, --experimental-kustomize string` | The path where kustomize patches for static pod manifests are stored. |
| `-h, --help` | help for node |
| `--kubeconfig string` (Default: `"/etc/kubernetes/admin.conf"`) | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--skip-phases stringSlice` | List of phases to be skipped |

### Options inherited from parent commands

| Option | Description |
|--------|-------------|
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

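A minimal sketch, assuming the first control-plane node has already been upgraded with `kubeadm upgrade apply`:

```bash
# On each remaining node, preview the changes and then run the upgrade
kubeadm upgrade node --dry-run
kubeadm upgrade node
```
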
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase.md
index 79f1c7c8c271d..39a2e05ab0aef 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase.md
@@ -6,42 +6,42 @@ Use this command to invoke single phase of the node workflow

### Options

| Option | Description |
|--------|-------------|
| `-h, --help` | help for phase |

### Options inherited from parent commands

| Option | Description |
|--------|-------------|
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_control-plane.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_control-plane.md
index 434e45c7e3d6c..133409ff515c3 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_control-plane.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_control-plane.md
@@ -10,77 +10,77 @@ kubeadm upgrade node phase control-plane [flags]

### Options

| Option | Description |
|--------|-------------|
| `--certificate-renewal` (Default: `true`) | Perform the renewal of certificates used by components changed during upgrades. |
| `--dry-run` | Do not change any state, just output the actions that would be performed. |
| `--etcd-upgrade` (Default: `true`) | Perform the upgrade of etcd. |
| `-k, --experimental-kustomize string` | The path where kustomize patches for static pod manifests are stored. |
| `-h, --help` | help for control-plane |
| `--kubeconfig string` (Default: `"/etc/kubernetes/admin.conf"`) | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |

### Options inherited from parent commands

| Option | Description |
|--------|-------------|
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

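For illustration only, this phase can be invoked on its own, for example to upgrade the local control plane manifests while leaving etcd untouched:

```bash
# Preview, then upgrade the control plane static Pods without touching etcd
kubeadm upgrade node phase control-plane --etcd-upgrade=false --dry-run
kubeadm upgrade node phase control-plane --etcd-upgrade=false
```
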
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_kubelet-config.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_kubelet-config.md
index 4b90ef8f344da..a4f5ceeafb7ca 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_kubelet-config.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_kubelet-config.md
@@ -10,56 +10,56 @@ kubeadm upgrade node phase kubelet-config [flags]

### Options

| Option | Description |
|--------|-------------|
| `--dry-run` | Do not change any state, just output the actions that would be performed. |
| `-h, --help` | help for kubelet-config |
| `--kubeconfig string` (Default: `"/etc/kubernetes/admin.conf"`) | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |

### Options inherited from parent commands

| Option | Description |
|--------|-------------|
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

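A brief sketch of invoking only this phase; the kubeconfig path shown is simply the documented default:

```bash
# Preview, then apply the new kubelet configuration on this node
kubeadm upgrade node phase kubelet-config --dry-run
kubeadm upgrade node phase kubelet-config --kubeconfig /etc/kubernetes/admin.conf
```
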
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
index 569e2bf8ae25d..eaa58b588f9b8 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
@@ -10,91 +10,91 @@ kubeadm upgrade plan [version] [flags]

### Options

| Option | Description |
|--------|-------------|
| `--allow-experimental-upgrades` | Show unstable versions of Kubernetes as an upgrade alternative and allow upgrading to an alpha/beta/release candidate version of Kubernetes. |
| `--allow-release-candidate-upgrades` | Show release candidate versions of Kubernetes as an upgrade alternative and allow upgrading to a release candidate version of Kubernetes. |
| `--config string` | Path to a kubeadm configuration file. |
| `--feature-gates string` | A set of key=value pairs that describe feature gates for various features. Options are: `IPv6DualStack=true\|false` (ALPHA - default=false), `PublicKeysECDSA=true\|false` (ALPHA - default=false). |
| `-h, --help` | help for plan |
| `--ignore-preflight-errors stringSlice` | A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks. |
| `--kubeconfig string` (Default: `"/etc/kubernetes/admin.conf"`) | The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. |
| `--print-config` | Specifies whether the configuration file that will be used in the upgrade should be printed or not. |

### Options inherited from parent commands

| Option | Description |
|--------|-------------|
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

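As an illustrative example, `plan` is usually run before `apply` to see which versions the cluster can be upgraded to:

```bash
# Check which versions this control plane can be upgraded to
kubeadm upgrade plan

# Also consider release candidate versions
kubeadm upgrade plan --allow-release-candidate-upgrades
```
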
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md
index 5f94025a9ed56..658075c4eacbe 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md
@@ -10,49 +10,49 @@ kubeadm version [flags]

### Options

| Option | Description |
|--------|-------------|
| `-h, --help` | help for version |
| `-o, --output string` | Output format; available options are 'yaml', 'json' and 'short' |

### Options inherited from parent commands

| Option | Description |
|--------|-------------|
| `--rootfs string` | [EXPERIMENTAL] The path to the 'real' host root filesystem. |

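For example, using the output formats listed above:

```bash
# Print only the version string
kubeadm version -o short

# Print the full version information as YAML
kubeadm version -o yaml
```
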
diff --git a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md index 7186f28071179..e368a56847434 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md +++ b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md @@ -29,20 +29,19 @@ The cluster that `kubeadm init` and `kubeadm join` set up should be: - lock-down the kubelet API - locking down access to the API for system components like the kube-proxy and CoreDNS - locking down what a Bootstrap Token can access - - etc. - **Easy to use**: The user should not have to run anything more than a couple of commands: - `kubeadm init` - `export KUBECONFIG=/etc/kubernetes/admin.conf` - `kubectl apply -f ` - `kubeadm join --token :` - **Extendable**: - - It should for example _not_ favor any network provider, instead configuring a network is out-of-scope - - Should provide the possibility to use a config file for customizing various parameters + - It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope + - It should provide the possibility to use a config file for customizing various parameters ## Constants and well-known values and paths -In order to reduce complexity and to simplify development of an on-top-of-kubeadm-implemented deployment solution, kubeadm uses a -limited set of constants values for well know-known paths and file names. +In order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a +limited set of constant values for well-known paths and file names. The Kubernetes directory `/etc/kubernetes` is a constant in the application, since it is clearly the given path in a majority of cases, and the most intuitive location; other constants paths and file names are: @@ -70,14 +69,14 @@ in a majority of cases, and the most intuitive location; other constants paths a The `kubeadm init` [internal workflow](/docs/reference/setup-tools/kubeadm/kubeadm-init/#init-workflow) consists of a sequence of atomic work tasks to perform, as described in `kubeadm init`. -The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke individually each task, and ultimately offers a reusable and composable -API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by advanced user +The [`kubeadm init phase`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) command allows users to invoke each task individually, and ultimately offers a reusable and composable +API/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by an advanced user for creating custom clusters. ### Preflight checks Kubeadm executes a set of preflight checks before starting the init, with the aim to verify preconditions and avoid common cluster startup problems. -In any case the user can skip specific preflight checks (or eventually all preflight checks) with the `--ignore-preflight-errors` option. +The user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option. - [warning] If the Kubernetes version to use (specified with the `--kubernetes-version` flag) is at least one minor version higher than the kubeadm CLI version. 
- Kubernetes system requirements: @@ -102,7 +101,7 @@ In any case the user can skip specific preflight checks (or eventually all prefl - [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1 - [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1. - [Error] if swap is on -- [Error] if `ip`, `iptables`, `mount`, `nsenter` commands are not present in the command path +- [Error] if `conntrack`, `ip`, `iptables`, `mount`, `nsenter` commands are not present in the command path - [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path - [warning] if extra arg flags for API server, controller manager, scheduler contains some invalid options - [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy @@ -161,9 +160,9 @@ Certificates are stored by default in `/etc/kubernetes/pki`, but this directory ### Generate kubeconfig files for control plane components -Kubeadm kubeconfig files with identities for control plane components: +Kubeadm generates kubeconfig files with identities for control plane components: -- A kubeconfig file for kubelet to use, `/etc/kubernetes/kubelet.conf`; inside this file is embedded a client certificate with kubelet identity. +- A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for authenticating this node with the cluster. This client cert should: - Be in the `system:nodes` organization, as required by the [Node Authorization](/docs/reference/access-authn-authz/node/) module - Have the Common Name (CN) `system:node:` @@ -173,11 +172,11 @@ by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/ - A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded a client certificate with scheduler identity. This client cert should have the CN `system:kube-scheduler`, as defined by default [RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles) -Additionally, a kubeconfig file for kubeadm to use itself and the admin is generated and save into the `/etc/kubernetes/admin.conf` file. -The "admin" here is defined the actual person(s) that is administering the cluster and want to have full control (**root**) over the cluster. -The embedded client certificate for admin should: -- Be in the `system:masters` organization, as defined by default [RBAC user facing role bindings](/docs/reference/access-authn-authz/rbac/#user-facing-roles) -- Include a CN, but that can be anything. Kubeadm uses the `kubernetes-admin` CN +Additionally, a kubeconfig file for kubeadm itself and the admin is generated and saved into the `/etc/kubernetes/admin.conf` file. +The "admin" here is defined as the actual person(s) that is administering the cluster and wants to have full control (**root**) over the cluster. +The embedded client certificate for admin should be in the `system:masters` organization, as defined by default +[RBAC user facing role bindings](/docs/reference/access-authn-authz/rbac/#user-facing-roles). It should also include a +CN. Kubeadm uses the `kubernetes-admin` CN. 
Please note that: @@ -189,28 +188,24 @@ Please note that: ### Generate static Pod manifests for control plane components -Kubeadm writes static Pod manifest files for control plane components to `/etc/kubernetes/manifests`; the kubelet watches this directory for Pods to create on startup. +Kubeadm writes static Pod manifest files for control plane components to `/etc/kubernetes/manifests`. The kubelet watches this directory for Pods to create on startup. Static Pod manifest share a set of common properties: - All static Pods are deployed on `kube-system` namespace -- All static Pods gets `tier:control-plane` and `component:{component-name}` labels -- All static Pods gets `scheduler.alpha.kubernetes.io/critical-pod` annotation (this will be moved over to the proper solution - of using Pod Priority and Preemption when ready) +- All static Pods get `tier:control-plane` and `component:{component-name}` labels +- All static Pods use the `system-node-critical` priority class - `hostNetwork: true` is set on all static Pods to allow control plane startup before a network is configured; as a consequence: * The `address` that the controller-manager and the scheduler use to refer the API server is `127.0.0.1` * If using a local etcd server, `etcd-servers` address will be set to `127.0.0.1:2379` - Leader election is enabled for both the controller-manager and the scheduler - Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities -- All static Pods gets any extra flags specified by the user as described in [passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/) -- All static Pods gets any extra Volumes specified by the user (Host path) +- All static Pods get any extra flags specified by the user as described in [passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/) +- All static Pods get any extra Volumes specified by the user (Host path) Please note that: -1. All the images, for the `--kubernetes-version`/current architecture, will be pulled from `k8s.gcr.io`; - In case an alternative image repository or CI image repository is specified this one will be used; In case a specific container image - should be used for all control plane components, this one will be used. see [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) - for more details +1. All images will be pulled from k8s.gcr.io by default. See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository 2. In case of kubeadm is executed in the `--dry-run` mode, static Pods files are written in a temporary folder 3. Static Pod manifest generation for master components can be invoked individually with the [`kubeadm init phase control-plane all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) command @@ -300,8 +295,7 @@ a local etcd instance running in a Pod with following attributes: Please note that: -1. The etcd image will be pulled from `k8s.gcr.io`. In case an alternative image repository is specified this one will be used; - In case an alternative image name is specified, this one will be used. see [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for more details +1. The etcd image will be pulled from `k8s.gcr.io` by default. 
See [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) for customizing the image repository 2. in case of kubeadm is executed in the `--dry-run` mode, the etcd static Pod manifest is written in a temporary folder 3. Static Pod manifest generation for local etcd can be invoked individually with the [`kubeadm init phase etcd local`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) command @@ -324,10 +318,9 @@ Please note that: ### Wait for the control plane to come up -This is a critical moment in time for kubeadm clusters. -kubeadm waits until `localhost:6443/healthz` returns `ok`, however in order to detect deadlock conditions, kubeadm fails fast -if `localhost:10255/healthz` (kubelet liveness) or `localhost:10255/healthz/syncloop` (kubelet readiness) don't return `ok`, -respectively after 40 and 60 second. +kubeadm waits (upto 4m0s) until `localhost:6443/healthz` (kube-apiserver liveness) returns `ok`. However in order to detect +deadlock conditions, kubeadm fails fast if `localhost:10255/healthz` (kubelet liveness) or +`localhost:10255/healthz/syncloop` (kubelet readiness) don't return `ok` within 40s and 60s respectively. kubeadm relies on the kubelet to pull the control plane images and run them properly as static Pods. After the control plane is up, kubeadm completes the tasks described in following paragraphs. @@ -343,26 +336,22 @@ If kubeadm is invoked with `--feature-gates=DynamicKubeletConfig`: ### Save the kubeadm ClusterConfiguration in a ConfigMap for later reference -kubeadm saves the configuration passed to `kubeadm init`, either via flags or the config file, in a ConfigMap -named `kubeadm-config` under `kube-system` namespace. +kubeadm saves the configuration passed to `kubeadm init` in a ConfigMap named `kubeadm-config` under `kube-system` namespace. This will ensure that kubeadm actions executed in future (e.g `kubeadm upgrade`) will be able to determine the actual/current cluster state and make new decisions based on that data. Please note that: -1. Before uploading, sensitive information like e.g. the token is stripped from the configuration +1. Before saving the ClusterConfiguration, sensitive information like the token is stripped from the configuration 2. Upload of master configuration can be invoked individually with the [`kubeadm init phase upload-config`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config) command -3. If you initialized your cluster using kubeadm v1.7.x or lower, you must create manually the master configuration ConfigMap - before `kubeadm upgrade` to v1.8 . In order to facilitate this task, the [`kubeadm config upload (from-flags|from-file)`](/docs/reference/setup-tools/kubeadm/kubeadm-config/) - was implemented -### Mark master +### Mark the node as control-plane As soon as the control plane is available, kubeadm executes following actions: -- Label the master with `node-role.kubernetes.io/master=""` -- Taints the master with `node-role.kubernetes.io/master:NoSchedule` +- Labels the node as control-plane with `node-role.kubernetes.io/master=""` +- Taints the node with `node-role.kubernetes.io/master:NoSchedule` Please note that: @@ -421,8 +410,8 @@ and the default role `system:certificates.k8s.io:certificatesigningrequests:self This phase creates the `cluster-info` ConfigMap in the `kube-public` namespace. -Additionally it is created a role and a RoleBinding granting access to the ConfigMap for unauthenticated users -(i.e. 
users in RBAC group `system:unauthenticated`) +Additionally it creates a Role and a RoleBinding granting access to the ConfigMap for unauthenticated users +(i.e. users in RBAC group `system:unauthenticated`). Please note that: diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md index 91d6dc981cd8b..c2356ed96640b 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md @@ -60,13 +60,11 @@ The `user` subcommand can be used for the creation of kubeconfig files for addit ## kubeadm alpha kubelet config {#cmd-phase-kubelet} -Use the following commands to either download the kubelet configuration from the cluster or -to enable the DynamicKubeletConfiguration feature. +Use the following command to enable the DynamicKubeletConfiguration feature. {{< tabs name="tab-kubelet" >}} {{< tab name="kubelet" include="generated/kubeadm_alpha_kubelet.md" />}} -{{< tab name="download" include="generated/kubeadm_alpha_kubelet_config_download.md" />}} -{{< tab name="enable-dynamic" include="generated/kubeadm_alpha_kubelet_config_download.md" />}} +{{< tab name="enable-dynamic" include="generated/kubeadm_alpha_kubelet_config_enable-dynamic.md" />}} {{< /tabs >}} diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md index c6374e54e8689..e3fe8c543c771 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md @@ -139,15 +139,9 @@ To use kube-dns instead of CoreDNS you have to pass a configuration file: ```bash # for installing a DNS addon only kubeadm init phase addon coredns --config=someconfig.yaml -# for creating a complete control plane node -kubeadm init --config=someconfig.yaml -# for listing or pulling images -kubeadm config images list/pull --config=someconfig.yaml -# for upgrades -kubeadm upgrade apply --config=someconfig.yaml ``` -The file has to contain a [`DNS`](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2#DNS) field in[`ClusterConfiguration`](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2#ClusterConfiguration) +The file has to contain a [`dns`](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2#DNS) field in[`ClusterConfiguration`](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2#ClusterConfiguration) and also a type for the addon - `kube-dns` (default value is `CoreDNS`). ```yaml diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 9b006d15c012d..7103b39d42fce 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -23,8 +23,7 @@ following steps: considered errors and will exit kubeadm until the problem is corrected or the user specifies `--ignore-preflight-errors=`. -1. Generates a self-signed CA (or using an existing one if provided) to set up - identities for each component in the cluster. If the user has provided their +1. Generates a self-signed CA to set up identities for each component in the cluster. The user can provide their own CA cert and/or key by dropping it in the cert directory configured via `--cert-dir` (`/etc/kubernetes/pki` by default). 
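A hedged sketch of the certificate handling described above: an existing CA can be dropped into the default certificate directory before `kubeadm init` runs, and extra API server SANs can be requested on the command line. The `ca.crt`/`ca.key` file names reflect the usual kubeadm layout under `--cert-dir`, while the hostname and IP below are placeholders:

```bash
# Reuse an existing CA by placing it where kubeadm expects it,
# then request additional SANs for the API server certificate
mkdir -p /etc/kubernetes/pki
cp ca.crt ca.key /etc/kubernetes/pki/
kubeadm init --apiserver-cert-extra-sans api.example.com,10.0.0.10
```
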
The APIServer certs will have additional SAN entries for any `--apiserver-cert-extra-sans` arguments, lowercased if necessary. diff --git a/content/en/docs/reference/tools.md b/content/en/docs/reference/tools.md index 4264ae873f6ed..349ce58f2c372 100644 --- a/content/en/docs/reference/tools.md +++ b/content/en/docs/reference/tools.md @@ -18,11 +18,6 @@ Kubernetes contains several built-in tools to help you work with the Kubernetes [`kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) is the command line tool for easily provisioning a secure Kubernetes cluster on top of physical or cloud servers or virtual machines (currently in alpha). -## Kubefed - -[`kubefed`](/docs/tasks/federation/set-up-cluster-federation-kubefed/) is the command line tool -to help you administrate your federated clusters. - ## Minikube [`minikube`](/docs/tasks/tools/install-minikube/) is a tool that makes it diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index fbdbf14f878d5..4d06194a59ef6 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -89,7 +89,7 @@ A given Kubernetes server will only preserve a historical list of changes for a ### Watch bookmarks -To mitigate the impact of short history window, we introduced a concept of `bookmark` watch event. It is a special kind of event to pass an information that all changes up to a given `resourceVersion` client is requesting has already been sent. Object returned in that event is of the type requested by the request, but only `resourceVersion` field is set, e.g.: +To mitigate the impact of short history window, we introduced a concept of `bookmark` watch event. It is a special kind of event to mark that all changes up to a given `resourceVersion` the client is requesting have already been sent. Object returned in that event is of the type requested by the request, but only `resourceVersion` field is set, e.g.: GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true --- @@ -334,6 +334,17 @@ are not vulnerable to ordering changes in the list. Once the last finalizer is removed, the resource is actually removed from etcd. +## Single resource API + +API verbs GET, CREATE, UPDATE, PATCH, DELETE and PROXY support single resources only. +These verbs with single resource support have no support for submitting +multiple resources together in an ordered or unordered list or transaction. +Clients including kubectl will parse a list of resources and make +single-resource API requests. + +API verbs LIST and WATCH support getting multiple resources, and +DELETECOLLECTION supports deleting multiple resources. + ## Dry-run {{< feature-state for_k8s_version="v1.18" state="stable" >}} diff --git a/content/en/docs/reference/using-api/client-libraries.md b/content/en/docs/reference/using-api/client-libraries.md index 4f76e16352ecd..145a792b72f13 100644 --- a/content/en/docs/reference/using-api/client-libraries.md +++ b/content/en/docs/reference/using-api/client-libraries.md @@ -52,28 +52,28 @@ their authors, not the Kubernetes team. 
| Lisp | [github.com/brendandburns/cl-k8s](https://github.com/brendandburns/cl-k8s) | | Lisp | [github.com/xh4/cube](https://github.com/xh4/cube) | | Node.js (TypeScript) | [github.com/Goyoo/node-k8s-client](https://github.com/Goyoo/node-k8s-client) | -| Node.js | [github.com/tenxcloud/node-kubernetes-client](https://github.com/tenxcloud/node-kubernetes-client) | -| Node.js | [github.com/godaddy/kubernetes-client](https://github.com/godaddy/kubernetes-client) | | Node.js | [github.com/ajpauwels/easy-k8s](https://github.com/ajpauwels/easy-k8s) +| Node.js | [github.com/godaddy/kubernetes-client](https://github.com/godaddy/kubernetes-client) | +| Node.js | [github.com/tenxcloud/node-kubernetes-client](https://github.com/tenxcloud/node-kubernetes-client) | | Perl | [metacpan.org/pod/Net::Kubernetes](https://metacpan.org/pod/Net::Kubernetes) | -| PHP | [github.com/maclof/kubernetes-client](https://github.com/maclof/kubernetes-client) | | PHP | [github.com/allansun/kubernetes-php-client](https://github.com/allansun/kubernetes-php-client) | +| PHP | [github.com/maclof/kubernetes-client](https://github.com/maclof/kubernetes-client) | | PHP | [github.com/travisghansen/kubernetes-client-php](https://github.com/travisghansen/kubernetes-client-php) | | Python | [github.com/eldarion-gondor/pykube](https://github.com/eldarion-gondor/pykube) | | Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) | | Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) | | Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) | -| Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | | Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) | +| Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | | Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) | | Rust | [github.com/clux/kube-rs](https://github.com/clux/kube-rs) | | Rust | [github.com/ynqa/kubernetes-rust](https://github.com/ynqa/kubernetes-rust) | | Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) | -| dotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | +| Scala | [github.com/joan38/kubernetes-client](https://github.com/joan38/kubernetes-client) | +| DotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) | | DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) | | Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) | | Elixir | [github.com/coryodaniel/k8s](https://github.com/coryodaniel/k8s) | -| Haskell | [github.com/kubernetes-client/haskell](https://github.com/kubernetes-client/haskell) | {{% /capture %}} diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md index 880dd460241aa..16702b40f5ad9 100644 --- a/content/en/docs/setup/_index.md +++ b/content/en/docs/setup/_index.md @@ -40,19 +40,15 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b |Community |Ecosystem | | ------------ | -------- | -| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) | -| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| -| | [Minishift](https://docs.okd.io/latest/minishift/)| +| 
[Minikube](/docs/setup/learning-environment/minikube/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| +| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Minishift](https://docs.okd.io/latest/minishift/)| | | [MicroK8s](https://microk8s.io/)| -| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | -| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)| -| | [k3s](https://k3s.io)| ## Production environment When evaluating a solution for a production environment, consider which aspects of operating a Kubernetes cluster (or _abstractions_) you want to manage yourself or offload to a provider. -For a list of [Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) providers, see "[Partners](https://kubernetes.io/partners/#conformance)". +[Kubernetes Partners](https://kubernetes.io/partners/#conformance) includes a list of [Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) providers. {{% /capture %}} diff --git a/content/en/docs/setup/learning-environment/minikube.md b/content/en/docs/setup/learning-environment/minikube.md index 038a5cfa49b9a..ed9578452a7ab 100644 --- a/content/en/docs/setup/learning-environment/minikube.md +++ b/content/en/docs/setup/learning-environment/minikube.md @@ -200,7 +200,7 @@ plugins. * virtualbox * vmwarefusion -* docker (EXPERIMENTAL) +* docker ([driver installation](https://minikube.sigs.k8s.io/docs/drivers/docker/) * kvm2 ([driver installation](https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/)) * hyperkit ([driver installation](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/)) * hyperv ([driver installation](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/)) @@ -462,11 +462,11 @@ For more information about Minikube, see the [proposal](https://git.k8s.io/commu ## Additional Links -* **Goals and Non-Goals**: For the goals and non-goals of the Minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md). -* **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests. -* **Building Minikube**: For instructions on how to build/test Minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md). -* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube, see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md). -* **Adding a New Addon**: For instructions on how to add a new addon for Minikube, see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md). +* **Goals and Non-Goals**: For the goals and non-goals of the Minikube project, please see our [roadmap](https://minikube.sigs.k8s.io/docs/contrib/roadmap/). +* **Development Guide**: See [Contributing](https://minikube.sigs.k8s.io/docs/contrib/) for an overview of how to send pull requests. +* **Building Minikube**: For instructions on how to build/test Minikube from source, see the [build guide](https://minikube.sigs.k8s.io/docs/contrib/building/). +* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube, see the [adding dependencies guide](https://minikube.sigs.k8s.io/docs/contrib/drivers/). 
+* **Adding a New Addon**: For instructions on how to add a new addon for Minikube, see the [adding an addon guide](https://minikube.sigs.k8s.io/docs/contrib/addons/). * **MicroK8s**: Linux users wishing to avoid running a virtual machine may consider [MicroK8s](https://microk8s.io/) as an alternative. ## Community diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index 972bf1810bc7d..cccf422587b60 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -41,7 +41,7 @@ When systemd is chosen as the init system for a Linux distribution, the init pro and consumes a root control group (`cgroup`) and acts as a cgroup manager. Systemd has a tight integration with cgroups and will allocate cgroups per process. It's possible to configure your container runtime and the kubelet to use `cgroupfs`. Using `cgroupfs` alongside systemd means -that there will then be two different cgroup managers. +that there will be two different cgroup managers. Control groups are used to constrain resources that are allocated to processes. A single cgroup manager will simplify the view of what resources are being allocated diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md index af58f4f5a30fa..e2ae7267bc113 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md @@ -8,7 +8,7 @@ weight: 40 {{% capture overview %}} -{{< feature-state for_k8s_version="1.12" state="stable" >}} +{{< feature-state for_k8s_version="v1.12" state="stable" >}} The kubeadm `ClusterConfiguration` object exposes the field `extraArgs` that can override the default flags passed to control plane components such as the APIServer, ControllerManager and Scheduler. The components are defined using the following fields: diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 9d35fa2c5a0df..4a0d567f00a19 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -276,6 +276,12 @@ Cluster DNS (CoreDNS) will not start up before a network is installed.** {{< /caution >}} +{{< note >}} +Currently Calico is the only CNI plugin that the kubeadm project performs e2e tests against. +If you find an issue related to a CNI plugin you should log a ticket in its respective issue +tracker instead of the kubeadm or kubernetes issue trackers. +{{< /note >}} + Several external projects provide Kubernetes Pod networks using CNI, some of which also support [Network Policy](/docs/concepts/services-networking/networkpolicies/). @@ -340,23 +346,6 @@ It implements k8s services and network policies in the user space (on VPP). Please refer to this installation guide: [Contiv-VPP Manual Installation](https://github.com/contiv/vpp/blob/master/docs/setup/MANUAL_INSTALL.md) {{% /tab %}} -{{% tab name="Flannel" %}} - -For `flannel` to work correctly, you must pass `--pod-network-cidr=10.244.0.0/16` to `kubeadm init`. 
- -Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. The [Firewall](https://coreos.com/flannel/docs/latest/troubleshooting.html#firewalls) section of Flannel's troubleshooting guide explains about this in more detail. - -Flannel works on `amd64`, `arm`, `arm64`, `ppc64le` and `s390x` architectures under Linux. -Windows (`amd64`) is claimed as supported in v0.11.0 but the usage is undocumented. - -```shell -kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml -``` - -For more information about `flannel`, see [the CoreOS flannel repository on GitHub -](https://github.com/coreos/flannel). -{{% /tab %}} - {{% tab name="Kube-router" %}} Kube-router relies on kube-controller-manager to allocate Pod CIDR for the nodes. Therefore, use `kubeadm init` with the `--pod-network-cidr` flag. @@ -368,7 +357,7 @@ For information on using the `kubeadm` tool to set up a Kubernetes cluster with {{% tab name="Weave Net" %}} -For more information on setting up your Kubernetes cluster with Weave Net, please see [Integrating Kubernetes via the Addon]((https://www.weave.works/docs/net/latest/kube-addon/). +For more information on setting up your Kubernetes cluster with Weave Net, please see [Integrating Kubernetes via the Addon](https://www.weave.works/docs/net/latest/kube-addon/). Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` platforms without any extra action required. Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address @@ -467,7 +456,7 @@ The output is similar to: ``` {{< note >}} -To specify an IPv6 tuple for `:`, IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`. +To specify an IPv6 tuple for `:`, IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`. {{< /note >}} The output should look something like: @@ -609,13 +598,10 @@ of Pod network add-ons. ## Version skew policy {#version-skew-policy} -The `kubeadm` tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1). -`kubeadm` vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1). - -Due to that we can't see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters. +The `kubeadm` tool of version v{{< skew latestVersion >}} may deploy clusters with a control plane of version v{{< skew latestVersion >}} or v{{< skew prevMinorVersion >}}. +`kubeadm` v{{< skew latestVersion >}} can also upgrade an existing kubeadm-created cluster of version v{{< skew prevMinorVersion >}}. -Example: `kubeadm` v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to -v1.8. +Due to that we can't see into the future, kubeadm CLI v{{< skew latestVersion >}} may or may not be able to deploy v{{< skew nextMinorVersion >}} clusters. 
These resources provide more information on supported version skew between kubelets and the control plane, and other Kubernetes components: diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 6921ab35ee14b..b44275302aed3 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -56,18 +56,17 @@ route, we recommend you add IP route(s) so Kubernetes cluster addresses go via t As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g. ```bash -cat < /etc/sysctl.d/k8s.conf +cat < /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 +baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +exclude=kubelet kubeadm kubectl EOF # Set SELinux in permissive mode (effectively disabling it) @@ -215,7 +215,7 @@ systemctl enable --now kubelet - Setting SELinux in permissive mode by running `setenforce 0` and `sed ...` effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet. - + {{% /tab %}} {{% tab name="Container Linux" %}} Install CNI plugins (required for most pod network): @@ -229,7 +229,7 @@ curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_ Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)) ```bash -CRICTL_VERSION="v1.16.0" +CRICTL_VERSION="v1.17.0" mkdir -p /opt/bin curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz ``` @@ -244,9 +244,10 @@ cd /opt/bin curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl} chmod +x {kubeadm,kubelet,kubectl} -curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service +RELEASE_VERSION="v0.2.7" +curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service mkdir -p /etc/systemd/system/kubelet.service.d -curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ``` Enable and start `kubelet`: @@ -264,21 +265,25 @@ kubeadm to tell it what to do. 
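As a small sanity check (a sketch, not an official installation step), you can confirm that the kubelet service is enabled and, at this point, restarting in a crashloop while it waits for instructions from kubeadm:

```bash
# The kubelet should be enabled but restarting every few seconds
# until 'kubeadm init' or 'kubeadm join' gives it a configuration
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20
```
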
## Configure cgroup driver used by kubelet on control-plane node When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet -and set it in the `/var/lib/kubelet/kubeadm-flags.env` file during runtime. +and set it in the `/var/lib/kubelet/config.yaml` file during runtime. -If you are using a different CRI, you have to modify the file -`/etc/default/kubelet` (`/etc/sysconfig/kubelet` for CentOS, RHEL, Fedora) with your `cgroup-driver` value, like so: +If you are using a different CRI, you have to modify the file with your `cgroupDriver` value, like so: -```bash -KUBELET_EXTRA_ARGS=--cgroup-driver= +```yaml +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration +cgroupDriver: ``` -This file will be used by `kubeadm init` and `kubeadm join` to source extra -user defined arguments for the kubelet. - Please mind, that you **only** have to do that if the cgroup driver of your CRI is not `cgroupfs`, because that is the default value in the kubelet already. +{{< note >}} +Since `--cgroup-driver` flag has been deprecated by kubelet, if you have that in `/var/lib/kubelet/kubeadm-flags.env` +or `/etc/default/kubelet`(`/etc/sysconfig/kubelet` for RPMs), please remove it and use the KubeletConfiguration instead +(stored in `/var/lib/kubelet/config.yaml` by default). +{{< /note >}} + Restarting the kubelet is required: ```bash diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md index d6e421b2bae7c..641d34944027a 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md @@ -8,7 +8,7 @@ weight: 80 {{% capture overview %}} -{{< feature-state for_k8s_version="1.11" state="stable" >}} +{{< feature-state for_k8s_version="v1.11" state="stable" >}} The lifecycle of the kubeadm CLI tool is decoupled from the [kubelet](/docs/reference/command-line-tools-reference/kubelet), which is a daemon that runs diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index 8187d5eac46ac..996f588dc7b38 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -6,22 +6,22 @@ weight: 30 {{% capture overview %}} -This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-incubator/kubespray). +This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray). -Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides: +Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. 
Kubespray provides: * a highly available cluster * composable attributes * support for most popular Linux distributions * Container Linux by CoreOS - * Debian Jessie, Stretch, Wheezy + * Debian Buster, Jessie, Stretch, Wheezy * Ubuntu 16.04, 18.04 - * CentOS/RHEL 7 - * Fedora/CentOS Atomic - * openSUSE Leap 42.3/Tumbleweed + * CentOS/RHEL/Oracle Linux 7 + * Fedora 28 + * openSUSE Leap 15 * continuous integration tests -To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops). +To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops). {{% /capture %}} @@ -31,11 +31,11 @@ To choose a tool which best fits your use case, read [this comparison](https://g ### (1/5) Meet the underlay requirements -Provision servers with the following [requirements](https://github.com/kubernetes-incubator/kubespray#requirements): +Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements): -* **Ansible v2.5 (or newer) and python-netaddr is installed on the machine that will run Ansible commands** +* **Ansible v2.7.8 and python-netaddr is installed on the machine that will run Ansible commands** * **Jinja 2.9 (or newer) is required to run the Ansible Playbooks** -* The target servers must have **access to the Internet** in order to pull docker images +* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment)) * The target servers are configured to allow **IPv4 forwarding** * **Your ssh key must be copied** to all the servers part of your inventory * The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall @@ -44,12 +44,13 @@ Provision servers with the following [requirements](https://github.com/kubernete Kubespray provides the following utilities to help provision your environment: * [Terraform](https://www.terraform.io/) scripts for the following cloud providers: - * [AWS](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws) - * [OpenStack](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/openstack) + * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws) + * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack) + * [Packet](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/packet) ### (2/5) Compose an inventory file -After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)". +After you provision your servers, create an [inventory file for Ansible](http://docs.ansible.com/ansible/intro_inventory.html). You can do this manually or via a dynamic inventory script. 
For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)". ### (3/5) Plan your cluster deployment @@ -58,14 +59,14 @@ Kubespray provides the ability to customize many aspects of the deployment: * Choice deployment mode: kubeadm or non-kubeadm * CNI (networking) plugins * DNS configuration -* Choice of control plane: native/binary or containerized with docker or rkt +* Choice of control plane: native/binary or containerized * Component versions * Calico route reflectors * Component runtime options * {{< glossary_tooltip term_id="docker" >}} - * {{< glossary_tooltip term_id="rkt" >}} + * {{< glossary_tooltip term_id="containerd" >}} * {{< glossary_tooltip term_id="cri-o" >}} -* Certificate generation methods (**Vault being discontinued**) +* Certificate generation methods Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. @@ -73,18 +74,18 @@ Kubespray customizations can be made to a [variable file](http://docs.ansible.co Next, deploy your cluster: -Cluster deployment using [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment). +Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment). ```shell ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \ --private-key=~/.ssh/private_key ``` -Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md) for best results. +Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results. ### (5/5) Verify the deployment -Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators. +Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior of the rest of the workloads and serve as cluster health indicators. ## Cluster operations @@ -92,16 +93,16 @@ Kubespray provides additional playbooks to manage your cluster: _scale_ and _upg ### Scale your cluster -You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#adding-nodes)". -You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#remove-nodes)". +You can add worker nodes from your cluster by running the scale playbook. 
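As a sketch, assuming the new workers have already been added to your inventory file, running the scale playbook mirrors the `cluster.yml` invocation shown earlier; the flags here are the same example flags and will depend on your setup.

```shell
# Example only: run the scale playbook against the updated inventory.
ansible-playbook -i your/inventory/inventory.ini scale.yml -b -v \
  --private-key=~/.ssh/private_key
```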
For more information, see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)". +You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)". ### Upgrade your cluster -You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/upgrades.md)". +You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)". ## Cleanup -You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/reset.yml). +You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml). {{< caution >}} When running the reset playbook, be sure not to accidentally target your production cluster! @@ -109,13 +110,13 @@ When running the reset playbook, be sure not to accidentally target your product ## Feedback -* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) -* [GitHub Issues](https://github.com/kubernetes-incubator/kubespray/issues) +* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](http://slack.k8s.io/)) +* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues) {{% /capture %}} {{% capture whatsnext %}} -Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/roadmap.md). +Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md). {{% /capture %}} diff --git a/content/en/docs/setup/release/version-skew-policy.md b/content/en/docs/setup/release/version-skew-policy.md index eb015dfdf0521..dc411807c541d 100644 --- a/content/en/docs/setup/release/version-skew-policy.md +++ b/content/en/docs/setup/release/version-skew-policy.md @@ -24,7 +24,7 @@ Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](http://semver.org/) terminology. For more information, see [Kubernetes Release Versioning](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning). -The Kubernetes project maintains release branches for the most recent three minor releases. +The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility. Patch releases are cut from those branches at a regular cadence, or as needed. 
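When reasoning about skew in a running cluster, it helps to first confirm which versions each component actually reports, for example:

```shell
# Report the versions currently in use before comparing them against the skew policy.
kubectl version --short   # client (kubectl) and server (kube-apiserver) versions
kubectl get nodes         # the VERSION column shows the kubelet version on each node
```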
@@ -41,8 +41,8 @@ In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kub Example: -* newest `kube-apiserver` is at **1.13** -* other `kube-apiserver` instances are supported at **1.13** and **1.12** +* newest `kube-apiserver` is at **{{< skew latestVersion >}}** +* other `kube-apiserver` instances are supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** ### kubelet @@ -50,8 +50,8 @@ Example: Example: -* `kube-apiserver` is at **1.13** -* `kubelet` is supported at **1.13**, **1.12**, and **1.11** +* `kube-apiserver` is at **{{< skew latestVersion >}}** +* `kubelet` is supported at **{{< skew latestVersion >}}**, **{{< skew prevMinorVersion >}}**, and **{{< skew oldestMinorVersion >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the allowed `kubelet` versions. @@ -59,8 +59,8 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: -* `kube-apiserver` instances are at **1.13** and **1.12** -* `kubelet` is supported at **1.12**, and **1.11** (**1.13** is not supported because that would be newer than the `kube-apiserver` instance at version **1.12**) +* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** +* `kubelet` is supported at **{{< skew prevMinorVersion >}}**, and **{{< skew oldestMinorVersion >}}** (**{{< skew latestVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew prevMinorVersion >}}**) ### kube-controller-manager, kube-scheduler, and cloud-controller-manager @@ -68,8 +68,8 @@ Example: Example: -* `kube-apiserver` is at **1.13** -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **1.13** and **1.12** +* `kube-apiserver` is at **{{< skew latestVersion >}}** +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, and these components can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer), this narrows the allowed versions of these components. 
@@ -77,9 +77,9 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, and Example: -* `kube-apiserver` instances are at **1.13** and **1.12** +* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** * `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer that can route to any `kube-apiserver` instance -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **1.12** (**1.13** is not supported because that would be newer than the `kube-apiserver` instance at version **1.12**) +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew prevMinorVersion >}}** (**{{< skew latestVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew prevMinorVersion >}}**) ### kubectl @@ -87,8 +87,8 @@ Example: Example: -* `kube-apiserver` is at **1.13** -* `kubectl` is supported at **1.14**, **1.13**, and **1.12** +* `kube-apiserver` is at **{{< skew latestVersion >}}** +* `kubectl` is supported at **{{< skew nextMinorVersion >}}**, **{{< skew latestVersion >}}**, and **{{< skew prevMinorVersion >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the supported `kubectl` versions. @@ -96,27 +96,27 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: -* `kube-apiserver` instances are at **1.13** and **1.12** -* `kubectl` is supported at **1.13** and **1.12** (other versions would be more than one minor version skewed from one of the `kube-apiserver` components) +* `kube-apiserver` instances are at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** +* `kubectl` is supported at **{{< skew latestVersion >}}** and **{{< skew prevMinorVersion >}}** (other versions would be more than one minor version skewed from one of the `kube-apiserver` components) ## Supported component upgrade order The supported version skew between components has implications on the order in which components must be upgraded. -This section describes the order in which components must be upgraded to transition an existing cluster from version **1.n** to version **1.(n+1)**. +This section describes the order in which components must be upgraded to transition an existing cluster from version **{{< skew prevMinorVersion >}}** to version **{{< skew latestVersion >}}**. 
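For kubeadm-managed clusters, this ordering corresponds roughly to the sketch below. It is only an outline: follow the kubeadm upgrade documentation for the real procedure, and substitute an actual patch release for the placeholder version.

```shell
# Outline only, not a complete upgrade procedure.
sudo kubeadm upgrade plan                 # on the first control plane node
sudo kubeadm upgrade apply v1.18.x        # substitute the actual patch release
# kube-controller-manager and kube-scheduler are upgraded along with the control plane;
# afterwards, upgrade and restart the kubelet on each node, one node at a time.
sudo systemctl restart kubelet
```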
### kube-apiserver Pre-requisites: -* In a single-instance cluster, the existing `kube-apiserver` instance is **1.n** -* In an HA cluster, all `kube-apiserver` instances are at **1.n** or **1.(n+1)** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance) -* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **1.n** (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version) -* `kubelet` instances on all nodes are at version **1.n** or **1.(n-1)** (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version) +* In a single-instance cluster, the existing `kube-apiserver` instance is **{{< skew prevMinorVersion >}}** +* In an HA cluster, all `kube-apiserver` instances are at **{{< skew prevMinorVersion >}}** or **{{< skew latestVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance) +* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **{{< skew prevMinorVersion >}}** (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version) +* `kubelet` instances on all nodes are at version **{{< skew prevMinorVersion >}}** or **{{< skew oldestMinorVersion >}}** (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version) * Registered admission webhooks are able to handle the data the new `kube-apiserver` instance will send them: - * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **1.(n+1)** (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+) - * The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in **1.(n+1)** + * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **{{< skew latestVersion >}}** (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+) + * The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in **{{< skew latestVersion >}}** -Upgrade `kube-apiserver` to **1.(n+1)** +Upgrade `kube-apiserver` to **{{< skew latestVersion >}}** {{< note >}} Project policies for [API deprecation](/docs/reference/using-api/deprecation-policy/) and @@ -128,17 +128,17 @@ require `kube-apiserver` to not skip minor versions when upgrading, even in sing Pre-requisites: -* The `kube-apiserver` instances these components communicate with are at **1.(n+1)** (in HA clusters in which these control plane components can communicate with any `kube-apiserver` instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components) +* The `kube-apiserver` instances these components communicate with are at **{{< skew latestVersion >}}** (in HA 
clusters in which these control plane components can communicate with any `kube-apiserver` instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components) -Upgrade `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` to **1.(n+1)** +Upgrade `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` to **{{< skew latestVersion >}}** ### kubelet Pre-requisites: -* The `kube-apiserver` instances the `kubelet` communicates with are at **1.(n+1)** +* The `kube-apiserver` instances the `kubelet` communicates with are at **{{< skew latestVersion >}}** -Optionally upgrade `kubelet` instances to **1.(n+1)** (or they can be left at **1.n** or **1.(n-1)**) +Optionally upgrade `kubelet` instances to **{{< skew latestVersion >}}** (or they can be left at **{{< skew prevMinorVersion >}}** or **{{< skew oldestMinorVersion >}}**) {{< warning >}} Running a cluster with `kubelet` instances that are persistently two minor versions behind `kube-apiserver` is not recommended: diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index 67077aa33130c..4cccac0f58ffa 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -149,17 +149,17 @@ users: username: exp ``` -The `fake-ca-file`, `fake-cert-file` and `fake-key-file` above is the placeholders -for the real path of the certification files. You need change these to the real path -of certification files in your environment. +The `fake-ca-file`, `fake-cert-file` and `fake-key-file` above are the placeholders +for the pathnames of the certificate files. You need change these to the actual pathnames +of certificate files in your environment. -Some times you may want to use base64 encoded data here instead of the path of the -certification files, then you need add the suffix `-data` to the keys. For example, +Sometimes you may want to use Base64-encoded data embedded here instead of separate +certificate files; in that case you need add the suffix `-data` to the keys, for example, `certificate-authority-data`, `client-certificate-data`, `client-key-data`. Each context is a triple (cluster, user, namespace). For example, the -`dev-frontend` context says, Use the credentials of the `developer` -user to access the `frontend` namespace of the `development` cluster. +`dev-frontend` context says, "Use the credentials of the `developer` +user to access the `frontend` namespace of the `development` cluster". Set the current context: @@ -275,7 +275,7 @@ colon-delimited for Linux and Mac, and semicolon-delimited for Windows. If you h a `KUBECONFIG` environment variable, familiarize yourself with the configuration files in the list. -Temporarily append two paths to your `KUBECONFIG` environment variable. For example:
+Temporarily append two paths to your `KUBECONFIG` environment variable. For example: ### Linux ```shell @@ -359,11 +359,12 @@ kubectl config view ## Clean up Return your `KUBECONFIG` environment variable to its original value. For example:
-Linux: + +### Linux ```shell export KUBECONFIG=$KUBECONFIG_SAVED ``` -Windows PowerShell +### Windows PowerShell ```shell $Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED ``` diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md index 8ea504bd4aac7..c07155a25966a 100644 --- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md @@ -7,11 +7,7 @@ weight: 100 {{% capture overview %}} An [Ingress](/docs/concepts/services-networking/ingress/) is an API object that defines rules which allow external access -to services in a cluster. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers/) fulfills the rules set in the Ingress. - -{{< caution >}} -For the Ingress resource to work, the cluster **must** also have an Ingress controller running. -{{< /caution >}} +to services in a cluster. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers/) fulfills the rules set in the Ingress. This page shows you how to set up a simple Ingress which routes requests to Service web or web2 depending on the HTTP URI. @@ -70,7 +66,7 @@ This page shows you how to set up a simple Ingress which routes requests to Serv 1. Create a Deployment using the following command: ```shell - kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080 + kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 --port=8080 ``` Output: @@ -82,7 +78,7 @@ This page shows you how to set up a simple Ingress which routes requests to Serv 1. Expose the Deployment: ```shell - kubectl expose deployment web --target-port=8080 --type=NodePort + kubectl expose deployment web --type=NodePort --port=8080 ``` Output: diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md index ecda7709cfd09..b62438a8a149f 100644 --- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -30,7 +30,7 @@ Dashboard also provides information on the state of Kubernetes resources in your The Dashboard UI is not deployed by default. To deploy it, run the following command: ``` -kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml ``` ## Accessing the Dashboard UI diff --git a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md index 037187499a874..9a77378d5cf04 100644 --- a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md +++ b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md @@ -251,15 +251,14 @@ The name of an APIService object must be a valid #### Contacting the extension apiserver -Once the Kubernetes apiserver has determined a request should be sent to a extension apiserver, +Once the Kubernetes apiserver has determined a request should be sent to an extension apiserver, it needs to know how to contact it. -The `service` stanza is a reference to the service for a extension apiserver. +The `service` stanza is a reference to the service for an extension apiserver. The service namespace and name are required. 
The port is optional and defaults to 443. -The path is optional and defaults to "/". -Here is an example of an extension apiserver that is configured to be called on port "1234" -at the subpath "/my-path", and to verify the TLS connection against the ServerName +Here is an example of an extension apiserver that is configured to be called on port "1234", +and to verify the TLS connection against the ServerName `my-service-name.my-service-namespace.svc` using a custom CA bundle. ```yaml diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md index 184e870fc39c7..ec35dd88e8e41 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md @@ -275,7 +275,7 @@ the version. ## Webhook conversion -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} {{< note >}} Webhook conversion is available as beta since 1.15, and as alpha since Kubernetes 1.13. The diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md index dd96f2d6d6286..ddcf7d4875e0e 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md @@ -243,7 +243,7 @@ If you later recreate the same CustomResourceDefinition, it will start out empty ## Specifying a structural schema -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} CustomResources traditionally store arbitrary JSON (next to `apiVersion`, `kind` and `metadata`, which is validated by the API server implicitly). With [OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) a schema can be specified, which is validated during creation and updates, compare below for details and limits of such a schema. @@ -364,7 +364,7 @@ Structural schemas are a requirement for `apiextensions.k8s.io/v1`, and disables ### Pruning versus preserving unknown fields -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} CustomResourceDefinitions traditionally store any (possibly validated) JSON as is in etcd. This means that unspecified fields (if there is a [OpenAPI v3.0 validation schema](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) at all) are persisted. This is in contrast to native Kubernetes resources such as a pod where unknown fields are dropped before being persisted to etcd. We call this "pruning" of unknown fields. @@ -604,7 +604,7 @@ meaning all finalizers have been executed. 
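As an aside, the finalizer behaviour referred to above can be tried out with the CronTab example used on this page; the sketch below assumes that CustomResourceDefinition already exists, and the finalizer name is a placeholder.

```shell
# Hypothetical custom object carrying a finalizer; deletion stays pending until the
# finalizer is removed from metadata.finalizers by a controller (or manually).
cat <<'EOF' | kubectl apply -f -
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-crontab-with-finalizer
  finalizers:
  - stable.example.com/finalizer
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
EOF
```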
### Validation -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} Validation of custom objects is possible via [OpenAPI v3 schemas](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schemaObject) or [validatingadmissionwebhook](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook). In `apiextensions.k8s.io/v1` schemas are required, in `apiextensions.k8s.io/v1beta1` they are optional. @@ -781,7 +781,7 @@ crontab "my-new-cron-object" created ### Defaulting -{{< feature-state state="stable" for_kubernetes_version="1.17" >}} +{{< feature-state state="stable" for_k8s_version="v1.17" >}} {{< note >}} To use defaulting, your CustomResourceDefinition must use API version `apiextensions.k8s.io/v1`. @@ -866,7 +866,7 @@ Default values for `metadata` fields of `x-kubernetes-embedded-resources: true` ### Publish Validation Schema in OpenAPI v2 -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} {{< note >}} OpenAPI v2 Publishing is available as beta since 1.15, and as alpha since 1.14. The @@ -1051,7 +1051,7 @@ The column's `format` controls the style used when `kubectl` prints the value. ### Subresources -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} +{{< feature-state state="stable" for_k8s_version="v1.16" >}} Custom resources support `/status` and `/scale` subresources. diff --git a/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md b/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md index d67ec982a0c00..436584ad14ca0 100644 --- a/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md @@ -108,40 +108,63 @@ my-scheduler-lnf4s-4744f 1/1 Running 0 2m You should see a "Running" my-scheduler pod, in addition to the default kube-scheduler pod in this list. +### Enable leader election + To run multiple-scheduler with leader election enabled, you must do the following: First, update the following fields in your YAML file: * `--leader-elect=true` -* `--lock-object-namespace=lock-object-namespace` -* `--lock-object-name=lock-object-name` +* `--lock-object-namespace=` +* `--lock-object-name=` + +{{< note >}} +The control plane creates the lock objects for you, but the namespace must already exist. +You can use the `kube-system` namespace. +{{< /note >}} -If RBAC is enabled on your cluster, you must update the `system:kube-scheduler` cluster role. Add your scheduler name to the resourceNames of the rule applied for endpoints resources, as in the following example: +If RBAC is enabled on your cluster, you must update the `system:kube-scheduler` cluster role. 
Add your scheduler name to the resourceNames of the rule applied for `endpoints` and `leases` resources, as in the following example: ``` kubectl edit clusterrole system:kube-scheduler ``` ```yaml -- apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - annotations: - rbac.authorization.kubernetes.io/autoupdate: "true" - labels: - kubernetes.io/bootstrapping: rbac-defaults - name: system:kube-scheduler - rules: - - apiGroups: - - "" - resourceNames: - - kube-scheduler - - my-scheduler - resources: - - endpoints - verbs: - - delete - - get - - patch - - update +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + rbac.authorization.kubernetes.io/autoupdate: "true" + labels: + kubernetes.io/bootstrapping: rbac-defaults + name: system:kube-scheduler +rules: +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create +- apiGroups: + - coordination.k8s.io + resourceNames: + - kube-scheduler + - my-scheduler + resources: + - leases + verbs: + - get + - update +- apiGroups: + - "" + resourceNames: + - kube-scheduler + - my-scheduler + resources: + - endpoints + verbs: + - delete + - get + - patch + - update ``` ## Specify schedulers for pods diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md index 0fdbff57b8758..1b6a706934e3b 100644 --- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md @@ -70,7 +70,7 @@ pod/nginx-701339712-e0qfq 1/1 Running 0 35s You should be able to access the new `nginx` service from other Pods. To access the `nginx` Service from another Pod in the `default` namespace, start a busybox container: ```console -kubectl run --generator=run-pod/v1 busybox --rm -ti --image=busybox -- /bin/sh +kubectl run busybox --rm -ti --image=busybox -- /bin/sh ``` In your shell, run the following command: @@ -113,7 +113,7 @@ networkpolicy.networking.k8s.io/access-nginx created When you attempt to access the `nginx` Service from a Pod without the correct labels, the request times out: ```console -kubectl run --generator=run-pod/v1 busybox --rm -ti --image=busybox -- /bin/sh +kubectl run busybox --rm -ti --image=busybox -- /bin/sh ``` In your shell, run the command: @@ -132,7 +132,7 @@ wget: download timed out You can create a Pod with the correct labels to see that the request is allowed: ```console -kubectl run --generator=run-pod/v1 busybox --rm -ti --labels="access=true" --image=busybox -- /bin/sh +kubectl run busybox --rm -ti --labels="access=true" --image=busybox -- /bin/sh ``` In your shell, run the command: diff --git a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md index 0b79ef581c1d9..0e80a018c43dc 100644 --- a/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md +++ b/content/en/docs/tasks/administer-cluster/developing-cloud-controller-manager.md @@ -10,35 +10,35 @@ content_template: templates/concept {{% capture overview %}} {{< feature-state for_k8s_version="v1.11" state="beta" >}} -In upcoming releases, Cloud Controller Manager will -be the preferred way to integrate Kubernetes with any cloud. This will ensure cloud providers -can develop their features independently from the core Kubernetes release cycles. 
-{{< feature-state for_k8s_version="1.8" state="alpha" >}} +{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="The cloud-controller-manager is">}} -Before going into how to build your own cloud controller manager, some background on how it works under the hood is helpful. The cloud controller manager is code from `kube-controller-manager` utilizing Go interfaces to allow implementations from any cloud to be plugged in. Most of the scaffolding and generic controller implementations will be in core, but it will always exec out to the cloud interfaces it is provided, so long as the [cloud provider interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go#L42-L62) is satisfied. +{{% /capture %}} -To dive a little deeper into implementation details, all cloud controller managers will import packages from Kubernetes core, the only difference being each project will register their own cloud providers by calling [cloudprovider.RegisterCloudProvider](https://github.com/kubernetes/cloud-provider/blob/6371aabbd7a7726f4b358444cca40def793950c2/plugins.go#L55-L63) where a global variable of available cloud providers is updated. +{{% capture body %}} -{{% /capture %}} +## Background +Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the `cloud-controller-manager` binary allows cloud vendors to evolve independently from the core Kubernetes code. -{{% capture body %}} +The Kubernetes project provides skeleton cloud-controller-manager code with Go interfaces to allow you (or your cloud provider) to plug in your own implementations. This means that a cloud provider can implement a cloud-controller-manager by importing packages from Kubernetes core; each cloudprovider will register their own code by calling `cloudprovider.RegisterCloudProvider` to update a global variable of available cloud providers. ## Developing -### Out of Tree +### Out of tree -To build an out-of-tree cloud-controller-manager for your cloud, follow these steps: +To build an out-of-tree cloud-controller-manager for your cloud: 1. Create a go package with an implementation that satisfies [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go). -2. Use [main.go in cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/controller-manager.go) from Kubernetes core as a template for your main.go. As mentioned above, the only difference should be the cloud package that will be imported. -3. Import your cloud package in `main.go`, ensure your package has an `init` block to run [cloudprovider.RegisterCloudProvider](https://github.com/kubernetes/cloud-provider/blob/master/plugins.go). +2. Use [`main.go` in cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/controller-manager.go) from Kubernetes core as a template for your `main.go`. As mentioned above, the only difference should be the cloud package that will be imported. +3. Import your cloud package in `main.go`, ensure your package has an `init` block to run [`cloudprovider.RegisterCloudProvider`](https://github.com/kubernetes/cloud-provider/blob/master/plugins.go). -Using existing out-of-tree cloud providers as an example may be helpful. You can find the list [here](/docs/tasks/administer-cluster/running-cloud-controller.md#examples). +Many cloud providers publish their controller manager code as open source. 
If you are creating +a new cloud-controller-manager from scratch, you could take an existing out-of-tree cloud +controller manager as your starting point. -### In Tree +### In tree -For in-tree cloud providers, you can run the in-tree cloud controller manager as a [Daemonset](/examples/admin/cloud/ccm-example.yaml) in your cluster. See the [running cloud controller manager docs](/docs/tasks/administer-cluster/running-cloud-controller.md) for more details. +For in-tree cloud providers, you can run the in-tree cloud controller manager as a {{< glossary_tooltip term_id="daemonset" >}} in your cluster. See [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) for more details. {{% /capture %}} diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md index 0203cfa469f01..3a69bd84ec73b 100644 --- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -56,7 +56,7 @@ Take a look inside the resolv.conf file. [Known issues](#known-issues) below for more information) ```shell -kubectl exec dnsutils cat /etc/resolv.conf +kubectl exec -ti dnsutils -- cat /etc/resolv.conf ``` Verify that the search path and name server are set up like the following diff --git a/content/en/docs/tasks/administer-cluster/highly-available-master.md b/content/en/docs/tasks/administer-cluster/highly-available-master.md index 1a0a36d8753a6..e5529da7c74da 100644 --- a/content/en/docs/tasks/administer-cluster/highly-available-master.md +++ b/content/en/docs/tasks/administer-cluster/highly-available-master.md @@ -7,7 +7,7 @@ content_template: templates/task {{% capture overview %}} -{{< feature-state for_k8s_version="1.5" state="alpha" >}} +{{< feature-state for_k8s_version="v1.5" state="alpha" >}} You can replicate Kubernetes masters in `kube-up` or `kube-down` scripts for Google Compute Engine. This document describes how to use kube-up/down scripts to manage highly available (HA) masters and how HA masters are implemented for use with GCE. diff --git a/content/en/docs/tasks/administer-cluster/kms-provider.md b/content/en/docs/tasks/administer-cluster/kms-provider.md index 430312f8bc8e1..d90ca853cfb95 100644 --- a/content/en/docs/tasks/administer-cluster/kms-provider.md +++ b/content/en/docs/tasks/administer-cluster/kms-provider.md @@ -79,20 +79,20 @@ To encrypt the data: 1. Create a new encryption configuration file using the appropriate properties for the `kms` provider: -```yaml -apiVersion: apiserver.config.k8s.io/v1 -kind: EncryptionConfiguration -resources: - - resources: - - secrets - providers: - - kms: - name: myKmsPlugin - endpoint: unix:///tmp/socketfile.sock - cachesize: 100 - timeout: 3s - - identity: {} -``` + ```yaml + apiVersion: apiserver.config.k8s.io/v1 + kind: EncryptionConfiguration + resources: + - resources: + - secrets + providers: + - kms: + name: myKmsPlugin + endpoint: unix:///tmp/socketfile.sock + cachesize: 100 + timeout: 3s + - identity: {} + ``` 2. Set the `--encryption-provider-config` flag on the kube-apiserver to point to the location of the configuration file. 3. Restart your API server. @@ -135,22 +135,22 @@ To switch from a local encryption provider to the `kms` provider and re-encrypt 1. Add the `kms` provider as the first entry in the configuration file as shown in the following example. 
-```yaml -apiVersion: apiserver.config.k8s.io/v1 -kind: EncryptionConfiguration -resources: - - resources: - - secrets - providers: - - kms: - name : myKmsPlugin - endpoint: unix:///tmp/socketfile.sock - cachesize: 100 - - aescbc: - keys: - - name: key1 - secret: -``` + ```yaml + apiVersion: apiserver.config.k8s.io/v1 + kind: EncryptionConfiguration + resources: + - resources: + - secrets + providers: + - kms: + name : myKmsPlugin + endpoint: unix:///tmp/socketfile.sock + cachesize: 100 + - aescbc: + keys: + - name: key1 + secret: + ``` 2. Restart all kube-apiserver processes. @@ -165,24 +165,22 @@ To disable encryption at rest: 1. Place the `identity` provider as the first entry in the configuration file: -```yaml -apiVersion: apiserver.config.k8s.io/v1 -kind: EncryptionConfiguration -resources: - - resources: - - secrets - providers: - - identity: {} - - kms: - name : myKmsPlugin - endpoint: unix:///tmp/socketfile.sock - cachesize: 100 -``` + ```yaml + apiVersion: apiserver.config.k8s.io/v1 + kind: EncryptionConfiguration + resources: + - resources: + - secrets + providers: + - identity: {} + - kms: + name : myKmsPlugin + endpoint: unix:///tmp/socketfile.sock + cachesize: 100 + ``` 2. Restart all kube-apiserver processes. 3. Run the following command to force all secrets to be decrypted. ``` kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` {{% /capture %}} - - diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md index 54978dc55dfb4..28df69c13aecb 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md @@ -48,7 +48,7 @@ Once you have a Linux-based Kubernetes control-plane node you are ready to choos 1. Prepare Kubernetes control plane for Flannel - Some minor preparation is recommended on the Kubernetes control plane in our cluster. It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. This can be done using the following command: + Some minor preparation is recommended on the Kubernetes control plane in our cluster. It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. The following command must be run on all Linux nodes: ```bash sudo sysctl net.bridge.bridge-nf-call-iptables=1 @@ -114,11 +114,27 @@ Once you have a Linux-based Kubernetes control-plane node you are ready to choos curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/{{< param "fullversion" >}}/g' | kubectl apply -f - kubectl apply -f https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml ``` - {{< note >}} If you're using host-gateway use https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-host-gw.yml instead {{< /note >}} + {{< note >}} +If you're using a different interface rather than Ethernet (i.e. "Ethernet0 2") on the Windows nodes, you have to modify the line: + +```powershell +wins cli process run --path /k/flannel/setup.exe --args "--mode=overlay --interface=Ethernet" +``` + +in the `flannel-host-gw.yml` or `flannel-overlay.yml` file and specify your interface accordingly. 
+ +```bash +# Example +curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml | sed 's/Ethernet/Ethernet0 2/g' | kubectl apply -f - +``` + {{< /note >}} + + + ### Joining a Windows worker node {{< note >}} You must install the `Containers` feature and install Docker. Instructions diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 9fc79c1e12b1f..6e7cfe079dc20 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -50,18 +50,18 @@ The upgrade workflow at high level is the following: ## Determine which version to upgrade to -1. Find the latest stable 1.17 version: +1. Find the latest stable 1.18 version: {{< tabs name="k8s_install_versions" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} apt update apt-cache madison kubeadm - # find the latest 1.17 version in the list + # find the latest 1.18 version in the list # it should look like 1.18.x-00, where x is the latest patch {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} yum list --showduplicates kubeadm --disableexcludes=kubernetes - # find the latest 1.17 version in the list + # find the latest 1.18 version in the list # it should look like 1.18.x-0, where x is the latest patch {{% /tab %}} {{< /tabs >}} @@ -259,17 +259,17 @@ The upgrade workflow at high level is the following: 1. Same as the first control plane node but use: -``` -sudo kubeadm upgrade node -``` + ``` + sudo kubeadm upgrade node + ``` -instead of: + instead of: -``` -sudo kubeadm upgrade apply -``` + ``` + sudo kubeadm upgrade apply + ``` -Also `sudo kubeadm upgrade plan` is not needed. + Also `sudo kubeadm upgrade plan` is not needed. ### Upgrade kubelet and kubectl @@ -292,7 +292,7 @@ Also `sudo kubeadm upgrade plan` is not needed. {{% /tab %}} {{< /tabs >}} -1. Restart the kubelet +1. Restart the kubelet ```shell sudo systemctl restart kubelet @@ -370,7 +370,7 @@ without compromising the minimum required capacity for running your workloads. {{% /tab %}} {{< /tabs >}} -1. Restart the kubelet +1. Restart the kubelet ```shell sudo systemctl restart kubelet diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md index 6bdd9a1061e61..71cc28ff40bc5 100644 --- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md +++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md @@ -3,17 +3,17 @@ reviewers: - luxas - thockin - wlan0 -title: Kubernetes Cloud Controller Manager +title: Cloud Controller Manager Administration content_template: templates/concept --- {{% capture overview %}} -{{< feature-state state="beta" >}} +{{< feature-state state="beta" for_k8s_version="v1.11" >}} -Kubernetes v1.6 introduced a new binary called `cloud-controller-manager`. `cloud-controller-manager` is a daemon that embeds cloud-specific control loops. These cloud-specific control loops were originally in the `kube-controller-manager`. Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the `cloud-controller-manager` binary allows cloud vendors to evolve independently from the core Kubernetes code. 
+Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the {{< glossary_tooltip text="`cloud-controller-manager`" term_id="cloud-controller-manager" >}} binary allows cloud vendors to evolve independently from the core Kubernetes code. -The `cloud-controller-manager` can be linked to any cloud provider that satisfies [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go). For backwards compatibility, the [cloud-controller-manager](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager) provided in the core Kubernetes project uses the same cloud libraries as `kube-controller-manager`. Cloud providers already supported in Kubernetes core are expected to use the in-tree cloud-controller-manager to transition out of Kubernetes core. In future Kubernetes releases, all cloud controller managers will be developed outside of the core Kubernetes project managed by sig leads or cloud vendors. +The `cloud-controller-manager` can be linked to any cloud provider that satisfies [cloudprovider.Interface](https://github.com/kubernetes/cloud-provider/blob/master/cloud.go). For backwards compatibility, the [cloud-controller-manager](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager) provided in the core Kubernetes project uses the same cloud libraries as `kube-controller-manager`. Cloud providers already supported in Kubernetes core are expected to use the in-tree cloud-controller-manager to transition out of Kubernetes core. {{% /capture %}} @@ -43,11 +43,11 @@ Keep in mind that setting up your cluster to use cloud controller manager will c * cloud information about nodes in the cluster will no longer be retrieved using local metadata, but instead all API calls to retrieve node information will go through cloud controller manager. This may mean you can restrict access to your cloud API on the kubelets for better security. For larger clusters you may want to consider if cloud controller manager will hit rate limits since it is now responsible for almost all API calls to your cloud from within the cluster. -As of v1.8, cloud controller manager can implement: +The cloud controller manager can implement: -* node controller - responsible for updating kubernetes nodes using cloud APIs and deleting kubernetes nodes that were deleted on your cloud. -* service controller - responsible for loadbalancers on your cloud against services of type LoadBalancer. -* route controller - responsible for setting up network routes on your cloud +* Node controller - responsible for updating kubernetes nodes using cloud APIs and deleting kubernetes nodes that were deleted on your cloud. +* Service controller - responsible for loadbalancers on your cloud against services of type LoadBalancer. +* Route controller - responsible for setting up network routes on your cloud * any other features you would like to implement if you are running an out-of-tree provider. @@ -55,14 +55,9 @@ As of v1.8, cloud controller manager can implement: If you are using a cloud that is currently supported in Kubernetes core and would like to adopt cloud controller manager, see the [cloud controller manager in kubernetes core](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager). -For cloud controller managers not in Kubernetes core, you can find the respective projects in repos maintained by cloud vendors or sig leads. 
+For cloud controller managers not in Kubernetes core, you can find the respective projects in repositories maintained by cloud vendors or by SIGs. -* [DigitalOcean](https://github.com/digitalocean/digitalocean-cloud-controller-manager) -* [keepalived](https://github.com/munnerz/keepalived-cloud-provider) -* [Oracle Cloud Infrastructure](https://github.com/oracle/oci-cloud-controller-manager) -* [Rancher](https://github.com/rancher/rancher-cloud-controller-manager) - -For providers already in Kubernetes core, you can run the in-tree cloud controller manager as a Daemonset in your cluster, use the following as a guideline: +For providers already in Kubernetes core, you can run the in-tree cloud controller manager as a DaemonSet in your cluster, use the following as a guideline: {{< codenew file="admin/cloud/ccm-example.yaml" >}} @@ -77,18 +72,19 @@ Cloud controller manager does not implement any of the volume controllers found ### Scalability -In the previous architecture for cloud providers, we relied on kubelets using a local metadata service to retrieve node information about itself. With this new architecture, we now fully rely on the cloud controller managers to retrieve information for all nodes. For very large clusters, you should consider possible bottle necks such as resource requirements and API rate limiting. +The cloud-controller-manager queries your cloud provider's APIs to retrieve information for all nodes. For very large clusters, consider possible bottlenecks such as resource requirements and API rate limiting. ### Chicken and Egg The goal of the cloud controller manager project is to decouple development of cloud features from the core Kubernetes project. Unfortunately, many aspects of the Kubernetes project has assumptions that cloud provider features are tightly integrated into the project. As a result, adopting this new architecture can create several situations where a request is being made for information from a cloud provider, but the cloud controller manager may not be able to return that information without the original request being complete. -A good example of this is the TLS bootstrapping feature in the Kubelet. Currently, TLS bootstrapping assumes that the Kubelet has the ability to ask the cloud provider (or a local metadata service) for all its address types (private, public, etc) but cloud controller manager cannot set a node's address types without being initialized in the first place which requires that the kubelet has TLS certificates to communicate with the apiserver. +A good example of this is the TLS bootstrapping feature in the Kubelet. TLS bootstrapping assumes that the Kubelet has the ability to ask the cloud provider (or a local metadata service) for all its address types (private, public, etc) but cloud controller manager cannot set a node's address types without being initialized in the first place which requires that the kubelet has TLS certificates to communicate with the apiserver. As this initiative evolves, changes will be made to address these issues in upcoming releases. -## Developing your own Cloud Controller Manager +{{% /capture %}} +{{% capture whatsnext %}} -To build and develop your own cloud controller manager, read the [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager.md) doc. +To build and develop your own cloud controller manager, read [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/). 
{{% /capture %}} diff --git a/content/en/docs/tasks/administer-cluster/topology-manager.md b/content/en/docs/tasks/administer-cluster/topology-manager.md index 2e37830e411b8..156146e1c2d69 100644 --- a/content/en/docs/tasks/administer-cluster/topology-manager.md +++ b/content/en/docs/tasks/administer-cluster/topology-manager.md @@ -6,6 +6,7 @@ reviewers: - klueska - lmdaly - nolancon +- bg-chun content_template: templates/task min-kubernetes-server-version: v1.18 @@ -13,7 +14,7 @@ min-kubernetes-server-version: v1.18 {{% capture overview %}} -{{< feature-state state="beta" >}} +{{< feature-state state="beta" for_k8s_version="v1.18" >}} An increasing number of systems leverage a combination of CPUs and hardware accelerators to support latency-critical execution and high-throughput parallel computation. These include workloads in fields such as telecommunications, scientific computing, machine learning, financial services and data analytics. Such hybrid systems comprise a high performance environment. @@ -53,11 +54,16 @@ Support for the Topology Manager requires `TopologyManager` [feature gate](/docs The Topology Manager currently: - - Works on Nodes with the `static` CPU Manager Policy enabled. See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/) - - Works on Pods making CPU requests or Device requests via extended resources + - Aligns Pods of all QoS classes. + - Aligns the requested resources that Hint Provider provides topology hints for. If these conditions are met, Topology Manager will align the requested resources. +{{< note >}} +To align CPU resources with other requested resources in a Pod Spec, the CPU Manager should be enabled and proper CPU Manager policy should be configured on a Node. See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/). +{{< /note >}} + + Topology Manager supports four allocation policies. You can set a policy via a Kubelet flag, `--topology-manager-policy`. There are four supported policies: @@ -72,7 +78,7 @@ This is the default policy and does not perform any topology alignment. ### best-effort policy {#policy-best-effort} -For each container in a Guaranteed Pod, kubelet, with `best-effort` topology +For each container in a Pod, the kubelet, with `best-effort` topology management policy, calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, @@ -83,7 +89,7 @@ resource allocation decision. ### restricted policy {#policy-restricted} -For each container in a Guaranteed Pod, kubelet, with `restricted` topology +For each container in a Pod, the kubelet, with `restricted` topology management policy, calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, @@ -97,7 +103,7 @@ resource allocation decision. ### single-numa-node policy {#policy-single-numa-node} -For each container in a Guaranteed Pod, kubelet, with `single-numa-node` topology +For each container in a Pod, the kubelet, with `single-numa-node` topology management policy, calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. 
If it is, Topology Manager will store this and the *Hint Providers* can then use this information when making the @@ -135,8 +141,7 @@ spec: This pod runs in the `Burstable` QoS class because requests are less than limits. -If the selected policy is anything other than `none` , Topology Manager would not consider either of these Pod -specifications. +If the selected policy is anything other than `none`, Topology Manager would consider these Pod specifications. The Topology Manager would consult the Hint Providers to get topology hints. In the case of the `static`, the CPU Manager policy would return default topology hint, because these Pods do not have explicity request CPU resources. ```yaml @@ -155,7 +160,26 @@ spec: example.com/device: "1" ``` -This pod runs in the `Guaranteed` QoS class because `requests` are equal to `limits`. +This pod with integer CPU request runs in the `Guaranteed` QoS class because `requests` are equal to `limits`. + + +```yaml +spec: + containers: + - name: nginx + image: nginx + resources: + limits: + memory: "200Mi" + cpu: "300m" + example.com/device: "1" + requests: + memory: "200Mi" + cpu: "300m" + example.com/device: "1" +``` + +This pod with sharing CPU request runs in the `Guaranteed` QoS class because `requests` are equal to `limits`. ```yaml @@ -173,10 +197,15 @@ spec: ``` This pod runs in the `BestEffort` QoS class because there are no CPU and memory requests. -The Topology Manager would consider both of the above pods. The Topology Manager would consult the Hint Providers, which are CPU and Device Manager to get topology hints for the pods. -In the case of the `Guaranteed` pod the `static` CPU Manager policy would return hints relating to the CPU request and the Device Manager would send back hints for the requested device. +The Topology Manager would consider the above pods. The Topology Manager would consult the Hint Providers, which are CPU and Device Manager to get topology hints for the pods. + +In the case of the `Guaranteed` pod with integer CPU request, the `static` CPU Manager policy would return topology hints relating to the exclusive CPU and the Device Manager would send back hints for the requested device. + +In the case of the `Guaranteed` pod with sharing CPU request, the `static` CPU Manager policy would return default topology hint as there is no exclusive CPU request and the Device Manager would send back hints for the requested device. + +In the above two cases of the `Guaranteed` pod, the `none` CPU Manager policy would return default topology hint. -In the case of the `BestEffort` pod the CPU Manager would send back the default hint as there is no CPU request and the Device Manager would send back the hints for each of the requested devices. +In the case of the `BestEffort` pod, the `static` CPU Manager policy would send back the default topology hint as there is no CPU request and the Device Manager would send back the hints for each of the requested devices. Using this information the Topology Manager calculates the optimal hint for the pod and stores this information, which will be used by the Hint Providers when they are making their resource assignments. 
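To tie this back to configuration: the policy in these examples is selected with the `--topology-manager-policy` kubelet flag mentioned above. The sketch below assumes a kubelet that reads extra flags from `/etc/default/kubelet` (the path and mechanism vary by distribution and install method), so treat it as an illustration rather than a drop-in change.

```shell
# Illustration only: append the policy flag to the kubelet's extra arguments
# (check first that KUBELET_EXTRA_ARGS is not already defined), then restart.
echo 'KUBELET_EXTRA_ARGS="--topology-manager-policy=single-numa-node"' | sudo tee -a /etc/default/kubelet
sudo systemctl restart kubelet
```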
diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index d4b306f02bbfc..19b077ab35ef9 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -49,7 +49,7 @@ In this exercise, you create a Pod that runs a container based on the In the configuration file, you can see that the Pod has a single `Container`. The `periodSeconds` field specifies that the kubelet should perform a liveness probe every 5 seconds. The `initialDelaySeconds` field tells the kubelet that it -should wait 5 second before performing the first probe. To perform a probe, the +should wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command `cat /tmp/healthy` in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application.md b/content/en/docs/tasks/debug-application-cluster/debug-application.md index f63173e334841..08f0fad008ba4 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application.md @@ -65,7 +65,7 @@ Again, the information from `kubectl describe ...` should be informative. The m #### My pod is crashing or otherwise unhealthy Once your pod has been scheduled, the methods described in [Debug Running Pods]( -/docs/tasks/debug-application-cluster/debug-running-pods/) are available for debugging. +/docs/tasks/debug-application-cluster/debug-running-pod/) are available for debugging. #### My pod is running but not doing what I told it to do diff --git a/content/en/docs/tasks/debug-application-cluster/debug-cluster.md b/content/en/docs/tasks/debug-application-cluster/debug-cluster.md index 495545ee1ac1e..473f364361e58 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-cluster.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-cluster.md @@ -124,7 +124,4 @@ This is an incomplete list of things that could go wrong, and how to adjust your - Mitigates: Node shutdown - Mitigates: Kubelet software fault -- Action: [Multiple independent clusters](/docs/concepts/cluster-administration/federation/) (and avoid making risky changes to all clusters at once) - - Mitigates: Everything listed above. - {{% /capture %}} diff --git a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index ec84b82fd02fb..28c9885e57c82 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -94,7 +94,7 @@ worker node, but it can't run on that machine. Again, the information from ### My pod is crashing or otherwise unhealthy Once your pod has been scheduled, the methods described in [Debug Running Pods]( -/docs/tasks/debug-application-cluster/debug-running-pods/) are available for debugging. +/docs/tasks/debug-application-cluster/debug-running-pod/) are available for debugging. 
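As a quick illustration of the kind of debugging that page covers, a few commonly used commands follow; `my-pod` is a placeholder Pod name, not something created in this task:

```shell
# Inspect the Pod's state, recent events, and restart reasons
kubectl describe pod my-pod

# Fetch logs from the current container, and from the previous (crashed) instance
kubectl logs my-pod
kubectl logs my-pod --previous

# List cluster events that mention this Pod
kubectl get events --field-selector involvedObject.name=my-pod
```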
## Debugging ReplicationControllers diff --git a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md index 95065ca595c6d..a812640555aad 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md @@ -107,7 +107,7 @@ If you attempt to use `kubectl exec` to create a shell you will see an error because there is no shell in this container image. ```shell -kubectl exec -it pause -- sh +kubectl exec -it ephemeral-demo -- sh ``` ``` diff --git a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md index 072070ab663cf..547790e5b051b 100644 --- a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md +++ b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md @@ -52,7 +52,7 @@ Memory is reported as the working set, in bytes, at the instant the metric was c [Metrics Server](https://github.com/kubernetes-incubator/metrics-server) is a cluster-wide aggregator of resource usage data. It is deployed by default in clusters created by `kube-up.sh` script as a Deployment object. If you use a different Kubernetes setup mechanism you can deploy it using the provided -[deployment yamls](https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy). +[deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases) file. Metric server collects metrics from the Summary API, exposed by [Kubelet](/docs/admin/kubelet/) on each node. diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index 42c47a43b5fb4..ae5b6633ad8f4 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -49,12 +49,6 @@ The output is similar to this: cronjob.batch/hello created ``` -Alternatively, you can use `kubectl run` to create a cron job without writing a full config: - -```shell -kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster" -``` - After creating the cron job, get its status using this command: ```shell diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md index ba0dcf96054e9..e2d0975a70891 100644 --- a/content/en/docs/tasks/job/parallel-processing-expansion.md +++ b/content/en/docs/tasks/job/parallel-processing-expansion.md @@ -1,52 +1,70 @@ --- title: Parallel Processing using Expansions -content_template: templates/concept +content_template: templates/task min-kubernetes-server-version: v1.8 weight: 20 --- {{% capture overview %}} -In this example, we will run multiple Kubernetes Jobs created from -a common template. You may want to be familiar with the basic, -non-parallel, use of [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) first. +This task demonstrates running multiple {{< glossary_tooltip text="Jobs" term_id="job" >}} +based on a common template. You can use this approach to process batches of work in +parallel. +For this example there are only three items: _apple_, _banana_, and _cherry_. +The sample Jobs process each item simply by printing a string then pausing. 
+ +See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how +this pattern fits more realistic use cases. {{% /capture %}} +{{% capture prerequisites %}} -{{% capture body %}} +You should be familiar with the basic, +non-parallel, use of [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). -## Basic Template Expansion +{{< include "task-tutorial-prereqs.md" >}} -First, download the following template of a job to a file called `job-tmpl.yaml` +For basic templating you need the command-line utility `sed`. -{{< codenew file="application/job/job-tmpl.yaml" >}} +To follow the advanced templating example, you need a working installation of +[Python](https://www.python.org/), and the Jinja2 template +library for Python. + +Once you have Python set up, you can install Jinja2 by running: +```shell +pip install --user jinja2 +``` +{{% /capture %}} -Unlike a *pod template*, our *job template* is not a Kubernetes API type. It is just -a yaml representation of a Job object that has some placeholders that need to be filled -in before it can be used. The `$ITEM` syntax is not meaningful to Kubernetes. -In this example, the only processing the container does is to `echo` a string and sleep for a bit. -In a real use case, the processing would be some substantial computation, such as rendering a frame -of a movie, or processing a range of rows in a database. The `$ITEM` parameter would specify for -example, the frame number or the row range. +{{% capture steps %}} -This Job and its Pod template have a label: `jobgroup=jobexample`. There is nothing special -to the system about this label. This label -makes it convenient to operate on all the jobs in this group at once. -We also put the same label on the pod template so that we can check on all Pods of these Jobs -with a single command. -After the job is created, the system will add more labels that distinguish one Job's pods -from another Job's pods. -Note that the label key `jobgroup` is not special to Kubernetes. You can pick your own label scheme. +## Create Jobs based on a template -Next, expand the template into multiple files, one for each item to be processed. +First, download the following template of a Job to a file called `job-tmpl.yaml`. +Here's what you'll download: + +{{< codenew file="application/job/job-tmpl.yaml" >}} ```shell -# Download job-templ.yaml +# Use curl to download job-tmpl.yaml curl -L -s -O https://k8s.io/examples/application/job/job-tmpl.yaml +``` + +The file you downloaded is not yet a valid Kubernetes +{{< glossary_tooltip text="manifest" term_id="manifest" >}}. +Instead that template is a YAML representation of a Job object with some placeholders +that need to be filled in before it can be used. The `$ITEM` syntax is not meaningful to Kubernetes. + -# Expand files into a temporary directory +### Create manifests from the template + +The following shell snippet uses `sed` to replace the string `$ITEM` with the loop +variable, writing into a temporary directory named `jobs`. Run this now: + +```shell +# Expand the template into multiple files, one for each item to be processed. mkdir ./jobs for i in apple banana cherry do @@ -68,11 +86,12 @@ job-banana.yaml job-cherry.yaml ``` -Here, we used `sed` to replace the string `$ITEM` with the loop variable. -You could use any type of template language (jinja2, erb) or write a program -to generate the Job objects. +You could use any type of template language (for example: Jinja2; ERB), or +write a program to generate the Job manifests. 
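As a sketch of the "write a program" option mentioned later in this task, the following small Python helper (hypothetical, not part of the k8s.io examples) performs the same `$ITEM` expansion as the `sed` loop used below:

```python
# generate_jobs.py - hypothetical helper; substitutes $ITEM in job-tmpl.yaml
# and writes one manifest per item into ./jobs/
from pathlib import Path

template = Path("job-tmpl.yaml").read_text()
out_dir = Path("jobs")
out_dir.mkdir(exist_ok=True)

for item in ["apple", "banana", "cherry"]:
    manifest = template.replace("$ITEM", item)
    (out_dir / f"job-{item}.yaml").write_text(manifest)
```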
-Next, create all the jobs with one kubectl command: +### Create Jobs from the manifests + +Next, create all the Jobs with one kubectl command: ```shell kubectl create -f ./jobs @@ -96,22 +115,23 @@ The output is similar to this: ``` NAME COMPLETIONS DURATION AGE -process-item-apple 1/1 14s 20s -process-item-banana 1/1 12s 20s +process-item-apple 1/1 14s 22s +process-item-banana 1/1 12s 21s process-item-cherry 1/1 12s 20s ``` -Here we use the `-l` option to select all jobs that are part of this -group of jobs. (There might be other unrelated jobs in the system that we -do not care to see.) +Using the `-l` option to kubectl selects only the Jobs that are part +of this group of jobs (there might be other unrelated jobs in the system). + +You can check on the Pods as well using the same +{{< glossary_tooltip text="label selector" term_id="selector" >}}: -We can check on the pods as well using the same label selector: ```shell kubectl get pods -l jobgroup=jobexample ``` -The output is similar to this: +The output is similar to: ``` NAME READY STATUS RESTARTS AGE @@ -126,7 +146,7 @@ We can use this single command to check on the output of all jobs at once: kubectl logs -f -l jobgroup=jobexample ``` -The output is: +The output should be: ``` Processing item apple @@ -134,26 +154,40 @@ Processing item banana Processing item cherry ``` -## Multiple Template Parameters +### Clean up {#cleanup-1} + +```shell +# Remove the Jobs you created +# Your cluster automatically cleans up their Pods +kubectl delete job -l jobgroup=jobexample +``` + +## Use advanced template parameters + +In the [first example](#create-jobs-based-on-a-template), each instance of the template had one +parameter, and that parameter was also used in the Job's name. However, +[names](/docs/concepts/overview/working-with-objects/names/#names) are restricted +to contain only certain characters. -In the first example, each instance of the template had one parameter, and that parameter was also -used as a label. However label keys are limited in [what characters they can -contain](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). +This slightly more complex example uses the +[Jinja template language](https://palletsprojects.com/p/jinja/) to generate manifests +and then objects from those manifests, with multiple parameters for each Job. -This slightly more complex example uses the jinja2 template language to generate our objects. -We will use a one-line python script to convert the template to a file. +For this part of the task, you are going to use a one-line Python script to +convert the template to a set of manifests.
First, copy and paste the following template of a Job object, into a file called `job.yaml.jinja2`: ```liquid -{%- set params = [{ "name": "apple", "url": "https://www.orangepippin.com/varieties/apples", }, - { "name": "banana", "url": "https://en.wikipedia.org/wiki/Banana", }, - { "name": "raspberry", "url": "https://www.raspberrypi.org/" }] +{%- set params = [{ "name": "apple", "url": "http://dbpedia.org/resource/Apple", }, + { "name": "banana", "url": "http://dbpedia.org/resource/Banana", }, + { "name": "cherry", "url": "http://dbpedia.org/resource/Cherry" }] %} {%- for p in params %} {%- set name = p["name"] %} {%- set url = p["url"] %} +--- apiVersion: batch/v1 kind: Job metadata: @@ -172,51 +206,108 @@ spec: image: busybox command: ["sh", "-c", "echo Processing URL {{ url }} && sleep 5"] restartPolicy: Never ---- {%- endfor %} - ``` -The above template defines parameters for each job object using a list of -python dicts (lines 1-4). Then a for loop emits one job yaml object -for each set of parameters (remaining lines). -We take advantage of the fact that multiple yaml documents can be concatenated -with the `---` separator (second to last line). -.) We can pipe the output directly to kubectl to -create the objects. +The above template defines two parameters for each Job object using a list of +python dicts (lines 1-4). A `for` loop emits one Job manifest for each +set of parameters (remaining lines). -You will need the jinja2 package if you do not already have it: `pip install --user jinja2`. -Now, use this one-line python program to expand the template: +This example relies on a feature of YAML. One YAML file can contain multiple +documents (Kubernetes manifests, in this case), separated by `---` on a line +by itself. +You can pipe the output directly to `kubectl` to create the Jobs. + +Next, use this one-line Python program to expand the template: ```shell alias render_template='python -c "from jinja2 import Template; import sys; print(Template(sys.stdin.read()).render());"' ``` - - -The output can be saved to a file, like this: +Use `render_template` to convert the parameters and template into a single +YAML file containing Kubernetes manifests: ```shell +# This requires the alias you defined earlier cat job.yaml.jinja2 | render_template > jobs.yaml ``` -Or sent directly to kubectl, like this: +You can view `jobs.yaml` to verify that the `render_template` script worked +correctly. + +Once you are happy that `render_template` is working how you intend, +you can pipe its output into `kubectl`: ```shell cat job.yaml.jinja2 | render_template | kubectl apply -f - ``` +Kubernetes accepts and runs the Jobs you created. + +### Clean up {#cleanup-2} + +```shell +# Remove the Jobs you created +# Your cluster automatically cleans up their Pods +kubectl delete job -l jobgroup=jobexample +``` + +{{% /capture %}} +{{% capture discussion %}} + +## Using Jobs in real workloads + +In a real use case, each Job performs some substantial computation, such as rendering a frame +of a movie, or processing a range of rows in a database. If you were rendering a movie +you would set `$ITEM` to the frame number. If you were processing rows from a database +table, you would set `$ITEM` to represent the range of database rows to process. + +In the task, you ran a command to collect the output from Pods by fetching +their logs. In a real use case, each Pod for a Job writes its output to +durable storage before completing. You can use a PersistentVolume for each Job, +or an external storage service. 
For example, if you are rendering frames for a movie, +use HTTP to `PUT` the rendered frame data to a URL, using a different URL for each +frame. + +## Labels on Jobs and Pods + +After you create a Job, Kubernetes automatically adds additional +{{< glossary_tooltip text="labels" term_id="label" >}} that +distinguish one Job's pods from another Job's pods. + +In this example, each Job and its Pod template have a label: +`jobgroup=jobexample`. + +Kubernetes itself pays no attention to labels named `jobgroup`. Setting a label +for all the Jobs you create from a template makes it convenient to operate on all +those Jobs at once. +In the [first example](#create-jobs-based-on-a-template) you used a template to +create several Jobs. The template ensures that each Pod also gets the same label, so +you can check on all Pods for these templated Jobs with a single command. + +{{< note >}} +The label key `jobgroup` is not special or reserved. +You can pick your own labelling scheme. +There are [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels) +that you can use if you wish. +{{< /note >}} + ## Alternatives -If you have a large number of job objects, you may find that: +If you plan to create a large number of Job objects, you may find that: -- Even using labels, managing so many Job objects is cumbersome. -- You exceed resource quota when creating all the Jobs at once, - and do not want to wait to create them incrementally. -- Very large numbers of jobs created at once overload the - Kubernetes apiserver, controller, or scheduler. +- Even using labels, managing so many Jobs is cumbersome. +- If you create many Jobs in a batch, you might place high load + on the Kubernetes control plane. Alternatively, the Kubernetes API + server could rate limit you, temporarily rejecting your requests with a 429 status. +- You are limited by a {{< glossary_tooltip text="resource quota" term_id="resource-quota" >}} + on Jobs: the API server permanently rejects some of your requests + when you create a great deal of work in one batch. -In this case, you can consider one of the -other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns). +There are other [job patterns](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns) +that you can use to process large amounts of work without creating very many Job +objects. +You could also consider writing your own [controller](/docs/concepts/architecture/controller/) +to manage Job objects automatically. {{% /capture %}} diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index 8eec3c17818cc..8e32763e018ab 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -43,21 +43,39 @@ To enable the rolling update feature of a DaemonSet, you must set its You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well. 
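For orientation, this excerpt shows roughly where those fields sit in a DaemonSet spec; the values shown are the defaults mentioned above, and the full manifest used in this task follows below:

```yaml
# Excerpt only - not a complete DaemonSet manifest
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # default
  minReadySeconds: 0      # default
```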
+### Creating a DaemonSet with `RollingUpdate` update strategy -### Step 1: Checking DaemonSet `RollingUpdate` update strategy +This YAML file specifies a DaemonSet with an update strategy as 'RollingUpdate' -First, check the update strategy of your DaemonSet, and make sure it's set to +{{< codenew file="controllers/fluentd-daemonset.yaml" >}} + +After verifying the update strategy of the DaemonSet manifest, create the DaemonSet: + +```shell +kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml +``` + +Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to +update the DaemonSet with `kubectl apply`. + +```shell +kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml +``` + +### Checking DaemonSet `RollingUpdate` update strategy + +Check the update strategy of your DaemonSet, and make sure it's set to `RollingUpdate`: ```shell -kubectl get ds/ -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' +kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system ``` If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead: ```shell -kubectl apply -f ds.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' +kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' ``` The output from both commands should be: @@ -69,28 +87,13 @@ RollingUpdate If the output isn't `RollingUpdate`, go back and modify the DaemonSet object or manifest accordingly. -### Step 2: Creating a DaemonSet with `RollingUpdate` update strategy -If you have already created the DaemonSet, you may skip this step and jump to -step 3. - -After verifying the update strategy of the DaemonSet manifest, create the DaemonSet: - -```shell -kubectl create -f ds.yaml -``` - -Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to -update the DaemonSet with `kubectl apply`. - -```shell -kubectl apply -f ds.yaml -``` - -### Step 3: Updating a DaemonSet template +### Updating a DaemonSet template Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling -update. This can be done with several different `kubectl` commands. +update. Let's update the DaemonSet by applying a new YAML file. This can be done with several different `kubectl` commands. + +{{< codenew file="controllers/fluentd-daemonset-update.yaml" >}} #### Declarative commands @@ -99,21 +102,17 @@ If you update DaemonSets using use `kubectl apply`: ```shell -kubectl apply -f ds-v2.yaml +kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml ``` #### Imperative commands If you update DaemonSets using [imperative commands](/docs/tasks/manage-kubernetes-objects/imperative-command/), -use `kubectl edit` or `kubectl patch`: - -```shell -kubectl edit ds/ -``` +use `kubectl edit` : ```shell -kubectl patch ds/ -p= +kubectl edit ds/fluentd-elasticsearch -n kube-system ``` ##### Updating only the container image @@ -122,21 +121,21 @@ If you just need to update the container image in the DaemonSet template, i.e. 
`.spec.template.spec.containers[*].image`, use `kubectl set image`: ```shell -kubectl set image ds/ = +kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system ``` -### Step 4: Watching the rolling update status +### Watching the rolling update status Finally, watch the rollout status of the latest DaemonSet rolling update: ```shell -kubectl rollout status ds/ +kubectl rollout status ds/fluentd-elasticsearch -n kube-system ``` When the rollout is complete, the output is similar to this: ```shell -daemonset "" successfully rolled out +daemonset "fluentd-elasticsearch" successfully rolled out ``` ## Troubleshooting @@ -156,7 +155,7 @@ When this happens, find the nodes that don't have the DaemonSet pods scheduled o by comparing the output of `kubectl get nodes` and the output of: ```shell -kubectl get pods -l = -o wide +kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system ``` Once you've found those nodes, delete some non-DaemonSet pods from the node to @@ -183,6 +182,13 @@ If `.spec.minReadySeconds` is specified in the DaemonSet, clock skew between master and nodes will make DaemonSet unable to detect the right rollout progress. +## Clean up + +Delete DaemonSet from a namespace : + +```shell +kubectl delete ds fluentd-elasticsearch -n kube-system +``` {{% /capture %}} diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 97500e6bebce1..4c0b9f9bc376b 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -7,7 +7,7 @@ title: Schedule GPUs {{% capture overview %}} -{{< feature-state state="beta" for_k8s_version="1.10" >}} +{{< feature-state state="beta" for_k8s_version="v1.10" >}} Kubernetes includes **experimental** support for managing AMD and NVIDIA GPUs (graphical processing units) across several nodes. diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 800abfea69718..71357f34164ef 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -119,7 +119,7 @@ kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/ Hit enter for command prompt -while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done +while true; do wget -q -O- http://php-apache; done ``` Within a minute or so, we should see the higher CPU load by executing: @@ -203,7 +203,6 @@ apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: php-apache - namespace: default spec: scaleTargetRef: apiVersion: apps/v1 @@ -239,7 +238,7 @@ the only other supported resource metric is memory. These resources do not chan to cluster, and should always be available, as long as the `metrics.k8s.io` API is available. You can also specify resource metrics in terms of direct values, instead of as percentages of the -requested value, by using a `target` type of `AverageValue` instead of `AverageUtilization`, and +requested value, by using a `target.type` of `AverageValue` instead of `Utilization`, and setting the corresponding `target.averageValue` field instead of the `target.averageUtilization`. 
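For example, here is a sketch of a `metrics` entry that targets an absolute per-Pod average instead of a utilization percentage (the `500m` figure is only illustrative):

```yaml
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: AverageValue
      averageValue: 500m
```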
There are two other types of metrics, both of which are considered *custom metrics*: pod metrics and @@ -296,7 +295,6 @@ apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: php-apache - namespace: default spec: scaleTargetRef: apiVersion: apps/v1 diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index b5f7612d75439..f6852845ec18d 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -75,7 +75,7 @@ metrics-server, which needs to be launched separately. See for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster. {{< note >}} -{{< feature-state state="deprecated" for_k8s_version="1.11" >}} +{{< feature-state state="deprecated" for_k8s_version="v1.11" >}} Fetching metrics from Heapster is deprecated as of Kubernetes 1.11. {{< /note >}} @@ -291,7 +291,7 @@ the `v2beta2` API allows scaling behavior to be configured through the HPA `behavior` field. Behaviors are specified separately for scaling up and down in `scaleUp` or `scaleDown` section under the `behavior` field. A stabilization window can be specified for both directions which prevents the flapping of the -number of the replicas in the scaling target. Similarly specifing scaling +number of the replicas in the scaling target. Similarly specifying scaling policies controls the rate of change of replicas while scaling. ### Scaling Policies @@ -332,7 +332,7 @@ scaling in that direction. ### Stabilization Window -The stabilization window is used to retrict the flapping of replicas when the metrics +The stabilization window is used to restrict the flapping of replicas when the metrics used for scaling keep fluctuating. The stabilization window is used by the autoscaling algorithm to consider the computed desired state from the past to prevent scaling. In the following example the stabilization window is specified for `scaleDown`. diff --git a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md index c9e0aebd51373..68d41b5a837e2 100644 --- a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md +++ b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md @@ -88,16 +88,16 @@ a Deployment that runs the nginx:1.14.2 Docker image: nginx-deployment-1771418926-7o5ns 1/1 Running 0 16h nginx-deployment-1771418926-r18az 1/1 Running 0 16h -1. Display information about a pod: +1. Display information about a Pod: kubectl describe pod - where `` is the name of one of your pods. + where `` is the name of one of your Pods. ## Updating the deployment You can update the deployment by applying a new YAML file. This YAML file -specifies that the deployment should be updated to use nginx 1.8. +specifies that the deployment should be updated to use nginx 1.16.1. 
{{< codenew file="application/deployment-update.yaml" >}} diff --git a/content/en/docs/tasks/setup-konnectivity/_index.md b/content/en/docs/tasks/setup-konnectivity/_index.md new file mode 100755 index 0000000000000..09f254eba0d6f --- /dev/null +++ b/content/en/docs/tasks/setup-konnectivity/_index.md @@ -0,0 +1,5 @@ +--- +title: "Setup Konnectivity Service" +weight: 20 +--- + diff --git a/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md new file mode 100644 index 0000000000000..b5dbd052152b5 --- /dev/null +++ b/content/en/docs/tasks/setup-konnectivity/setup-konnectivity.md @@ -0,0 +1,52 @@ +--- +title: Set up Konnectivity service +content_template: templates/task +weight: 70 +--- + +{{% capture overview %}} + +The Konnectivity service provides TCP level proxy for the Master → Cluster +communication. + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} + +{{% /capture %}} + +{{% capture steps %}} + +## Configure the Konnectivity service + +First, you need to configure the API Server to use the Konnectivity service +to direct its network traffic to cluster nodes: + +1. Set the `--egress-selector-config-file` flag of the API Server, it is the +path to the API Server egress configuration file. +1. At the path, create a configuration file. For example, + +{{< codenew file="admin/konnectivity/egress-selector-configuration.yaml" >}} + +Next, you need to deploy the Konnectivity server and agents. +[kubernetes-sigs/apiserver-network-proxy](https://github.com/kubernetes-sigs/apiserver-network-proxy) +is a reference implementation. + +Deploy the Konnectivity server on your master node. The provided yaml assumes +that the Kubernetes components are deployed as a {{< glossary_tooltip text="static Pod" +term_id="static-pod" >}} in your cluster. If not, you can deploy the Konnectivity +server as a DaemonSet. + +{{< codenew file="admin/konnectivity/konnectivity-server.yaml" >}} + +Then deploy the Konnectivity agents in your cluster: + +{{< codenew file="admin/konnectivity/konnectivity-agent.yaml" >}} + +Last, if RBAC is enabled in your cluster, create the relevant RBAC rules: + +{{< codenew file="admin/konnectivity/konnectivity-rbac.yaml" >}} + +{{% /capture %}} \ No newline at end of file diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md index b0dd0f4ecf801..7cd4cc8be5866 100644 --- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md +++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md @@ -208,7 +208,7 @@ the CSR and otherwise should deny the CSR. ## A Word of Warning on the Approval Permission -The ability to approve CSRs decides who trusts who within your environment. The +The ability to approve CSRs decides who trusts whom within your environment. The ability to approve CSRs should not be granted broadly or lightly. 
The requirements of the challenge noted in the previous section and the repercussions of issuing a specific certificate should be fully understood diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md index 83c9d1761bca4..6d9ffdcd0a2ff 100644 --- a/content/en/docs/tasks/tools/install-kubectl.md +++ b/content/en/docs/tasks/tools/install-kubectl.md @@ -59,7 +59,7 @@ You must use a kubectl version that is within one minor version difference of yo {{< tabs name="kubectl_install" >}} {{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}} -sudo apt-get update && sudo apt-get install -y apt-transport-https +sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list sudo apt-get update @@ -87,7 +87,7 @@ If you are on Ubuntu or another Linux distribution that support [snap](https://s ```shell snap install kubectl --classic -kubectl version +kubectl version --client ``` {{% /tab %}} {{% tab name="Homebrew" %}} @@ -95,7 +95,7 @@ If you are on Linux and using [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) ```shell brew install kubectl -kubectl version +kubectl version --client ``` {{% /tab %}} {{< /tabs >}} @@ -419,7 +419,7 @@ You can test if you have bash-completion v2 already installed with `type _init_c brew install bash-completion@2 ``` -As stated in the output of this command, add the following to your `~/.bashrc` file: +As stated in the output of this command, add the following to your `~/.bash_profile` file: ```shell export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" @@ -432,10 +432,10 @@ Reload your shell and verify that bash-completion v2 is correctly installed with You now have to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways to achieve this: -- Source the completion script in your `~/.bashrc` file: +- Source the completion script in your `~/.bash_profile` file: ```shell - echo 'source <(kubectl completion bash)' >>~/.bashrc + echo 'source <(kubectl completion bash)' >>~/.bash_profile ``` @@ -448,8 +448,8 @@ You now have to ensure that the kubectl completion script gets sourced in all yo - If you have an alias for kubectl, you can extend shell completion to work with that alias: ```shell - echo 'alias k=kubectl' >>~/.bashrc - echo 'complete -F __start_kubectl k' >>~/.bashrc + echo 'alias k=kubectl' >>~/.bash_profile + echo 'complete -F __start_kubectl k' >>~/.bash_profile ``` - If you installed kubectl with Homebrew (as explained [above](#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything. diff --git a/content/en/docs/test.md b/content/en/docs/test.md index b02557f52b872..848decff35411 100644 --- a/content/en/docs/test.md +++ b/content/en/docs/test.md @@ -1,6 +1,7 @@ --- title: Docs smoke test page main_menu: false +mermaid: true --- This page serves two purposes: @@ -295,6 +296,60 @@ tables, use HTML instead. +## Visualizations with Mermaid + +Add `mermaid: true` to the [front matter](https://gohugo.io/content-management/front-matter/) of any page to enable [Mermaid JS](https://mermaidjs.github.io) visualizations. 
The Mermaid JS version is specified in [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html) + +``` +{{}} +graph TD; + A-->B; + A-->C; + B-->D; + C-->D; +{{}} +``` + +Produces: + +{{< mermaid >}} +graph TD; + A-->B; + A-->C; + B-->D; + C-->D; +{{}} + +``` +{{}} +sequenceDiagram + Alice ->> Bob: Hello Bob, how are you? + Bob-->>John: How about you John? + Bob--x Alice: I am good thanks! + Bob-x John: I am good thanks! + Note right of John: Bob thinks a long
long time, so long
that the text does
not fit on a row. + + Bob-->Alice: Checking with John... + Alice->John: Yes... John, how are you? +{{}} +``` + +Produces: + +{{< mermaid >}} +sequenceDiagram + Alice ->> Bob: Hello Bob, how are you? + Bob-->>John: How about you John? + Bob--x Alice: I am good thanks! + Bob-x John: I am good thanks! + Note right of John: Bob thinks a long
long time, so long
that the text does
not fit on a row. + + Bob-->Alice: Checking with John... + Alice->John: Yes... John, how are you? +{{}} + +
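For reference, a hypothetical page that opts in to these diagrams only needs the flag in its front matter:

```yaml
---
title: A page with diagrams
mermaid: true
---
```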
More [examples](https://mermaid-js.github.io/mermaid/#/examples) from the official docs. + ## Sidebars and Admonitions Sidebars and admonitions provide ways to add visual importance to text. Use diff --git a/content/en/docs/tutorials/hello-minikube.md index e8a16568ad9b7..e64cb16a7399f 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -78,7 +78,7 @@ recommended way to manage the creation and scaling of Pods. Pod runs a Container based on the provided Docker image. ```shell - kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node + kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 ``` 2. View the Deployment: diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html index 7cd1ce456b479..1816442416230 100644 --- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -19,7 +19,7 @@
- To interact with the Terminal, please use the desktop/tablet version + The screen is too narrow to interact with the Terminal. Please use a desktop or tablet.
diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 8f38960de7d54..6d8e43ebfcd12 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -31,7 +31,8 @@

Kubernetes Deployments

Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you've created a Deployment, the Kubernetes - master schedules mentioned application instances onto individual Nodes in the cluster. + master schedules the application instances included in that Deployment to run on individual Nodes in the + cluster.

Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. This provides a self-healing mechanism to address machine failure or maintenance.

@@ -75,7 +76,7 @@

Deploying your first app on Kubernetes

You can create and manage a Deployment by using the Kubernetes command line interface, Kubectl. Kubectl uses the Kubernetes API to interact with the cluster. In this module, you'll learn the most common Kubectl commands needed to create Deployments that run your applications on a Kubernetes cluster.

-

When you create a Deployment, you'll need to specify the container image for your application and the number of replicas that you want to run. You can change that information later by updating your Deployment; Modules 5 and 6 of the bootcamp discuss how you can scale and update your Deployments.

+

When you create a Deployment, you'll need to specify the container image for your application and the number of replicas that you want to run. You can change that information later by updating your Deployment; Modules 5 and 6 of the bootcamp discuss how you can scale and update your Deployments.

diff --git a/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html index 278dbe17651c9..f3608039dcef2 100644 --- a/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -29,7 +29,7 @@

Objectives

Kubernetes Pods

-

When you created a Deployment in Module 2, Kubernetes created a Pod to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:

+

When you created a Deployment in Module 2, Kubernetes created a Pod to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:

  • Shared storage, as Volumes
  • Networking, as a unique cluster IP address
  • @@ -108,7 +108,7 @@

    Node overview

    Troubleshooting with kubectl

    -

    In Module 2, you used Kubectl command-line interface. You'll continue to use it in Module 3 to get information about deployed applications and their environments. The most common operations can be done with the following kubectl commands:

    +

    In Module 2, you used Kubectl command-line interface. You'll continue to use it in Module 3 to get information about deployed applications and their environments. The most common operations can be done with the following kubectl commands:

    • kubectl get - list resources
    • kubectl describe - show detailed information about a resource
    • diff --git a/content/en/docs/tutorials/stateful-application/cassandra.md b/content/en/docs/tutorials/stateful-application/cassandra.md index a20d93926d2f3..f55a852abb96e 100644 --- a/content/en/docs/tutorials/stateful-application/cassandra.md +++ b/content/en/docs/tutorials/stateful-application/cassandra.md @@ -1,5 +1,5 @@ --- -title: "Example: Deploying Cassandra with Stateful Sets" +title: "Example: Deploying Cassandra with a StatefulSet" reviewers: - ahmetb content_template: templates/tutorial @@ -7,79 +7,66 @@ weight: 30 --- {{% capture overview %}} -This tutorial shows you how to develop a native cloud [Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. In this example, a custom Cassandra *SeedProvider* enables Cassandra to discover new Cassandra nodes as they join the cluster. +This tutorial shows you how to run [Apache Cassandra](http://cassandra.apache.org/) on Kubernetes. Cassandra, a database, needs persistent storage to provide data durability (application _state_). In this example, a custom Cassandra seed provider lets the database discover new Cassandra instances as they join the Cassandra cluster. -*StatefulSets* make it easier to deploy stateful applications within a clustered environment. For more information on the features used in this tutorial, see the [*StatefulSet*](/docs/concepts/workloads/controllers/statefulset/) documentation. - -**Cassandra on Docker** - -The *Pods* in this tutorial use the [`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) -image from Google's [container registry](https://cloud.google.com/container-registry/docs/). -The Docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base) -and includes OpenJDK 8. - -This image includes a standard Cassandra installation from the Apache Debian repo. -By using environment variables you can change values that are inserted into `cassandra.yaml`. - -| ENV VAR | DEFAULT VALUE | -| ------------- |:-------------: | -| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` | -| `CASSANDRA_NUM_TOKENS` | `32` | -| `CASSANDRA_RPC_ADDRESS` | `0.0.0.0` | +*StatefulSets* make it easier to deploy stateful applications into your Kubernetes cluster. For more information on the features used in this tutorial, see [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). +{{< note >}} +Cassandra and Kubernetes both use the term _node_ to mean a member of a cluster. In this +tutorial, the Pods that belong to the StatefulSet are Cassandra nodes and are members +of the Cassandra cluster (called a _ring_). When those Pods run in your Kubernetes cluster, +the Kubernetes control plane schedules those Pods onto Kubernetes +{{< glossary_tooltip text="Nodes" term_id="node" >}}. + +When a Cassandra node starts, it uses a _seed list_ to bootstrap discovery of other +nodes in the ring. +This tutorial deploys a custom Cassandra seed provider that lets the database discover +new Cassandra Pods as they appear inside your Kubernetes cluster. +{{< /note >}} {{% /capture %}} {{% capture objectives %}} -* Create and validate a Cassandra headless [*Service*](/docs/concepts/services-networking/service/). -* Use a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) to create a Cassandra ring. -* Validate the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). -* Modify the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). 
-* Delete the [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) and its [Pods](/docs/concepts/workloads/pods/pod/). +* Create and validate a Cassandra headless {{< glossary_tooltip text="Service" term_id="service" >}}. +* Use a {{< glossary_tooltip term_id="StatefulSet" >}} to create a Cassandra ring. +* Validate the StatefulSet. +* Modify the StatefulSet. +* Delete the StatefulSet and its {{< glossary_tooltip text="Pods" term_id="pod" >}}. {{% /capture %}} {{% capture prerequisites %}} -To complete this tutorial, you should already have a basic familiarity with [Pods](/docs/concepts/workloads/pods/pod/), [Services](/docs/concepts/services-networking/service/), and [StatefulSets](/docs/concepts/workloads/controllers/statefulset/). In addition, you should: - -* [Install and Configure](/docs/tasks/tools/install-kubectl/) the *kubectl* command-line tool +{{< include "task-tutorial-prereqs.md" >}} -* Download [`cassandra-service.yaml`](/examples/application/cassandra/cassandra-service.yaml) - and [`cassandra-statefulset.yaml`](/examples/application/cassandra/cassandra-statefulset.yaml) +To complete this tutorial, you should already have a basic familiarity with {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip text="Services" term_id="service" >}}, and {{< glossary_tooltip text="StatefulSets" term_id="StatefulSet" >}}. -* Have a supported Kubernetes cluster running - -{{< note >}} -Please read the [setup](/docs/setup/) if you do not already have a cluster. -{{< /note >}} - -### Additional Minikube Setup Instructions +### Additional Minikube setup instructions {{< caution >}} -[Minikube](/docs/getting-started-guides/minikube/) defaults to 1024MB of memory and 1 CPU. Running Minikube with the default resource configuration results in insufficient resource errors during this tutorial. To avoid these errors, start Minikube with the following settings: +[Minikube](/docs/getting-started-guides/minikube/) defaults to 1024MiB of memory and 1 CPU. Running Minikube with the default resource configuration results in insufficient resource errors during this tutorial. To avoid these errors, start Minikube with the following settings: ```shell minikube start --memory 5120 --cpus=4 ``` {{< /caution >}} - + {{% /capture %}} {{% capture lessoncontent %}} -## Creating a Cassandra Headless Service +## Creating a headless Service for Cassandra {#creating-a-cassandra-headless-service} -A Kubernetes [Service](/docs/concepts/services-networking/service/) describes a set of [Pods](/docs/concepts/workloads/pods/pod/) that perform the same task. +In Kubernetes, a {{< glossary_tooltip text="Service" term_id="service" >}} describes a set of {{< glossary_tooltip text="Pods" term_id="pod" >}} that perform the same task. -The following `Service` is used for DNS lookups between Cassandra Pods and clients within the Kubernetes cluster. +The following Service is used for DNS lookups between Cassandra Pods and clients within your cluster: {{< codenew file="application/cassandra/cassandra-service.yaml" >}} -1. Launch a terminal window in the directory you downloaded the manifest files. -1. 
Create a Service to track all Cassandra StatefulSet nodes from the `cassandra-service.yaml` file: +Create a Service to track all Cassandra StatefulSet members from the `cassandra-service.yaml` file: - ```shell - kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml - ``` +```shell +kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml +``` -### Validating (optional) + +### Validating (optional) {#validating} Get the Cassandra Service. @@ -94,9 +81,9 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cassandra ClusterIP None 9042/TCP 45s ``` -Service creation failed if anything else is returned. Read [Debug Services](/docs/tasks/debug-application-cluster/debug-service/) for common issues. +If you don't see a Service named `cassandra`, that means creation failed. Read [Debug Services](/docs/tasks/debug-application-cluster/debug-service/) for help troubleshooting common issues. -## Using a StatefulSet to Create a Cassandra Ring +## Using a StatefulSet to create a Cassandra ring The StatefulSet manifest, included below, creates a Cassandra ring that consists of three Pods. @@ -106,14 +93,23 @@ This example uses the default provisioner for Minikube. Please update the follow {{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}} -1. Update the StatefulSet if necessary. -1. Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file: +Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file: - ```shell - kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml - ``` +```shell +# Use this if you are able to apply cassandra-statefulset.yaml unmodified +kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml +``` + +If you need to modify `cassandra-statefulset.yaml` to suit your cluster, download +https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply +that manifest, from the folder you saved the modified version into: +```shell +# Use this if you needed to modify cassandra-statefulset.yaml locally +kubectl apply -f cassandra-statefulset.yaml +``` -## Validating The Cassandra StatefulSet + +## Validating the Cassandra StatefulSet 1. Get the Cassandra StatefulSet: @@ -121,31 +117,32 @@ This example uses the default provisioner for Minikube. Please update the follow kubectl get statefulset cassandra ``` - The response should be: + The response should be similar to: ``` NAME DESIRED CURRENT AGE cassandra 3 0 13s ``` - The `StatefulSet` resource deploys Pods sequentially. + The `StatefulSet` resource deploys Pods sequentially. 1. Get the Pods to see the ordered creation status: ```shell kubectl get pods -l="app=cassandra" ``` - - The response should be: - + + The response should be similar to: + ```shell NAME READY STATUS RESTARTS AGE cassandra-0 1/1 Running 0 1m cassandra-1 0/1 ContainerCreating 0 8s ``` - - It can take several minutes for all three Pods to deploy. Once they are deployed, the same command returns: - + + It can take several minutes for all three Pods to deploy. Once they are deployed, the same command + returns output similar to: + ``` NAME READY STATUS RESTARTS AGE cassandra-0 1/1 Running 0 10m @@ -153,13 +150,14 @@ This example uses the default provisioner for Minikube. Please update the follow cassandra-2 1/1 Running 0 8m ``` -3. Run the Cassandra [nodetool](https://wiki.apache.org/cassandra/NodeTool) to display the status of the ring. +3. 
Run the Cassandra [nodetool](https://cwiki.apache.org/confluence/display/CASSANDRA2/NodeTool) inside the first Pod, to + display the status of the ring. ```shell kubectl exec -it cassandra-0 -- nodetool status ``` - The response should look something like this: + The response should look something like: ``` Datacenter: DC1-K8Demo @@ -174,7 +172,7 @@ This example uses the default provisioner for Minikube. Please update the follow ## Modifying the Cassandra StatefulSet -Use `kubectl edit` to modify the size of a Cassandra StatefulSet. +Use `kubectl edit` to modify the size of a Cassandra StatefulSet. 1. Run the following command: @@ -182,14 +180,14 @@ Use `kubectl edit` to modify the size of a Cassandra StatefulSet. kubectl edit statefulset cassandra ``` - This command opens an editor in your terminal. The line you need to change is the `replicas` field. The following sample is an excerpt of the `StatefulSet` file: + This command opens an editor in your terminal. The line you need to change is the `replicas` field. The following sample is an excerpt of the StatefulSet file: ```yaml # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # - apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 + apiVersion: apps/v1 kind: StatefulSet metadata: creationTimestamp: 2016-08-13T18:40:58Z @@ -204,50 +202,66 @@ Use `kubectl edit` to modify the size of a Cassandra StatefulSet. replicas: 3 ``` -1. Change the number of replicas to 4, and then save the manifest. +1. Change the number of replicas to 4, and then save the manifest. - The `StatefulSet` now contains 4 Pods. + The StatefulSet now scales to run with 4 Pods. -1. Get the Cassandra StatefulSet to verify: +1. Get the Cassandra StatefulSet to verify your change: ```shell kubectl get statefulset cassandra ``` - The response should be + The response should be similar to: ``` NAME DESIRED CURRENT AGE cassandra 4 4 36m ``` - + {{% /capture %}} {{% capture cleanup %}} -Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources. +Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources. {{< warning >}} Depending on the storage class and reclaim policy, deleting the *PersistentVolumeClaims* may cause the associated volumes to also be deleted. Never assume you’ll be able to access data if its volume claims are deleted. {{< /warning >}} -1. Run the following commands (chained together into a single command) to delete everything in the Cassandra `StatefulSet`: +1. Run the following commands (chained together into a single command) to delete everything in the Cassandra StatefulSet: ```shell - grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \ + grace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \ && kubectl delete statefulset -l app=cassandra \ - && echo "Sleeping $grace" \ + && echo "Sleeping ${grace} seconds" 1>&2 \ && sleep $grace \ - && kubectl delete pvc -l app=cassandra + && kubectl delete persistentvolumeclaim -l app=cassandra ``` -1. 
Run the following command to delete the Cassandra Service. +1. Run the following command to delete the Service you set up for Cassandra: ```shell kubectl delete service -l app=cassandra ``` -{{% /capture %}} +## Cassandra container environment variables + +The Pods in this tutorial use the [`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) +image from Google's [container registry](https://cloud.google.com/container-registry/docs/). +The Docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base) +and includes OpenJDK 8. +This image includes a standard Cassandra installation from the Apache Debian repo. +By using environment variables you can change values that are inserted into `cassandra.yaml`. + +| Environment variable | Default value | +| ------------------------ |:---------------: | +| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` | +| `CASSANDRA_NUM_TOKENS` | `32` | +| `CASSANDRA_RPC_ADDRESS` | `0.0.0.0` | + + +{{% /capture %}} {{% capture whatsnext %}} * Learn how to [Scale a StatefulSet](/docs/tasks/run-application/scale-stateful-set/). diff --git a/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml b/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml new file mode 100644 index 0000000000000..6659ff3fbb4fb --- /dev/null +++ b/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml @@ -0,0 +1,21 @@ +apiVersion: apiserver.k8s.io/v1beta1 +kind: EgressSelectorConfiguration +egressSelections: +# Since we want to control the egress traffic to the cluster, we use the +# "cluster" as the name. Other supported values are "etcd", and "master". +- name: cluster + connection: + # This controls the protocol between the API Server and the Konnectivity + # server. Supported values are "GRPC" and "HTTPConnect". There is no + # end user visible difference between the two modes. You need to set the + # Konnectivity server to work in the same mode. + proxyProtocol: GRPC + transport: + # This controls what transport the API Server uses to communicate with the + # Konnectivity server. UDS is recommended if the Konnectivity server + # locates on the same machine as the API Server. You need to configure the + # Konnectivity server to listen on the same UDS socket. + # The other supported transport is "tcp". You will need to set up TLS + # config to secure the TCP transport. + uds: + udsName: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket diff --git a/content/en/examples/admin/konnectivity/konnectivity-agent.yaml b/content/en/examples/admin/konnectivity/konnectivity-agent.yaml new file mode 100644 index 0000000000000..c3dc71040b9c8 --- /dev/null +++ b/content/en/examples/admin/konnectivity/konnectivity-agent.yaml @@ -0,0 +1,53 @@ +apiVersion: apps/v1 +# Alternatively, you can deploy the agents as Deployments. It is not necessary +# to have an agent on each node. 
+kind: DaemonSet +metadata: + labels: + addonmanager.kubernetes.io/mode: Reconcile + k8s-app: konnectivity-agent + namespace: kube-system + name: konnectivity-agent +spec: + selector: + matchLabels: + k8s-app: konnectivity-agent + template: + metadata: + labels: + k8s-app: konnectivity-agent + spec: + priorityClassName: system-cluster-critical + tolerations: + - key: "CriticalAddonsOnly" + operator: "Exists" + containers: + - image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent:v0.0.8 + name: konnectivity-agent + command: ["/proxy-agent"] + args: [ + "--logtostderr=true", + "--ca-cert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", + # Since the konnectivity server runs with hostNetwork=true, + # this is the IP address of the master machine. + "--proxy-server-host=35.225.206.7", + "--proxy-server-port=8132", + "--service-account-token-path=/var/run/secrets/tokens/konnectivity-agent-token" + ] + volumeMounts: + - mountPath: /var/run/secrets/tokens + name: konnectivity-agent-token + livenessProbe: + httpGet: + port: 8093 + path: /healthz + initialDelaySeconds: 15 + timeoutSeconds: 15 + serviceAccountName: konnectivity-agent + volumes: + - name: konnectivity-agent-token + projected: + sources: + - serviceAccountToken: + path: konnectivity-agent-token + audience: system:konnectivity-server diff --git a/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml b/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml new file mode 100644 index 0000000000000..7687f49b77e82 --- /dev/null +++ b/content/en/examples/admin/konnectivity/konnectivity-rbac.yaml @@ -0,0 +1,24 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: system:konnectivity-server + labels: + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: Reconcile +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system:auth-delegator +subjects: + - apiGroup: rbac.authorization.k8s.io + kind: User + name: system:konnectivity-server +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: konnectivity-agent + namespace: kube-system + labels: + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: Reconcile diff --git a/content/en/examples/admin/konnectivity/konnectivity-server.yaml b/content/en/examples/admin/konnectivity/konnectivity-server.yaml new file mode 100644 index 0000000000000..730c26c66a801 --- /dev/null +++ b/content/en/examples/admin/konnectivity/konnectivity-server.yaml @@ -0,0 +1,70 @@ +apiVersion: v1 +kind: Pod +metadata: + name: konnectivity-server + namespace: kube-system +spec: + priorityClassName: system-cluster-critical + hostNetwork: true + containers: + - name: konnectivity-server-container + image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server:v0.0.8 + command: ["/proxy-server"] + args: [ + "--log-file=/var/log/konnectivity-server.log", + "--logtostderr=false", + "--log-file-max-size=0", + # This needs to be consistent with the value set in egressSelectorConfiguration. + "--uds-name=/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket", + # The following two lines assume the Konnectivity server is + # deployed on the same machine as the apiserver, and the certs and + # key of the API Server are at the specified location. + "--cluster-cert=/etc/srv/kubernetes/pki/apiserver.crt", + "--cluster-key=/etc/srv/kubernetes/pki/apiserver.key", + # This needs to be consistent with the value set in egressSelectorConfiguration. 
+ "--mode=grpc", + "--server-port=0", + "--agent-port=8132", + "--admin-port=8133", + "--agent-namespace=kube-system", + "--agent-service-account=konnectivity-agent", + "--kubeconfig=/etc/srv/kubernetes/konnectivity-server/kubeconfig", + "--authentication-audience=system:konnectivity-server" + ] + livenessProbe: + httpGet: + scheme: HTTP + host: 127.0.0.1 + port: 8133 + path: /healthz + initialDelaySeconds: 30 + timeoutSeconds: 60 + ports: + - name: agentport + containerPort: 8132 + hostPort: 8132 + - name: adminport + containerPort: 8133 + hostPort: 8133 + volumeMounts: + - name: varlogkonnectivityserver + mountPath: /var/log/konnectivity-server.log + readOnly: false + - name: pki + mountPath: /etc/srv/kubernetes/pki + readOnly: true + - name: konnectivity-uds + mountPath: /etc/srv/kubernetes/konnectivity-server + readOnly: false + volumes: + - name: varlogkonnectivityserver + hostPath: + path: /var/log/konnectivity-server.log + type: FileOrCreate + - name: pki + hostPath: + path: /etc/srv/kubernetes/pki + - name: konnectivity-uds + hostPath: + path: /etc/srv/kubernetes/konnectivity-server + type: DirectoryOrCreate diff --git a/content/en/examples/admin/sched/my-scheduler.yaml b/content/en/examples/admin/sched/my-scheduler.yaml index a2ccc08da5dd6..800595862baa5 100644 --- a/content/en/examples/admin/sched/my-scheduler.yaml +++ b/content/en/examples/admin/sched/my-scheduler.yaml @@ -17,6 +17,19 @@ roleRef: name: system:kube-scheduler apiGroup: rbac.authorization.k8s.io --- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: my-scheduler-as-volume-scheduler +subjects: +- kind: ServiceAccount + name: my-scheduler + namespace: kube-system +roleRef: + kind: ClusterRole + name: system:volume-scheduler + apiGroup: rbac.authorization.k8s.io +--- apiVersion: apps/v1 kind: Deployment metadata: diff --git a/content/en/examples/application/hpa/php-apache.yaml b/content/en/examples/application/hpa/php-apache.yaml index c73ae7d631b58..f3f1ef5d4f912 100644 --- a/content/en/examples/application/hpa/php-apache.yaml +++ b/content/en/examples/application/hpa/php-apache.yaml @@ -2,7 +2,6 @@ apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: php-apache - namespace: default spec: scaleTargetRef: apiVersion: apps/v1 diff --git a/content/en/examples/controllers/fluentd-daemonset-update.yaml b/content/en/examples/controllers/fluentd-daemonset-update.yaml new file mode 100644 index 0000000000000..dcf08d4fc9545 --- /dev/null +++ b/content/en/examples/controllers/fluentd-daemonset-update.yaml @@ -0,0 +1,48 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: fluentd-elasticsearch + namespace: kube-system + labels: + k8s-app: fluentd-logging +spec: + selector: + matchLabels: + name: fluentd-elasticsearch + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + template: + metadata: + labels: + name: fluentd-elasticsearch + spec: + tolerations: + # this toleration is to have the daemonset runnable on master nodes + # remove it if your masters can't run pods + - key: node-role.kubernetes.io/master + effect: NoSchedule + containers: + - name: fluentd-elasticsearch + image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2 + resources: + limits: + memory: 200Mi + requests: + cpu: 100m + memory: 200Mi + volumeMounts: + - name: varlog + mountPath: /var/log + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + terminationGracePeriodSeconds: 30 + volumes: + - name: varlog + hostPath: + path: /var/log + - 
name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers diff --git a/content/en/examples/controllers/fluentd-daemonset.yaml b/content/en/examples/controllers/fluentd-daemonset.yaml new file mode 100644 index 0000000000000..0e1e7d33450b9 --- /dev/null +++ b/content/en/examples/controllers/fluentd-daemonset.yaml @@ -0,0 +1,42 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: fluentd-elasticsearch + namespace: kube-system + labels: + k8s-app: fluentd-logging +spec: + selector: + matchLabels: + name: fluentd-elasticsearch + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + template: + metadata: + labels: + name: fluentd-elasticsearch + spec: + tolerations: + # this toleration is to have the daemonset runnable on master nodes + # remove it if your masters can't run pods + - key: node-role.kubernetes.io/master + effect: NoSchedule + containers: + - name: fluentd-elasticsearch + image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2 + volumeMounts: + - name: varlog + mountPath: /var/log + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + terminationGracePeriodSeconds: 30 + volumes: + - name: varlog + hostPath: + path: /var/log + - name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers diff --git a/content/en/includes/user-guide-migration-notice.md b/content/en/includes/user-guide-migration-notice.md deleted file mode 100644 index 366a05907cda5..0000000000000 --- a/content/en/includes/user-guide-migration-notice.md +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - -
      -

      NOTICE

      -

      As of March 14, 2017, the Kubernetes SIG-Docs-Maintainers group have begun migration of the User Guide content as announced previously to the SIG Docs community through the kubernetes-sig-docs group and kubernetes.slack.com #sig-docs channel.

      -

      The user guides within this section are being refactored into topics within Tutorials, Tasks, and Concepts. Anything that has been moved will have a notice placed in its previous location as well as a link to its new location. The reorganization implements a new table of contents and should improve the documentation's findability and readability for a wider range of audiences.

      -

      For any questions, please contact: kubernetes-sig-docs@googlegroups.com

      -
      diff --git a/content/en/training/_index.html b/content/en/training/_index.html index 53922a9879030..f2f28f9713313 100644 --- a/content/en/training/_index.html +++ b/content/en/training/_index.html @@ -7,14 +7,22 @@ class: training --- -
      -
      -
      -

      Build your cloud native career

      -

      Kubernetes is at the core of the cloud native movement. Training and certifications from the Linux Foundation and our training partners lets you invest in your career, learn Kubernetes, and make your cloud native projects successful.

      -
      +
      +
      +
      +
      + +
      +
      + +
      +
      +

      Build your cloud native career

      +

      Kubernetes is at the core of the cloud native movement. Training and certifications from the Linux Foundation and our training partners let you invest in your career, learn Kubernetes, and make your cloud native projects successful.

      +
      +
      -
      +
      @@ -60,7 +68,7 @@

      Learn with the Linux Foundation

      The Linux Foundation offers instructor-led and self-paced courses for all aspects of the Kubernetes application development and operations lifecycle.

      -

      +

      See Courses
      @@ -71,25 +79,27 @@

      Learn with the Linux Foundation

      Get Kubernetes Certified

      -
      -
      -
      - Certified Kubernetes Application Developer (CKAD) -
      -

      The Certified Kubernetes Application Developer exam certifies that users can design, build, configure, and expose cloud native applications for Kubernetes.

      -
      - Go to Certification -
      -
      -
      -
      -
      - Certified Kubernetes Administrator (CKA) -
      -

      The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.

      -
      - Go to Certification -
      +
      +
      +
      +
      + Certified Kubernetes Application Developer (CKAD) +
      +

      The Certified Kubernetes Application Developer exam certifies that users can design, build, configure, and expose cloud native applications for Kubernetes.

      +
      + Go to Certification +
      +
      +
      +
      +
      + Certified Kubernetes Administrator (CKA) +
      +

      The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.

      +
      + Go to Certification +
      +
      diff --git a/content/es/docs/concepts/containers/container-lifecycle-hooks.md b/content/es/docs/concepts/containers/container-lifecycle-hooks.md index 94e3e95e644df..74fdc721ccf37 100644 --- a/content/es/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/es/docs/concepts/containers/container-lifecycle-hooks.md @@ -39,7 +39,7 @@ por lo que debe completarse antes de que la llamada para eliminar el contenedor No se le pasa ningún parámetro. Puedes encontrar información más detallada sobre el comportamiento de finalización de un contenedor -[Finalización de Pods](/docs/es/concepts/workloads/pods/pod/#termination-of-pods). +[Finalización de Pods](/docs/concepts/workloads/pods/pod/#termination-of-pods). ### Implementación de controladores de hooks diff --git a/content/fr/docs/concepts/services-networking/endpoint-slices.md b/content/fr/docs/concepts/services-networking/endpoint-slices.md new file mode 100644 index 0000000000000..b06117cc00032 --- /dev/null +++ b/content/fr/docs/concepts/services-networking/endpoint-slices.md @@ -0,0 +1,122 @@ +--- +reviewers: +title: EndpointSlices +feature: + title: EndpointSlices + description: > + Suivi évolutif des réseaux Endpoints dans un cluster Kubernetes. + +content_template: templates/concept +weight: 10 +--- + + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +_EndpointSlices_ offrent une méthode simple pour suivre les Endpoints d'un réseau au sein d'un cluster de Kubernetes. Ils offrent une alternative plus évolutive et extensible aux Endpoints. + +{{% /capture %}} + +{{% capture body %}} + +## Resource pour EndpointSlice {#endpointslice-resource} + +Dans Kubernetes, un EndpointSlice contient des reférences à un ensemble de Endpoints. +Le controleur d'EndpointSlice crée automatiquement des EndpointSlices pour un Service quand un {{< glossary_tooltip text="sélecteur" term_id="selector" >}} est spécifié. +Ces EnpointSlices vont inclure des références à n'importe quels Pods qui correspond aux selecteur de Service. +EndpointSlices groupent ensemble les Endpoints d'un réseau par combinaisons uniques de Services et de Ports. + +Par exemple, voici un échantillon d'une resource EndpointSlice pour le Kubernetes Service `exemple`. + +```yaml +apiVersion: discovery.k8s.io/v1beta1 +kind: EndpointSlice +metadata: + name: exemple-abc + labels: + kubernetes.io/service-name: exemple +addressType: IPv4 +ports: + - name: http + protocol: TCP + port: 80 +endpoints: + - addresses: + - "10.1.2.3" + conditions: + ready: true + hostname: pod-1 + topology: + kubernetes.io/hostname: node-1 + topology.kubernetes.io/zone: us-west2-a +``` + +Les EndpointSlices géré par le contrôleur d'EndpointSlice n'auront, par défaut, pas plus de 100 Endpoints chacun. +En dessous de cette échelle, EndpointSlices devrait mapper 1:1 les Endpoints et les Service et devrait avoir une performance similaire. + +EndpointSlices peuvent agir en tant que source de vérité pour kube-proxy quand il s'agit du routage d'un trafic interne. +Lorsqu'ils sont activés, ils devraient offrir une amélioration de performance pour les services qui ont une grand quantité d'Endpoints. + +### Types d'addresses + +Les EndpointSlices supportent 3 types d'addresses: + +* IPv4 +* IPv6 +* FQDN (Fully Qualified Domain Name) - [serveur entièrement nommé] + +### Topologie + +Chaque Endpoint dans un EnpointSlice peut contenir des informations de topologie pertinentes. 
+Ceci est utilisé pour indiqué où se trouve un Endpoint, qui contient les informations sur le Node, zone et region correspondante. Lorsque les valeurs sont disponibles, les labels de Topologies suivantes seront définies par le contrôleur EndpointSlice: + +* `kubernetes.io/hostname` - Nom du Node sur lequel l'Endpoint se situe. +* `topology.kubernetes.io/zone` - Zone dans laquelle l'Endpoint se situe. +* `topology.kubernetes.io/region` - Region dans laquelle l'Endpoint se situe. + +Le contrôleur EndpointSlice surveille les Services et les Pods pour assurer que leurs correspondances avec les EndpointSlices sont à jour. +Le contrôleur gère les EndpointSlices pour tous les Services qui ont un sélecteur - [référence: {{< glossary_tooltip text="sélecteur" term_id="selector" >}}] - specifié. Celles-ci représenteront les IPs des Pods qui correspond au sélecteur. + +### Capacité d'EndpointSlices + +Les EndpointSlices sont limités a une capacité de 100 Endpoints chacun, par défaut. Vous pouvez configurer ceci avec l'indicateur `--max-endpoints-per-slice` {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} jusqu'à un maximum de 1000. + +### Distribution d'EndpointSlices + +Chaque EndpointSlice a un ensemble de ports qui s'applique à tous les Endpoints dans la resource. +Lorsque les ports nommés sont utilisés pour un Service, les Pods peuvent se retrouver avec différents port cible pour le même port nommé, nécessitant différents EndpointSlices. + +Le contrôleur essaie de remplir l'EndpointSlice aussi complètement que possible, mais ne les rééquilibre pas activement. La logique du contrôleur est assez simple: + +1. Itérer à travers les EnpointSlices existants, retirer les Endpoints qui ne sont plus voulus et mettre à jour les Endpoints qui ont changés. +2. Itérer à travers les EndpointSlices qui ont été modifiés dans la première étape et les remplir avec n'importe quel Endpoint nécéssaire. +3. S'il reste encore des Endpoints neufs à ajouter, essayez de les mettre dans une slice qui n'a pas été changé et/ou en crée de nouveaux. + +Par-dessus tout, la troisième étape priorise la limitation de mises à jour d'EnpointSlice sur une distribution complètement pleine d'EndpointSlices. Par exemple, si il y avait 10 nouveaux Endpoints à ajouter et 2 EndpointSlices qui peuvent contenir 5 Endpoints en plus chacun; cette approche créera un nouveau EndpointSlice au lieu de remplir les EndpointSlice existants. +C'est à dire, une seule création EndpointSlice est préférable à plusieurs mises à jour d'EndpointSlice. + +Avec kube-proxy exécuté sur chaque Node et surveillant EndpointSlices, chaque changement d'un EndpointSlice devient relativement coûteux puisqu'ils seront transmis à chaque Node du cluster. +Cette approche vise à limiter le nombre de modifications qui doivent être envoyées à chaque Node, même si ça peut causer plusieurs EndpointSlices non remplis. + +En pratique, cette distribution bien peu idéale devrait être rare. La plupart des changements traités par le contrôleur EndpointSlice sera suffisamment petite pour tenir dans un EndpointSlice existant, et sinon, un nouveau EndpointSlice aura probablement été bientôt nécessaire de toute façon. Les mises à jour continues des déploiements fournissent également une compaction naturelle des EndpointSlices avec tous leurs pods et les Endpoints correspondants qui se feront remplacer. + +## Motivation + +L'API des Endpoints fournit une méthode simple et facile à suivre pour les Endpoints dans Kubernetes. 
+Malheureusement, comme les clusters Kubernetes et Services sont devenus plus larges, les limitations de cette API sont devenues plus visibles. +Plus particulièrement, ceux-ci comprenaient des limitations liés au dimensionnement vers un plus grand nombre d'Endpoint d'un réseau. + +Puisque tous les Endpoints d'un réseau pour un Service ont été stockés dans une seule ressource Endpoints, ces ressources pourraient devenir assez lourdes. +Cela a affecté les performances des composants Kubernetes (notamment le plan de contrôle) et a causé une grande quantité de trafic réseau et de traitements lorsque les Endpoints changent. +Les EndpointSlices aident à atténuer ces problèmes ainsi qu'à fournir une plate-forme extensible pour des fonctionnalités supplémentaires telles que le routage topologique. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Activer EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices) +* Lire [Connecter des applications aux Services](/docs/concepts/services-networking/connect-applications-service/) + +{{% /capture %}} \ No newline at end of file diff --git a/content/fr/docs/concepts/services-networking/service.md b/content/fr/docs/concepts/services-networking/service.md index 84605c4593655..12b6453a6f242 100644 --- a/content/fr/docs/concepts/services-networking/service.md +++ b/content/fr/docs/concepts/services-networking/service.md @@ -312,7 +312,7 @@ Si vous utilisez uniquement DNS pour découvrir l'IP du cluster pour un service, ### DNS -Vous pouvez (et devrait presque toujours) configurer un service DNS pour votre cluster Kubernetes à l'aide d'un [add-on](/docs/concepts/cluster-administration/addons/). +Vous pouvez (et devriez presque toujours) configurer un service DNS pour votre cluster Kubernetes à l'aide d'un [add-on](/docs/concepts/cluster-administration/addons/). Un serveur DNS prenant en charge les clusters, tel que CoreDNS, surveille l'API Kubernetes pour les nouveaux services et crée un ensemble d'enregistrements DNS pour chacun. Si le DNS a été activé dans votre cluster, tous les pods devraient automatiquement être en mesure de résoudre les services par leur nom DNS. @@ -384,7 +384,7 @@ Si vous définissez le champ `type` sur` NodePort`, le plan de contrôle Kuberne Chaque nœud assure le proxy de ce port (le même numéro de port sur chaque nœud) vers votre service. Votre service signale le port alloué dans son champ `.spec.ports[*].nodePort`. -Si vous souhaitez spécifier une ou des adresses IP particulières pour proxyer le port, vous pouvez définir l'indicateur `--nodeport-addresses` dans kube-proxy sur des blocs IP particuliers; cela est pris en charge depuis Kubernetes v1.10. +Si vous souhaitez spécifier une ou des adresses IP particulières pour proxyfier le port, vous pouvez définir l'indicateur `--nodeport-addresses` dans kube-proxy sur des blocs IP particuliers; cela est pris en charge depuis Kubernetes v1.10. Cet indicateur prend une liste délimitée par des virgules de blocs IP (par exemple 10.0.0.0/8, 192.0.2.0/25) pour spécifier les plages d'adresses IP que kube-proxy doit considérer comme locales pour ce nœud. Par exemple, si vous démarrez kube-proxy avec l'indicateur `--nodeport-addresses=127.0.0.0/8`, kube-proxy sélectionne uniquement l'interface de boucle locale pour les services NodePort. 
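As an illustration of the `--nodeport-addresses` behaviour described above (not part of this change), the same restriction can also be expressed in a kube-proxy configuration file; the field name below assumes the `kubeproxy.config.k8s.io/v1alpha1` API and the CIDR ranges are placeholders:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Only node addresses within these blocks are treated as local for NodePort
# Services; adjust the ranges to your environment.
nodePortAddresses:
  - "192.0.2.0/25"
  - "10.0.0.0/8"
```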
diff --git a/content/fr/docs/concepts/storage/persistent-volumes.md b/content/fr/docs/concepts/storage/persistent-volumes.md index e62c996ddb5e0..291a67fe80e69 100644 --- a/content/fr/docs/concepts/storage/persistent-volumes.md +++ b/content/fr/docs/concepts/storage/persistent-volumes.md @@ -334,6 +334,11 @@ spec: server: 172.17.0.2 ``` +{{< note >}} +Des logiciels additionnels supportant un type de montage de volume pourraient être nécessaires afin d'utiliser un PersistentVolume depuis un cluster. +Dans l'exemple d'un PersistentVolume de type NFS, le logiciel additionnel `/sbin/mount.nfs` est requis pour permettre de monter des systèmes de fichiers de type NFS. +{{< /note >}} + ### Capacité Généralement, un PV aura une capacité de stockage spécifique. diff --git a/content/fr/docs/home/_index.md b/content/fr/docs/home/_index.md index 584ad33232658..10e038dfa16f9 100644 --- a/content/fr/docs/home/_index.md +++ b/content/fr/docs/home/_index.md @@ -6,7 +6,7 @@ description: Documentation francophone de Kubernetes noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "Accueil" main_menu: true weight: 10 diff --git a/content/fr/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/fr/docs/tasks/configure-pod-container/pull-image-private-registry.md new file mode 100644 index 0000000000000..471448889e741 --- /dev/null +++ b/content/fr/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -0,0 +1,208 @@ +--- +title: Récupération d'une image d'un registre privé +content_template: templates/task +weight: 100 +--- + +{{% capture overview %}} + +Cette page montre comment créer un Pod qui utilise un Secret pour récupérer une image d'un registre privé. + +{{% /capture %}} + +{{% capture prerequisites %}} + +* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +* Pour faire cet exercice, vous avez besoin d'un +[Docker ID](https://docs.docker.com/docker-id/) et un mot de passe. + +{{% /capture %}} + +{{% capture steps %}} + +## Connectez-vous à Docker + +Sur votre ordinateur, vous devez vous authentifier à un registre afin de récupérer une image privée : + +```shell +docker login +``` + +Une fois que c'est fait, entrez votre nom d'utilisateur et votre mot de passe Docker. + +Le processus de connexion crée ou met à jour un fichier `config.json` qui contient un token d'autorisation. + +Consultez le fichier `config.json` : + +```shell +cat ~/.docker/config.json +``` + +La sortie comporte une section similaire à celle-ci : + +```json +{ + "auths": { + "https://index.docker.io/v1/": { + "auth": "c3R...zE2" + } + } +} +``` + +{{< note >}} +Si vous utilisez le credentials store de Docker, vous ne verrez pas cette entrée `auth` mais une entrée `credsStore` avec le nom du Store comme valeur. +{{< /note >}} + +## Créez un Secret basé sur les identifiants existants du Docker {#registry-secret-existing-credentials} + +Le cluster Kubernetes utilise le type Secret de `docker-registry` pour s'authentifier avec +un registre de conteneurs pour y récupérer une image privée. + +Si vous avez déjà lancé `docker login`, vous pouvez copier ces identifiants dans Kubernetes + +```shell +kubectl create secret generic regcred \ + --from-file=.dockerconfigjson= \ + --type=kubernetes.io/dockerconfigjson +``` + +Si vous avez besoin de plus de contrôle (par exemple, pour définir un Namespace ou un label sur le nouveau secret), vous pouvez alors personnaliser le secret avant de le stocker. 
+Assurez-vous de : + +- Attribuer la valeur `.dockerconfigjson` dans le nom de l'élément data +- Encoder le fichier docker en base64 et colle cette chaîne, non interrompue, comme valeur du champ `data[".dockerconfigjson"]`. +- Mettre `type` à `kubernetes.io/dockerconfigjson`. + +Exemple: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: myregistrykey + namespace: awesomeapps +data: + .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== +type: kubernetes.io/dockerconfigjson +``` + +Si vous obtenez le message d'erreur `error: no objects passed to create`, cela peut signifier que la chaîne encodée en base64 est invalide. +Si vous obtenez un message d'erreur comme `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`, cela signifie que la chaîne encodée en base64 a été décodée avec succès, mais n'a pas pu être interprétée comme un fichier `.docker/config.json`. + +## Créez un Secret en fournissant les identifiants sur la ligne de commande + +Créez ce secret, en le nommant `regcred` : + +```shell +kubectl create secret docker-registry regcred --docker-server= --docker-username= --docker-password= --docker-email= +``` + +où : + +* `` est votre FQDN de registre de docker privé. (https://index.docker.io/v1/ for DockerHub) +* `` est votre nom d'utilisateur Docker. +* `` est votre mot de passe Docker. +* `` est votre email Docker. + +Vous avez réussi à définir vos identifiants Docker dans le cluster comme un secret appelé `regcred`. + +{{< note >}} +Saisir des secrets sur la ligne de commande peut les conserver dans l'historique de votre shell sans protection, et ces secrets peuvent également être visibles par d'autres utilisateurs sur votre ordinateur pendant l'exécution de `kubectl`. +{{< /note >}} + + +## Inspection du secret `regcred` + +Pour comprendre le contenu du Secret `regcred` que vous venez de créer, commencez par visualiser le Secret au format YAML : + +```shell +kubectl get secret regcred --output=yaml +``` + +La sortie est similaire à celle-ci : + +```yaml +apiVersion: v1 +kind: Secret +metadata: + ... + name: regcred + ... +data: + .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0= +type: kubernetes.io/dockerconfigjson +``` + +La valeur du champ `.dockerconfigjson` est une représentation en base64 de vos identifiants Docker. + +Pour comprendre ce que contient le champ `.dockerconfigjson`, convertissez les données secrètes en un format lisible : + +```shell +kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode +``` + +La sortie est similaire à celle-ci : + +```json +{"auths":{"your.private.registry.example.com":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com","auth":"c3R...zE2"}}} +``` + +Pour comprendre ce qui se cache dans le champ `auth', convertissez les données encodées en base64 dans un format lisible : + +```shell +echo "c3R...zE2" | base64 --decode +``` + +La sortie en tant que nom d'utilisateur et mot de passe concaténés avec un `:`, est similaire à ceci : + +```none +janedoe:xxxxxxxxxxx +``` + +Remarquez que les données secrètes contiennent le token d'autorisation similaire à votre fichier local `~/.docker/config.json`. + +Vous avez réussi à définir vos identifiants de Docker comme un Secret appelé `regcred` dans le cluster. 
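For the extra control mentioned earlier (choosing a namespace or adding labels before storing the Secret), a hedged sketch is to generate the manifest locally, review it, then apply it; the namespace, output file name and the `--dry-run=client` flag (recent kubectl) are assumptions, not requirements of this page:

```shell
# Generate the Secret manifest from the local Docker config without creating it yet
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  --namespace=awesomeapps \
  --dry-run=client -o yaml > regcred.yaml

# Edit regcred.yaml if needed (for example to add labels), then create the Secret
kubectl apply -f regcred.yaml
```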
+ +## Créez un Pod qui utilise votre Secret + +Voici un fichier de configuration pour un Pod qui a besoin d'accéder à vos identifiants Docker dans `regcred` : + +{{< codenew file="pods/private-reg-pod.yaml" >}} + +Téléchargez le fichier ci-dessus : + +```shell +wget -O my-private-reg-pod.yaml https://k8s.io/examples/pods/private-reg-pod.yaml +``` + +Dans le fichier `my-private-reg-pod.yaml`, remplacez `` par le chemin d'accès à une image dans un registre privé tel que + +```none +your.private.registry.example.com/janedoe/jdoe-private:v1 +``` + +Pour récupérer l'image du registre privé, Kubernetes a besoin des identifiants. +Le champ `imagePullSecrets` dans le fichier de configuration spécifie que Kubernetes doit obtenir les informations d'identification d'un Secret nommé `regcred`. + +Créez un Pod qui utilise votre secret et vérifiez que le Pod est bien lancé : + +```shell +kubectl apply -f my-private-reg-pod.yaml +kubectl get pod private-reg +``` + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Pour en savoir plus sur les [Secrets](/docs/concepts/configuration/secret/). +* Pour en savoir plus sur l'[utilisation d'un registre privé](/docs/concepts/containers/images/#using-a-private-registry). +* Pour en savoir plus sur l'[ajout d'un imagePullSecrets à un compte de service](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account). +* Voir [kubectl crée un Secret de registre de docker](/docs/reference/generated/kubectl/kubectl-commands/#-em-secret-docker-registry-em-). +* Voir [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core). +* Voir le champ `imagePullSecrets` de [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core). + +{{% /capture %}} + diff --git a/content/fr/docs/tasks/tools/install-minikube.md b/content/fr/docs/tasks/tools/install-minikube.md index 9ea0ecfc7c79b..e704e1bfa161d 100644 --- a/content/fr/docs/tasks/tools/install-minikube.md +++ b/content/fr/docs/tasks/tools/install-minikube.md @@ -15,22 +15,22 @@ Cette page vous montre comment installer [Minikube](/fr/docs/tutorials/hello-min {{% capture prerequisites %}} -La virtualisation VT-x ou AMD-v doit être activée dans le BIOS de votre machine. - {{< tabs name="minikube_before_you_begin" >}} {{% tab name="Linux" %}} -Pour vérifier si la virtualisation est prise en charge sur Linux, exécutez la commande suivante et vérifiez que la sortie n'est pas vide: +Pour vérifier si la virtualisation est prise en charge sur Linux, exécutez la commande suivante et vérifiez que la sortie n'est pas vide : ``` -egrep --color 'vmx|svm' /proc/cpuinfo +grep -E --color 'vmx|svm' /proc/cpuinfo ``` {{% /tab %}} + {{% tab name="macOS" %}} Pour vérifier si la virtualisation est prise en charge sur macOS, exécutez la commande suivante sur votre terminal. ``` -sysctl -a | grep machdep.cpu.features +sysctl -a | grep -E --color 'machdep.cpu.features|VMX' ``` Si vous trouvez `VMX` dans la sortie, la fonction VT-x est supportée sur votre OS. {{% /tab %}} + {{% tab name="Windows" %}} Pour vérifier si la virtualisation est prise en charge sur Windows 8 et au-delà, exécutez la commande suivante sur votre terminal Windows ou à l'invite de commande. ``` @@ -44,6 +44,12 @@ Hyper-V Requirements: VM Monitor Mode Extensions: Yes Data Execution Prevention Available: Yes ``` +Si vous voyez la sortie suivante, votre système a déjà un hyperviseur installé et vous pouvez ignorer l'étape suivante. 
+``` +Configuration requise pour Hyper-V: un hyperviseur a été détecté. Les fonctionnalités requises pour Hyper-V ne seront pas affichées. +``` + + {{% /tab %}} {{< /tabs >}} @@ -51,110 +57,205 @@ Hyper-V Requirements: VM Monitor Mode Extensions: Yes {{% capture steps %}} -## Installer un hyperviseur +# Installer Minikube -Si vous n'avez pas déjà un hyperviseur installé, installez-le maintenant pour votre système d'exploitation: +{{< tabs name="tab_with_md" >}} +{{% tab name="Linux" %}} -Système d'exploitation | Hyperviseurs supportés -:----------------|:--------------------- -macOS | [VirtualBox](https://www.virtualbox.org/wiki/Downloads), [VMware Fusion](https://www.vmware.com/products/fusion), [HyperKit](https://github.com/moby/hyperkit) -Linux | [VirtualBox](https://www.virtualbox.org/wiki/Downloads), [KVM](http://www.linux-kvm.org/) -Windows | [VirtualBox](https://www.virtualbox.org/wiki/Downloads), [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install) +### Installer kubectl -{{< note >}} -Minikube supporte également une option `--vm-driver=none` qui exécute les composants Kubernetes sur la machine hôte et non dans une VM. L'utilisation de ce pilote nécessite Docker et un environnement Linux mais pas un hyperviseur. -{{< /note >}} +Installez kubectl en suivant les instructions de la section [Installer et configurer kubectl](/fr/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux). -## Installer kubectl +### Installer un hyperviseur -* Installez kubectl en suivant les instructions de la section [Installer et configurer kubectl](/fr/docs/tasks/tools/install-kubectl/). +Si vous n'avez pas déjà un hyperviseur installé, installez-le maintenant pour votre système d'exploitation : -## Installer Minikube +• [KVM](http://www.linux-kvm.org/), qui utilise également QEMU -### macOS +• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) -La façon la plus simple d'installer Minikube sur macOS est d'utiliser [Homebrew](https://brew.sh): +Minikube supporte également une option `--vm-driver=none` qui exécute les composants Kubernetes sur la machine hôte et dans pas dans une VM. +L'utilisation de ce pilote nécessite [Docker](https://www.docker.com/products/docker-desktop) et un environnement Linux mais pas un hyperviseur. + +Si vous utilisez le pilote `none` dans Debian ou un dérivé, utilisez les paquets` .deb` pour +Docker plutôt que le package snap, qui ne fonctionne pas avec Minikube. +Vous pouvez télécharger les packages `.deb` depuis [Docker](https://www.docker.com/products/docker-desktop). + +{{< caution >}} +Le pilote VM `none` peut entraîner des problèmes de sécurité et de perte de données. +Avant d'utiliser `--driver=none`, consultez [cette documentation] (https://minikube.sigs.k8s.io/docs/reference/drivers/none/) pour plus d'informations. +{{}} + +Minikube prend également en charge un `vm-driver=podman` similaire au pilote Docker. Podman est exécuté en tant que superutilisateur (utilisateur root), c'est le meilleur moyen de garantir que vos conteneurs ont un accès complet à toutes les fonctionnalités disponibles sur votre système. + +{{< caution >}} +Le pilote `podman` nécessite l’exécution des conteneurs en tant que root car les comptes d’utilisateurs normaux n’ont pas un accès complet à toutes les fonctionnalités du système d’exploitation que leurs conteneurs pourraient avoir besoin d’exécuter. 
+{{}} + +### Installer Minikube à l'aide d'un package + +Il existe des packages * expérimentaux * pour Minikube; vous pouvez trouver des packages Linux (AMD64) +depuis la page [releases](https://github.com/kubernetes/minikube/releases) de Minikube sur GitHub. + +Utilisez l'outil de package de votre distribution Linux pour installer un package approprié. + +### Installez Minikube par téléchargement direct + +Si vous n'installez pas via un package, vous pouvez télécharger +un binaire autonome et l'utiliser. ```shell -brew install minikube +curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \ + && chmod +x minikube ``` -Vous pouvez aussi l'installer sur macOS en téléchargeant un binaire statique: +Voici un moyen simple d'ajouter l'exécutable Minikube à votre path : ```shell -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \ - && chmod +x minikube +sudo mkdir -p /usr/local/bin/ +sudo install minikube /usr/local/bin/ ``` -Voici une façon simple d'ajouter l'exécutable de Minikube à votre path: +### Installer Minikube en utilisant Homebrew + +Une autre alternative, vous pouvez installer Minikube en utilisant Linux [Homebrew] (https://docs.brew.sh/Homebrew-on-Linux) : ```shell -sudo mv minikube /usr/local/bin +brew install minikube ``` -### Linux +{{% /tab %}} +{{% tab name="macOS" %}} +### Installer kubectl -{{< note >}} -Ce document vous montre comment installer Minikube sur Linux en utilisant un binaire statique. -Pour d'autres méthodes d'installation sous Linux, reportez-vous à la section [Minikube documentation](https://minikube.sigs.k8s.io/docs/start/linux/). -{{< /note >}} +Installez kubectl en suivant les instructions de la section [Installer et configurer kubectl](/fr/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos). -Vous pouvez installer Minikube sur Linux en téléchargeant un binaire statique: +### Installer un hyperviseur + +Si vous n'avez pas encore installé d'hyperviseur, installez-en un maintenant : + +• [HyperKit](https://github.com/moby/hyperkit) + +• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) + +• [VMware Fusion](https://www.vmware.com/products/fusion) + +### Installer Minikube +La façon la plus simple d'installer Minikube sur macOS est d'utiliser [Homebrew](https://brew.sh): ```shell -curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \ +brew install minikube +``` + +Vous pouvez aussi l'installer sur macOS en téléchargeant un binaire statique : + +```shell +curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \ && chmod +x minikube ``` -Voici une façon simple d'ajouter l'exécutable de Minikube à votre path: +Voici une façon simple d'ajouter l'exécutable de Minikube à votre path : ```shell -sudo cp minikube /usr/local/bin && rm minikube +sudo mv minikube /usr/local/bin ``` -### Windows +{{% /tab %}} +{{% tab name="Windows" %}} +### Installer kubectl + +Installez kubectl en suivant les instructions de la section [Installer et configurer kubectl](/fr/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows). 
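Purely as an illustration (the linked page remains the reference), one common way to install kubectl on Windows is with Chocolatey, followed by a client-side version check:

```shell
# Run in an elevated shell; installs the kubectl CLI and confirms it responds
choco install kubernetes-cli
kubectl version --client
```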
+ +### Installer un hyperviseur + +Si vous n'avez pas encore installé d'hyperviseur, installez-en un maintenant : + +• [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install) + +• [VirtualBox](https://www.virtualbox.org/wiki/Downloads) {{< note >}} -Pour exécuter Minikube sur Windows, vous devez d'abord installer [VirtualBox](https://www.virtualbox.org/) ou [Hyper-V](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v). Hyper-V peut être utilisé avec trois versions de Windows 10 : Windows 10 Enterprise, Windows 10 Professional et Windows 10 Education. Voir le dépôt GitHub officiel de Minikube pour plus [d'informations sur l'installation](https://github.com/kubernetes/minikube/#installation). -{{< /note >}} +Hyper-V peut fonctionner sur trois versions de Windows 10: Windows 10 Entreprise, Windows 10 Professionnel et Windows 10 Éducation. +{{}} + +### Installer Minikube en utilisant Chocolatey La façon la plus simple d'installer Minikube sur Windows est d'utiliser [Chocolatey](https://chocolatey.org/) (exécuté avec les droits administrateur) : ```shell -choco install minikube kubernetes-cli +choco install minikube ``` Une fois l'installation de Minikube terminée, fermez la session CLI en cours et redémarrez. Minikube devrait avoir été ajouté à votre path automatiquement. -#### Installation manuelle de Windows +### Installer Minikube avec Windows Installer + +Pour installer manuellement Minikube sur Windows à l'aide de [Windows Installer](https://docs.microsoft.com/en-us/windows/desktop/msi/windows-installer-portal), téléchargez [`minikube-installer.exe`](https://github.com/kubernetes/minikube/releases/latest) et exécutez l'Installer. + +#### Installer Minikube manuellement Pour installer Minikube manuellement sur Windows, téléchargez [`minikube-windows-amd64`](https://github.com/kubernetes/minikube/releases/latest), renommez-le en `minikube.exe`, et ajoutez-le à votre path. -#### Windows Installer +{{% /tab %}} +{{< /tabs >}} -Pour installer manuellement Minikube sur Windows à l'aide de [Windows Installer](https://docs.microsoft.com/en-us/windows/desktop/msi/windows-installer-portal), téléchargez [`minikube-installer.exe`](https://github.com/kubernetes/minikube/releases/latest) et exécutez l'Installer. {{% /capture %}} {{% capture whatsnext %}} -* [Exécutez Kubernetes localement via Minikube](/docs/setup/minikube/) +* [Exécutez Kubernetes localement via Minikube](/fr/docs/setup/learning-environment/minikube/) {{% /capture %}} +## Confirmer l'installation + +Pour confirmer la réussite de l'installation d'un hyperviseur et d'un mini-cube, vous pouvez exécuter la commande suivante pour démarrer un cluster Kubernetes local : + +{{< note >}} + +Pour définir le `--driver` avec` minikube start`, entrez le nom de l'hyperviseur que vous avez installé en minuscules où `` est mentionné ci-dessous. Une liste complète des valeurs `--driver` est disponible dans [la documentation spécifiant le pilote VM](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver). 
+ +{{}} + +```shell +minikube start --driver= +``` + +Une fois `minikube start` terminé, exécutez la commande ci-dessous pour vérifier l'état du cluster : + +```shell +minikube status +``` + +Si votre cluster est en cours d'exécution, la sortie de `minikube status` devrait être similaire à : + +``` +host: Running +kubelet: Running +apiserver: Running +kubeconfig: Configured +``` + +Après avoir vérifié si Minikube fonctionne avec l'hyperviseur choisi, vous pouvez continuer à utiliser Minikube ou arrêter votre cluster. Pour arrêter votre cluster, exécutez : + +```shell +minikube stop +``` + ## Tout nettoyer pour recommencer à zéro -Si vous avez déjà installé minikube, exécutez: +Si vous avez déjà installé minikube, exécutez : ```shell minikube start ``` -Si cette commande renvoie une erreur: +Si cette commande renvoie une erreur : ```shell machine does not exist ``` -Vous devez supprimer les fichiers de configuration: +Vous devez supprimer les fichiers de configuration : ```shell rm -rf ~/.minikube ``` diff --git a/content/fr/examples/pods/private-reg-pod.yaml b/content/fr/examples/pods/private-reg-pod.yaml new file mode 100644 index 0000000000000..4029588dd0758 --- /dev/null +++ b/content/fr/examples/pods/private-reg-pod.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Pod +metadata: + name: private-reg +spec: + containers: + - name: private-reg-container + image: + imagePullSecrets: + - name: regcred + diff --git a/content/fr/includes/user-guide-migration-notice.md b/content/fr/includes/user-guide-migration-notice.md deleted file mode 100644 index 34949ae60634e..0000000000000 --- a/content/fr/includes/user-guide-migration-notice.md +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - -
      -

      NOTICE

      -

      À compter du 14 mars 2017, le groupe Kubernetes SIG-Docs-Maintainers a commencé la migration du contenu du guide de l'utilisateur, comme annoncé précédemment. SIG Docs community à travers le groupe kubernetes-sig-docs et le canal Slack kubernetes.slack.com #sig-docs .

      -

      Les guides de l’utilisateur dans cette section sont en train d’être refondus dans des rubriques dans Tutoriels, Tâches et Concepts. Tout ce qui a été déplacé aura un avis placé à son emplacement précédent, ainsi qu'un lien vers son nouvel emplacement. La réorganisation met en œuvre une nouvelle table des matières et devrait faciliter la recherche et améliorer la lisibilité de la documentation pour un public plus large.

      -

      Pour toute question, veuillez contacter:kubernetes-sig-docs@googlegroups.com

      -
      diff --git a/content/id/docs/concepts/_index.md b/content/id/docs/concepts/_index.md index 31a1404f48b7f..cd135ca14d696 100644 --- a/content/id/docs/concepts/_index.md +++ b/content/id/docs/concepts/_index.md @@ -17,11 +17,11 @@ kamu belajar lebih dalam bagaimana cara kerja Kubernetes. ## Ikhtisar -Untuk menggunakan Kubernetes, kamu menggunakan obyek-obyek *Kubernetes API* untuk merepresentasikan +Untuk menggunakan Kubernetes, kamu menggunakan objek-objek *Kubernetes API* untuk merepresentasikan *state* yang diinginkan: apa yang aplikasi atau *workload* lain yang ingin kamu jalankan, *image* kontainer yang digunakan, jaringan atau *resource disk* apa yang ingin kamu sediakan, dan lain sebagainya. Kamu membuat *state* yang diinginkan dengan cara membuat -obyek dengan menggunakan API Kubernetes, dan biasanya menggunakan `command-line interface`, yaitu `kubectl`. +objek dengan menggunakan API Kubernetes, dan biasanya menggunakan `command-line interface`, yaitu `kubectl`. Kamu juga dapat secara langsung berinteraksi dengan klaster untuk membuat atau mengubah *state* yang kamu inginkan. @@ -38,16 +38,16 @@ suatu aplikasi, dan lain sebagainya. *Control Plane* Kubernetes terdiri dari sek * **[kubelet](/docs/admin/kubelet/)**, yang menjadi perantara komunikasi dengan *master*. * **[kube-proxy](/docs/admin/kube-proxy/)**, sebuah *proxy* yang merupakan representasi jaringan yang ada pada setiap *node*. -## Obyek Kubernetes +## Objek Kubernetes Kubernetes memiliki beberapa abstraksi yang merepresentasikan *state* dari sistem kamu: apa yang aplikasi atau *workload* lain yang ingin kamu jalankan, jaringan atau *resource disk* apa yang ingin kamu sediakan, serta beberapa informasi lain terkait apa yang sedang klaster kamu lakukan. -Abstraksi ini direpresentasikan oleh obyek yang tersedia di API Kubernetes; -lihat [ikhtisar obyek-obyek Kubernetes](/docs/concepts/abstractions/overview/) +Abstraksi ini direpresentasikan oleh objek yang tersedia di API Kubernetes; +lihat [ikhtisar objek-objek Kubernetes](/docs/concepts/abstractions/overview/) untuk penjelasan yang lebih mendetail. -Obyek mendasar Kubernetes termasuk: +Objek mendasar Kubernetes termasuk: * [Pod](/docs/concepts/workloads/pods/pod-overview/) * [Service](/docs/concepts/services-networking/service/) @@ -55,7 +55,7 @@ Obyek mendasar Kubernetes termasuk: * [Namespace](/docs/concepts/overview/working-with-objects/namespaces/) Sebagai tambahan, Kubernetes memiliki beberapa abstraksi yang lebih tinggi yang disebut kontroler. -Kontroler merupakan obyek mendasar dengan fungsi tambahan, contoh dari kontroler ini adalah: +Kontroler merupakan objek mendasar dengan fungsi tambahan, contoh dari kontroler ini adalah: * [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) * [Deployment](/docs/concepts/workloads/controllers/deployment/) @@ -67,14 +67,14 @@ Kontroler merupakan obyek mendasar dengan fungsi tambahan, contoh dari kontroler Berbagai bagian *Control Plane* Kubernetes, seperti *master* dan *process-process* kubelet, mengatur bagaimana Kubernetes berkomunikasi dengan klaster kamu. *Control Plane* -menjaga seluruh *record* dari obyek Kubernetes serta terus menjalankan -iterasi untuk melakukan manajemen *state* obyek. *Control Plane* akan memberikan respon +menjaga seluruh *record* dari objek Kubernetes serta terus menjalankan +iterasi untuk melakukan manajemen *state* objek. *Control Plane* akan memberikan respon apabila terdapat perubahan pada klaster kamu dan mengubah *state* saat ini agar sesuai dengan *state* yang diinginkan. 
Contohnya, ketika kamu menggunakan API Kubernetes untuk membuat sebuah *Deployment*, kamu memberikan sebuah *state* baru yang harus dipenuhi oleh sistem. *Control Plane* -kemudian akan mencatat obyek apa saja yang dibuat, serta menjalankan instruksi yang kamu berikan +kemudian akan mencatat objek apa saja yang dibuat, serta menjalankan instruksi yang kamu berikan dengan cara melakukan `start` aplikasi dan melakukan `scheduling` aplikasi tersebut pada *node*, dengan kata lain mengubah *state* saat ini agar sesuai dengan *state* yang diinginkan. @@ -92,7 +92,7 @@ kamu berkomunikasi dengan *master* klaster Kubernetes kamu. menjalankan aplikasi kamu. Master mengontrol setiap node; kamu akan jarang berinteraksi dengan *node* secara langsung. -#### Metadata obyek +#### Metadata objek * [Anotasi](/docs/concepts/overview/working-with-objects/annotations/) diff --git a/content/id/docs/concepts/architecture/controller.md b/content/id/docs/concepts/architecture/controller.md new file mode 100644 index 0000000000000..4ce6974b34412 --- /dev/null +++ b/content/id/docs/concepts/architecture/controller.md @@ -0,0 +1,178 @@ +--- +title: Controller +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +Dalam bidang robotika dan otomatisasi, _control loop_ atau kontrol tertutup adalah +lingkaran tertutup yang mengatur keadaan suatu sistem. + +Berikut adalah salah satu contoh kontrol tertutup: termostat di sebuah ruangan. + +Ketika kamu mengatur suhunya, itu mengisyaratkan ke termostat +tentang *keadaan yang kamu inginkan*. Sedangkan suhu kamar yang sebenarnya +adalah *keadaan saat ini*. Termostat berfungsi untuk membawa keadaan saat ini +mendekati ke keadaan yang diinginkan, dengan menghidupkan atau mematikan +perangkat. + +Di Kubernetes, _controller_ adalah kontrol tertutup yang mengawasi keadaan klaster +{{< glossary_tooltip term_id="cluster" text="klaster" >}} kamu, lalu membuat atau meminta +perubahan jika diperlukan. Setiap _controller_ mencoba untuk memindahkan status +klaster saat ini mendekati keadaan yang diinginkan. + +{{< glossary_definition term_id="controller" length="short">}} + +{{% /capture %}} + + +{{% capture body %}} + +## Pola _controller_ + +Sebuah _controller_ melacak sekurang-kurangnya satu jenis sumber daya dari +Kubernetes. +[objek-objek](/docs/concepts/overview/working-with-objects/kubernetes-objects/) ini +memiliki *spec field* yang merepresentasikan keadaan yang diinginkan. Satu atau +lebih _controller_ untuk *resource* tersebut bertanggung jawab untuk membuat +keadaan sekarang mendekati keadaan yang diinginkan. + +_Controller_ mungkin saja melakukan tindakan itu sendiri; namun secara umum, di +Kubernetes, _controller_ akan mengirim pesan ke +{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} yang +mempunyai efek samping yang bermanfaat. Kamu bisa melihat contoh-contoh +di bawah ini. + +{{< comment >}} +Beberapa _controller_ bawaan, seperti _controller namespace_, bekerja pada objek +yang tidak memiliki *spec*. Agar lebih sederhana, halaman ini tidak +menjelaskannya secara detail. +{{< /comment >}} + +### Kontrol melalui server API + +_Controller_ {{< glossary_tooltip term_id="job" >}} adalah contoh dari _controller_ +bawaan dari Kubernetes. _Controller_ bawaan tersebut mengelola status melalui +interaksi dengan server API dari suatu klaster. 
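Before the Job controller walk-through that follows, here is a purely illustrative minimal Job manifest of the kind of object that controller reconciles; the name, image and command are assumptions rather than anything defined by this page:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: contoh-job        # illustrative name
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        # compute pi to 2000 digits and exit; the Job controller then marks the Job finished
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```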
+ +Job adalah sumber daya dalam Kubernetes yang menjalankan a +{{< glossary_tooltip term_id="pod" >}}, atau mungkin beberapa Pod sekaligus, +untuk melakukan sebuah pekerjaan dan kemudian berhenti. + +(Setelah [dijadwalkan](../../../../en/docs/concepts/scheduling/), objek Pod +akan menjadi bagian dari keadaan yang diinginkan oleh kubelet). + +Ketika _controller job_ melihat tugas baru, maka _controller_ itu memastikan bahwa, +di suatu tempat pada klaster kamu, kubelet dalam sekumpulan Node menjalankan +Pod-Pod dengan jumlah yang benar untuk menyelesaikan pekerjaan. _Controller job_ +tidak menjalankan sejumlah Pod atau kontainer apa pun untuk dirinya sendiri. +Namun, _controller job_ mengisyaratkan kepada server API untuk membuat atau +menghapus Pod. Komponen-komponen lain dalam +{{< glossary_tooltip text="control plane" term_id="control-plane" >}} +bekerja berdasarkan informasi baru (adakah Pod-Pod baru untuk menjadwalkan dan +menjalankan pekerjan), dan pada akhirnya pekerjaan itu selesai. + +Setelah kamu membuat Job baru, status yang diharapkan adalah bagaimana +pekerjaan itu bisa selesai. _Controller job_ membuat status pekerjaan saat ini +agar mendekati dengan keadaan yang kamu inginkan: membuat Pod yang melakukan +pekerjaan yang kamu inginkan untuk Job tersebut, sehingga Job hampir +terselesaikan. + +_Controller_ juga memperbarui objek yang mengkonfigurasinya. Misalnya: setelah +pekerjaan dilakukan untuk Job tersebut, _controller job_ memperbarui objek Job +dengan menandainya `Finished`. + +(Ini hampir sama dengan bagaimana beberapa termostat mematikan lampu untuk +mengindikasikan bahwa kamar kamu sekarang sudah berada pada suhu yang kamu +inginkan). + +### Kontrol Langsung + +Berbeda dengan sebuah Job, beberapa dari _controller_ perlu melakukan perubahan +sesuatu di luar dari klaster kamu. + +Sebagai contoh, jika kamu menggunakan kontrol tertutup untuk memastikan apakah +cukup {{< glossary_tooltip text="Node" term_id="node" >}} +dalam kluster kamu, maka _controller_ memerlukan sesuatu di luar klaster saat ini +untuk mengatur Node-Node baru apabila dibutuhkan. + +_controller_ yang berinteraksi dengan keadaan eksternal dapat menemukan keadaan +yang diinginkannya melalui server API, dan kemudian berkomunikasi langsung +dengan sistem eksternal untuk membawa keadaan saat ini mendekat keadaan yang +diinginkan. + +(Sebenarnya ada sebuah _controller_ yang melakukan penskalaan node secara +horizontal dalam klaster kamu. Silahkan lihat +[_autoscaling_ klaster](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling)). + +## Status sekarang berbanding status yang diinginkan {#sekarang-banding-diinginkan} + +Kubernetes mengambil pandangan sistem secara _cloud-native_, dan mampu menangani +perubahan yang konstan. + +Klaster kamu dapat mengalami perubahan kapan saja pada saat pekerjaan sedang +berlangsung dan kontrol tertutup secara otomatis memperbaiki setiap kegagalan. +Hal ini berarti bahwa, secara potensi, klaster kamu tidak akan pernah mencapai +kondisi stabil. + +Selama _controller_ dari klaster kamu berjalan dan mampu membuat perubahan yang +bermanfaat, tidak masalah apabila keadaan keseluruhan stabil atau tidak. + +## Perancangan + +Sebagai prinsip dasar perancangan, Kubernetes menggunakan banyak _controller_ yang +masing-masing mengelola aspek tertentu dari keadaan klaster. 
Yang paling umum, +kontrol tertutup tertentu menggunakan salah satu jenis sumber daya +sebagai suatu keadaan yang diinginkan, dan memiliki jenis sumber daya yang +berbeda untuk dikelola dalam rangka membuat keadaan yang diinginkan terjadi. + +Sangat penting untuk memiliki beberapa _controller_ sederhana daripada hanya satu +_controller_ saja, dimana satu kumpulan monolitik kontrol tertutup saling +berkaitan satu sama lain. Karena _controller_ bisa saja gagal, sehingga Kubernetes +dirancang untuk memungkinkan hal tersebut. + +Misalnya: _controller_ pekerjaan melacak objek pekerjaan (untuk menemukan +adanya pekerjaan baru) dan objek Pod (untuk menjalankan pekerjaan tersebut dan +kemudian melihat lagi ketika pekerjaan itu sudah selesai). Dalam hal ini yang +lain membuat pekerjaan, sedangkan _controller_ pekerjaan membuat Pod-Pod. + +{{< note >}} +Ada kemungkinan beberapa _controller_ membuat atau memperbarui jenis objek yang +sama. Namun di belakang layar, _controller_ Kubernetes memastikan bahwa mereka +hanya memperhatikan sumbr daya yang terkait dengan sumber daya yang mereka +kendalikan. + +Misalnya, kamu dapat memiliki Deployment dan Job; dimana keduanya akan membuat +Pod. _Controller Job_ tidak akan menghapus Pod yang dibuat oleh Deployment kamu, +karena ada informasi ({{< glossary_tooltip term_id="label" text="labels" >}}) +yang dapat oleh _controller_ untuk membedakan Pod-Pod tersebut. +{{< /note >}} + +## Berbagai cara menjalankan beberapa _controller_ {#menjalankan-_controller_} + +Kubernetes hadir dengan seperangkat _controller_ bawaan yang berjalan di dalam +{{< glossary_tooltip term_id="kube-controller-manager" >}}. Beberapa _controller_ +bawaan memberikan perilaku inti yang sangat penting. + +_Controller Deployment_ dan _controller Job_ adalah contoh dari _controller_ yang +hadir sebagai bagian dari Kubernetes itu sendiri (_controller_ "bawaan"). +Kubernetes memungkinkan kamu menjalankan _control plane_ yang tangguh, sehingga +jika ada _controller_ bawaan yang gagal, maka bagian lain dari _control plane_ akan +mengambil alih pekerjaan. + +Kamu juga dapat menemukan pengontrol yang berjalan di luar _control plane_, untuk +mengembangkan lebih jauh Kubernetes. Atau, jika mau, kamu bisa membuat +_controller_ baru sendiri. Kamu dapat menjalankan _controller_ kamu sendiri sebagai +satu kumpulan dari beberapa Pod, atau bisa juga sebagai bagian eksternal dari +Kubernetes. Manakah yang paling sesuai akan tergantung pada apa yang _controller_ +khusus itu lakukan. + +{{% /capture %}} + +{{% capture whatsnext %}} +* Silahkan baca tentang [_control plane_ Kubernetes](/docs/concepts/#kubernetes-control-plane) +* Temukan beberapa dasar tentang [objek-objek Kubernetes](/docs/concepts/#kubernetes-objects) +* Pelajari lebih lanjut tentang [Kubernetes API](/docs/concepts/overview/kubernetes-api/) +* Apabila kamu ingin membuat _controller_ sendiri, silakan lihat [pola perluasan](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) dalam memperluas Kubernetes. 
+{{% /capture %}} diff --git a/content/id/docs/concepts/cluster-administration/manage-deployment.md b/content/id/docs/concepts/cluster-administration/manage-deployment.md index d53b4106a620a..5a94792af2da0 100644 --- a/content/id/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/id/docs/concepts/cluster-administration/manage-deployment.md @@ -255,7 +255,7 @@ Servis _frontend_ akan meliputi kedua set replika dengan menentukan subset bersa tier: frontend ``` -Anda dapat mengatur jumlah replika rilis _stable_ dan _canary_ untuk menentukan rasio dari tiap rilis yang akan menerima _traffic production live_ (dalam kasus ini 3:1). +Kamu dapat mengatur jumlah replika rilis _stable_ dan _canary_ untuk menentukan rasio dari tiap rilis yang akan menerima _traffic production live_ (dalam kasus ini 3:1). Ketika telah yakin, kamu dapat memindahkan _track stable_ ke rilis baru dan menghapus _canary_. Untuk contoh yang lebih jelas, silahkan cek [tutorial melakukan deploy Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary). @@ -322,7 +322,7 @@ kubectl scale deployment/my-nginx --replicas=1 deployment.extensions/my-nginx scaled ``` -Sekarang anda hanya memiliki satu _pod_ yang dikelola oleh deployment. +Sekarang kamu hanya memiliki satu _pod_ yang dikelola oleh deployment. ```shell kubectl get pods -l app=nginx diff --git a/content/id/docs/concepts/configuration/resource-bin-packing.md b/content/id/docs/concepts/configuration/resource-bin-packing.md new file mode 100644 index 0000000000000..26798dccfd779 --- /dev/null +++ b/content/id/docs/concepts/configuration/resource-bin-packing.md @@ -0,0 +1,217 @@ +--- +title: Bin Packing Sumber Daya untuk Sumber Daya Tambahan +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="1.16" state="alpha" >}} + +_Kube-scheduler_ dapat dikonfigurasikan untuk mengaktifkan pembungkusan rapat +(_bin packing_) sumber daya bersama dengan sumber daya tambahan melalui fungsi prioritas +`RequestedToCapacityRatioResourceAllocation`. Fungsi-fungsi prioritas dapat digunakan +untuk menyempurnakan _kube-scheduler_ sesuai dengan kebutuhan. + +{{% /capture %}} + +{{% capture body %}} + +## Mengaktifkan _Bin Packing_ menggunakan RequestedToCapacityRatioResourceAllocation + +Sebelum Kubernetes 1.15, _kube-scheduler_ digunakan untuk memungkinkan mencetak +skor berdasarkan rasio permintaan terhadap kapasitas sumber daya utama seperti +CPU dan Memori. Kubernetes 1.16 menambahkan parameter baru ke fungsi prioritas +yang memungkinkan pengguna untuk menentukan sumber daya beserta dengan bobot +untuk setiap sumber daya untuk memberi nilai dari Node berdasarkan rasio +permintaan terhadap kapasitas. Hal ini memungkinkan pengguna untuk _bin pack_ +sumber daya tambahan dengan menggunakan parameter yang sesuai untuk meningkatkan +pemanfaatan sumber daya yang langka dalam klaster yang besar. Perilaku +`RequestedToCapacityRatioResourceAllocation` dari fungsi prioritas dapat +dikontrol melalui pilihan konfigurasi yang disebut ` RequestToCapacityRatioArguments`. +Argumen ini terdiri dari dua parameter yaitu `shape` dan `resources`. Shape +memungkinkan pengguna untuk menyempurnakan fungsi menjadi yang paling tidak +diminta atau paling banyak diminta berdasarkan nilai `utilization` dan `score`. +Sumber daya terdiri dari `name` yang menentukan sumber daya mana yang dipertimbangkan +selama penilaian dan `weight` yang menentukan bobot masing-masing sumber daya. 
+ +Di bawah ini adalah contoh konfigurasi yang menetapkan `requestedToCapacityRatioArguments` +pada perilaku _bin packing_ untuk sumber daya tambahan` intel.com/foo` dan `intel.com/bar` + +```json +{ + "kind" : "Policy", + "apiVersion" : "v1", + + ... + + "priorities" : [ + + ... + + { + "name": "RequestedToCapacityRatioPriority", + "weight": 2, + "argument": { + "requestedToCapacityRatioArguments": { + "shape": [ + {"utilization": 0, "score": 0}, + {"utilization": 100, "score": 10} + ], + "resources": [ + {"name": "intel.com/foo", "weight": 3}, + {"name": "intel.com/bar", "weight": 5} + ] + } + } + } + ], + } +``` + +**Fitur ini dinonaktifkan secara _default_** + +### Tuning RequestedToCapacityRatioResourceAllocation Priority Function + +`shape` digunakan untuk menentukan perilaku dari fungsi `RequestedToCapacityRatioPriority`. + +```yaml + {"utilization": 0, "score": 0}, + {"utilization": 100, "score": 10} +``` + +Argumen di atas memberikan Node nilai 0 jika utilisasi 0% dan 10 untuk utilisasi 100%, +yang kemudian mengaktfikan perilaku _bin packing_. Untuk mengaktifkan dari paling +yang tidak diminta, nilainya harus dibalik sebagai berikut. + +```yaml + {"utilization": 0, "score": 100}, + {"utilization": 100, "score": 0} +``` + +`resources` adalah parameter opsional yang secara _default_ diatur ke: + +``` yaml +"resources": [ + {"name": "CPU", "weight": 1}, + {"name": "Memory", "weight": 1} + ] +``` + +Ini dapat digunakan untuk menambahkan sumber daya tambahan sebagai berikut: + +```yaml +"resources": [ + {"name": "intel.com/foo", "weight": 5}, + {"name": "CPU", "weight": 3}, + {"name": "Memory", "weight": 1} + ] +``` + +Parameter `weight` adalah opsional dan diatur ke 1 jika tidak ditentukan. +Selain itu, `weight` tidak dapat diatur ke nilai negatif. + +### Bagaimana Fungsi Prioritas RequestedToCapacityRatioResourceAllocation Menilai Node + +Bagian ini ditujukan bagi kamu yang ingin memahami secara detail internal +dari fitur ini. +Di bawah ini adalah contoh bagaimana nilai dari Node dihitung untuk satu kumpulan +nilai yang diberikan. 
+ +``` +Sumber daya yang diminta + +intel.com/foo : 2 +Memory: 256MB +CPU: 2 + +Bobot dari sumber daya + +intel.com/foo : 5 +Memory: 1 +CPU: 3 + +FunctionShapePoint {{0, 0}, {100, 10}} + +Spesifikasi dari Node 1 + +Tersedia: + +intel.com/foo : 4 +Memory : 1 GB +CPU: 8 + +Digunakan: + +intel.com/foo: 1 +Memory: 256MB +CPU: 1 + + +Nilai Node: + +intel.com/foo = resourceScoringFunction((2+1),4) + = (100 - ((4-3)*100/4) + = (100 - 25) + = 75 + = rawScoringFunction(75) + = 7 + +Memory = resourceScoringFunction((256+256),1024) + = (100 -((1024-512)*100/1024)) + = 50 + = rawScoringFunction(50) + = 5 + +CPU = resourceScoringFunction((2+1),8) + = (100 -((8-3)*100/8)) + = 37.5 + = rawScoringFunction(37.5) + = 3 + +NodeScore = (7 * 5) + (5 * 1) + (3 * 3) / (5 + 1 + 3) + = 5 + + +Spesifikasi dari Node 2 + +Tersedia: + +intel.com/foo: 8 +Memory: 1GB +CPU: 8 + +Digunakan: + +intel.com/foo: 2 +Memory: 512MB +CPU: 6 + + +Nilai Node: + +intel.com/foo = resourceScoringFunction((2+2),8) + = (100 - ((8-4)*100/8) + = (100 - 25) + = 50 + = rawScoringFunction(50) + = 5 + +Memory = resourceScoringFunction((256+512),1024) + = (100 -((1024-768)*100/1024)) + = 75 + = rawScoringFunction(75) + = 7 + +CPU = resourceScoringFunction((2+6),8) + = (100 -((8-8)*100/8)) + = 100 + = rawScoringFunction(100) + = 10 + +NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3) + = 7 + +``` + +{{% /capture %}} diff --git a/content/id/docs/concepts/containers/container-environment-variables.md b/content/id/docs/concepts/containers/container-environment.md similarity index 98% rename from content/id/docs/concepts/containers/container-environment-variables.md rename to content/id/docs/concepts/containers/container-environment.md index 2a44dcbdcd26e..55c1bea6cb635 100644 --- a/content/id/docs/concepts/containers/container-environment-variables.md +++ b/content/id/docs/concepts/containers/container-environment.md @@ -1,5 +1,5 @@ --- -title: Variabel Environment Kontainer +title: Kontainer Environment content_template: templates/concept weight: 20 --- diff --git a/content/id/docs/concepts/containers/overview.md b/content/id/docs/concepts/containers/overview.md new file mode 100644 index 0000000000000..7ec5ef55d5e53 --- /dev/null +++ b/content/id/docs/concepts/containers/overview.md @@ -0,0 +1,49 @@ +--- +title: Ikhtisar Kontainer +content_template: templates/concept +weight: 1 +--- + +{{% capture overview %}} + +Kontainer adalah teknologi untuk mengemas kode (yang telah dikompilasi) menjadi +suatu aplikasi beserta dengan dependensi-dependensi yang dibutuhkannya pada saat +dijalankan. Setiap kontainer yang Anda jalankan dapat diulang; standardisasi +dengan menyertakan dependensinya berarti Anda akan mendapatkan perilaku yang +sama di mana pun Anda menjalankannya. + +Kontainer memisahkan aplikasi dari infrastruktur host yang ada dibawahnya. Hal +ini membuat penyebaran lebih mudah di lingkungan cloud atau OS yang berbeda. + +{{% /capture %}} + +{{% capture body %}} + +## Image-Image Kontainer + +[Kontainer image](/docs/concepts/containers/images/) meruapakan paket perangkat lunak +yang siap dijalankan, mengandung semua yang diperlukan untuk menjalankan +sebuah aplikasi: kode dan setiap *runtime* yang dibutuhkan, *library* dari +aplikasi dan sistem, dan nilai *default* untuk penganturan yang penting. + +Secara desain, kontainer tidak bisa berubah: Anda tidak dapat mengubah kode +dalam kontainer yang sedang berjalan. 
Jika Anda memiliki aplikasi yang
+terkontainerisasi dan ingin melakukan perubahan, maka Anda perlu membuat
+_image_ baru yang menyertakan perubahan tersebut, kemudian membuat ulang kontainer
+dengan memulai dari _image_ yang sudah diperbarui.
+
+## Kontainer _runtime_
+
+Kontainer *runtime* adalah perangkat lunak yang bertanggung jawab untuk
+menjalankan kontainer. Kubernetes mendukung beberapa kontainer *runtime*:
+{{< glossary_tooltip term_id="docker" >}},
+{{< glossary_tooltip term_id="containerd" >}},
+{{< glossary_tooltip term_id="cri-o" >}}, dan semua implementasi dari
+[Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md).
+
+## Selanjutnya
+
+- Baca tentang [image-image kontainer](https://kubernetes.io/docs/concepts/containers/images/)
+- Baca tentang [Pod](https://kubernetes.io/docs/concepts/workloads/pods/)
+
+{{% /capture %}}
\ No newline at end of file
diff --git a/content/id/docs/concepts/extend-kubernetes/operator.md b/content/id/docs/concepts/extend-kubernetes/operator.md
new file mode 100644
index 0000000000000..dd9b80348570d
--- /dev/null
+++ b/content/id/docs/concepts/extend-kubernetes/operator.md
@@ -0,0 +1,146 @@
+---
+title: Pola Operator
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+Operator adalah ekstensi perangkat lunak untuk Kubernetes yang memanfaatkan
+[_custom resource_](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+untuk mengelola aplikasi dan komponen-komponennya. Operator mengikuti prinsip
+Kubernetes, khususnya dalam hal [_control loop_](/docs/concepts/#kubernetes-control-plane).
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Motivasi
+
+Pola dari Operator bertujuan untuk menangkap tujuan utama dari Operator manusia
+yang mengelola layanan atau suatu kumpulan layanan. Operator manusia yang
+menjaga aplikasi dan layanan tertentu memiliki pengetahuan yang mendalam tentang
+bagaimana sistem harus berperilaku, bagaimana cara menyebarkannya, dan
+bagaimana bereaksi jika ada masalah.
+
+Orang-orang yang menjalankan _workload-workload_ di Kubernetes pada umumnya suka
+menggunakan otomatisasi untuk menangani tugas-tugas yang berulang. Pola
+Operator menangkap bagaimana kamu dapat menulis kode untuk mengotomatiskan
+sebuah tugas di luar batas apa yang dapat disediakan oleh Kubernetes itu
+sendiri.
+
+## Operator di Kubernetes
+
+Kubernetes didesain untuk otomasi. Secara bawaan (_out of the box_), kamu sudah
+mendapatkan banyak otomatisasi dari komponen inti Kubernetes. Kamu dapat menggunakan
+Kubernetes untuk mengotomasikan penyebaran dan pengoperasian _workload-workload_, *dan*
+kamu juga dapat mengotomasikan cara Kubernetes melakukan pekerjaan itu.
+
+Konsep dari {{< glossary_tooltip text="controller" term_id="controller" >}}
+Kubernetes memungkinkan kamu memperluas perilaku klaster tanpa harus mengubah
+kode dari Kubernetes itu sendiri.
+
+Operator adalah klien API dari Kubernetes yang bertindak sebagai _controller_
+untuk [_custom resource_](/docs/concepts/api-extension/custom-resources/).
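+
+Sebagai gambaran, _custom resource_ semacam itu biasanya diperkenalkan ke klaster
+melalui sebuah CustomResourceDefinition. Sketsa berikut memakai jenis `SampleDB`
+(selaras dengan contoh pada bagian berikutnya); nama grup `example.com`, versi,
+dan skemanya hanyalah asumsi untuk keperluan ilustrasi, bukan definisi resmi.
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  # nama harus berbentuk <plural>.<group>
+  name: sampledbs.example.com
+spec:
+  group: example.com
+  names:
+    kind: SampleDB
+    plural: sampledbs
+    singular: sampledb
+  scope: Namespaced
+  versions:
+    - name: v1alpha1
+      served: true
+      storage: true
+      schema:
+        openAPIV3Schema:
+          type: object
+          properties:
+            spec:
+              type: object
+              properties:
+                # bagian spec inilah yang nantinya dibaca oleh controller dari Operator
+                replicas:
+                  type: integer
+                backupSchedule:
+                  type: string
+```
+
+Operator kamu kemudian berperan sebagai _controller_ yang mengawasi objek-objek
+`SampleDB` tersebut melalui server API dan menyesuaikan keadaan klaster dengan
+isi `spec`-nya.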
+ +## Contoh Operator {#contoh} + +Beberapa hal yang dapat kamu gunakan untuk mengotomasi Operator meliputi: + +* menyebarkan aplikasi sesuai dengan permintaan +* mengambil dan memulihkan backup status dari sebuah aplikasi +* menangani pembaruan kode aplikasi termasuk dengan perubahan terkait seperti + skema basis data atau pengaturan konfigurasi tambahan +* mempublikasikan layanan ke sebuah aplikasi yang tidak mendukung API Kubernetes + untuk menemukan mereka +* mensimulasikan kegagalan pada seluruh atau sebagian klaster kamu untuk + menguji resiliensinya +* memilih suatu pemimpin untuk aplikasi yang terdistribusi tanpa adanya proses + pemilihan anggota secara internal + +Seperti apa sebuah Operator dalam kasus yang lebih terperinci? Berikut ini +adalah contoh yang lebih detail: + +1. Sebuah _custom resource_ bernama SampleDB, bisa kamu konfigurasi ke + dalam klaster. +2. Sebuah Deployment memastikan sebuah Pod berjalan dimana didalamnya + berisi bagian _controller_ dari Operator. +3. Kontainer Image dari kode Operator. +4. Kode _controller_ yang menanyakan pada *control-plane* untuk mencari tahu + apakah itu sumber daya SampleDB telah dikonfigurasi. +5. Inti dari Operator adalah kode untuk memberi tahu server API bagaimana + membuatnya kondisi sebenarnya sesuai dengan sumber daya yang dikonfigurasi. +   * Jika kamu menambahkan SampleDB baru, Operator menyiapkan + PersistentVolumeClaims untuk menyediakan penyimpanan basis data yang + tahan lama, sebuah StatefulSet untuk menjalankan SampleDB dan pekerjaan + untuk menangani konfigurasi awal. +   * Jika kamu menghapusnya, Operator mengambil _snapshot_, lalu memastikannya +     StatefulSet dan Volume juga dihapus. +6. Operator juga mengelola backup basis data yang reguler. Untuk setiap resource + SampleDB, Operator menentukan kapan membuat Pod yang dapat terhubung +   ke database dan mengambil backup. Pod-Pod ini akan bergantung pada ConfigMap +   dan / atau sebuah Secret yang memiliki basis data koneksi dan kredensial. +7. Karena Operator bertujuan untuk menyediakan otomatisasi yang kuat untuk + resource yang dikelola, maka akan ada kode pendukung tambahan. Sebagai contoh + , kode memeriksa untuk melihat apakah basis data menjalankan versi yang + lama dan, jika demikian, kode membuat objek Job yang melakukan pembaruan untuk + kamu. + +## Menyebarkan Operator + +Cara paling umum untuk menyebarkan Operator adalah dengan menambahkan +CustomResourceDefinition dan _controller_ yang berkaitan ke dalam klaster kamu. +_Controller_ biasanya akan berjalan di luar +{{< glossary_tooltip text="control plane" term_id="control-plane" >}}, +seperti kamu akan menjalankan aplikasi apa pun yang dikontainerisasi. +Misalnya, kamu bisa menjalankan _controller_ di klaster kamu sebagai sebuah +Deployment. + +## Menggunakan Operator {#menggunakan operator} + +Setelah Operator disebarkan, kamu akan menggunakannya dengan menambahkan, +memodifikasi, atau menghapus jenis sumber daya yang digunakan Operator tersebut. +Melanjutkan contoh diatas, kamu akan menyiapkan Deployment untuk Operator itu +sendiri, dan kemudian: + +```shell +kubectl get SampleDB # find configured databases + +kubectl edit SampleDB/example-database # manually change some settings +``` + +…dan itu saja! Operator akan berhati-hati dalam menerapkan perubahan +serta menjaga layanan yang ada dalam kondisi yang baik. + +## Menulis Operator Kamu Sendiri {#menulis-operator} + +Jika tidak ada Operator dalam ekosistem yang mengimplementasikan perilaku kamu +inginkan, kamu dapat kode kamu sendiri. 
Dalam [Selanjutnya](#selanjutnya) kamu
+akan menemukan beberapa tautan ke _library_ dan perangkat yang dapat kamu gunakan
+untuk menulis Operator _Cloud Native_ kamu sendiri.
+
+Kamu juga dapat mengimplementasikan Operator (yaitu, _Controller_) dengan
+menggunakan bahasa/_runtime_ apa pun yang dapat bertindak sebagai
+[klien dari API Kubernetes](/docs/reference/using-api/client-libraries/).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Pelajari lebih lanjut tentang [_custom resources_](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+* Temukan Operator siap pakai (_ready-made_) di [OperatorHub.io](https://operatorhub.io/)
+  yang sesuai dengan _use case_ kamu
+* Menggunakan perangkat yang ada untuk menulis Operator kamu sendiri, misalnya:
+  * menggunakan [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
+  * menggunakan [kubebuilder](https://book.kubebuilder.io/)
+  * menggunakan [Metacontroller](https://metacontroller.app/) bersama dengan
+    `WebHooks` yang kamu implementasikan sendiri
+  * menggunakan [Operator _Framework_](https://github.com/operator-framework/getting-started)
+* [Terbitkan](https://operatorhub.io/) Operator kamu agar dapat digunakan oleh
+  orang lain
+* Baca [artikel asli dari CoreOS](https://coreos.com/blog/introducing-operators.html)
+  yang memperkenalkan pola Operator
+* Baca sebuah [artikel](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps)
+  dari Google Cloud tentang praktik terbaik dalam membangun Operator
+{{% /capture %}}
diff --git a/content/id/docs/concepts/scheduling/_index.md b/content/id/docs/concepts/scheduling/_index.md
new file mode 100644
index 0000000000000..8903577124f58
--- /dev/null
+++ b/content/id/docs/concepts/scheduling/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Penjadwalan"
+weight: 90
+---
+
diff --git a/content/id/docs/concepts/scheduling/kube-scheduler.md b/content/id/docs/concepts/scheduling/kube-scheduler.md
new file mode 100644
index 0000000000000..bf6fd768d8ef0
--- /dev/null
+++ b/content/id/docs/concepts/scheduling/kube-scheduler.md
@@ -0,0 +1,102 @@
+---
+title: Penjadwal Kubernetes
+content_template: templates/concept
+weight: 50
+---
+
+{{% capture overview %}}
+
+Dalam Kubernetes, _scheduling_ atau penjadwalan ditujukan untuk memastikan
+{{< glossary_tooltip text="Pod" term_id="pod" >}} mendapatkan
+{{< glossary_tooltip text="Node" term_id="node" >}} sehingga
+{{< glossary_tooltip term_id="kubelet" >}} dapat menjalankannya.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Ikhtisar Penjadwalan {#penjadwalan}
+
+Sebuah penjadwal mengawasi Pod yang baru saja dibuat dan belum ada Node yang
+dialokasikan untuknya. Untuk setiap Pod yang ditemukan oleh penjadwal, maka
+penjadwal tersebut bertanggung jawab untuk menemukan Node terbaik untuk
+menjalankan Pod. Penjadwal menetapkan keputusan penempatan ini dengan
+mempertimbangkan prinsip-prinsip penjadwalan yang dijelaskan di bawah ini.
+
+Jika kamu ingin memahami mengapa Pod ditempatkan pada Node tertentu, atau jika
+kamu berencana untuk mengimplementasikan penjadwal kustom sendiri, halaman ini
+akan membantu kamu belajar tentang penjadwalan.
+
+## Kube-scheduler
+
+[_Kube-scheduler_](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)
+adalah penjadwal standar untuk Kubernetes dan dijalankan sebagai bagian dari
+{{< glossary_tooltip text="_control plane_" term_id="control-plane" >}}.
+_Kube-scheduler_ dirancang agar jika kamu mau dan perlu, kamu bisa menulis +komponen penjadwalan kamu sendiri dan menggunakannya. + +Untuk setiap Pod yang baru dibuat atau Pod yang tak terjadwal lainnya, +_kube-scheduler_ memilih Node yang optimal untuk menjalankannya. Namun, setiap +kontainer masuk Pod memiliki persyaratan sumber daya yang berbeda dan setiap Pod +juga memiliki persyaratan yang berbeda juga. Oleh karena itu, Node yang ada +perlu dipilih sesuai dengan persyaratan khusus penjadwalan. + +Dalam sebuah Klaster, Node yang memenuhi persyaratan penjadwalan untuk suatu Pod +disebut Node _feasible_. Jika tidak ada Node yang cocok, maka Pod tetap tidak +terjadwal sampai penjadwal yang mampu menempatkannya. + +Penjadwal menemukan Node-Node yang layak untuk sebuah Pod dan kemudian +menjalankan sekumpulan fungsi untuk menilai Node-Node yang layak dan mengambil +satu Node dengan skor tertinggi di antara Node-Node yang layak untuk menjalankan +Pod. Penjadwal kemudian memberi tahu server API tentang keputusan ini dalam +proses yang disebut dengan _binding_. + +Beberapa faktor yang perlu dipertimbangkan untuk keputusan penjadwalan termasuk +persyaratan sumber daya individu dan kolektif, aturan kebijakan / perangkat keras / +lunak, spesifikasi persamaan dan anti-persamaan, lokalitas data, interferensi +antar Workloads, dan sebagainya. + +### Pemilihan node pada kube-scheduler {#kube-scheduler-implementation} + +_Kube-scheduler_ memilih node untuk pod dalam 2 langkah operasi: + +1. Filtering +2. Scoring + +Langkah _filtering_ menemukan sekumpulan Nodes yang layak untuk menjadwalkan +Pod. Misalnya, penyarin PodFitsResources memeriksa apakah Node kandidat +memiliki sumber daya yang cukup untuk memenuhi permintaan spesifik sumber daya dari +Pod. Setelah langkah ini, daftar Node akan berisi Node-node yang sesuai; +seringkali, akan terisi lebih dari satu. Jika daftar itu kosong, maka Pod itu +tidak (belum) dapat dijadwalkan. + +Pada langkah _scoring_, penjadwal memberi peringkat pada Node-node yang tersisa +untuk memilih penempatan paling cocok untuk Pod. Penjadwal memberikan skor +untuk setiap Node yang sudah tersaring, memasukkan skor ini pada aturan +penilaian yang aktif. + +Akhirnya, _kube-scheduler_ memberikan Pod ke Node dengan peringkat tertinggi. +Jika ada lebih dari satu node dengan skor yang sama, maka _kube-scheduler_ +memilih salah satunya secara acak. + +Ada dua cara yang didukung untuk mengkonfigurasi perilaku penyaringan dan +penilaian oleh penjadwal: + +1. [Aturan Penjadwalan](/docs/reference/scheduling/policies) yang memungkinkan + kamu untuk mengkonfigurasi _Predicates_ untuk pemfilteran dan _Priorities_ + untuk penilaian. +1. [Profil Penjadwalan](/docs/reference/scheduling/profiles) yang memungkinkan + kamu mengkonfigurasi _Plugin_ yang menerapkan tahapan penjadwalan berbeda, + termasuk: `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, dan + lainnya. Kamu juga bisa mengonfigurasi _kube-scheduler_ untuk menjalankan + profil yang berbeda. 
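+
+Sebagai ilustrasi untuk cara yang kedua di atas, berikut sketsa sederhana sebuah
+KubeSchedulerConfiguration dengan dua profil: profil bawaan dan sebuah profil
+tambahan yang menonaktifkan seluruh _plugin_ penilaian. Nama profil
+`no-scoring-scheduler` hanyalah contoh hipotetis, dan versi API yang dipakai
+(`v1alpha2`) dapat berbeda tergantung versi Kubernetes yang kamu gunakan.
+
+```yaml
+apiVersion: kubescheduler.config.k8s.io/v1alpha2
+kind: KubeSchedulerConfiguration
+profiles:
+  # profil bawaan, digunakan oleh Pod yang tidak menyebutkan schedulerName
+  - schedulerName: default-scheduler
+  # profil hipotetis yang hanya melakukan penyaringan, tanpa penilaian
+  - schedulerName: no-scoring-scheduler
+    plugins:
+      preScore:
+        disabled:
+          - name: '*'
+      score:
+        disabled:
+          - name: '*'
+```
+
+Sebuah Pod dapat memilih profil tertentu dengan mengisi `.spec.schedulerName`
+sesuai dengan nama profil tersebut.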
+ +{{% /capture %}} +{{% capture whatsnext %}} +* Baca tentang [penyetelan performa penjadwal](/docs/concepts/scheduling/scheduler-perf-tuning/) +* Baca tentang [pertimbangan penyebarang topologi pod](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +* Baca [referensi dokumentasi](/docs/reference/command-line-tools-reference/kube-scheduler/) untuk _kube-scheduler_ +* Pelajari tentang [mengkonfigurasi beberapa penjadwal](/docs/tasks/administer-cluster/configure-multiple-schedulers/) +* Pelajari tentang [aturan manajemen topologi](/docs/tasks/administer-cluster/topology-manager/) +* Pelajari tentang [pengeluaran tambahan Pod](/docs/concepts/configuration/pod-overhead/) +{{% /capture %}} diff --git a/content/id/docs/concepts/scheduling/scheduler-perf-tuning.md b/content/id/docs/concepts/scheduling/scheduler-perf-tuning.md new file mode 100644 index 0000000000000..11f9a230779d7 --- /dev/null +++ b/content/id/docs/concepts/scheduling/scheduler-perf-tuning.md @@ -0,0 +1,160 @@ +--- +title: Penyetelan Kinerja Penjadwal +content_template: templates/concept +weight: 70 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.14" state="beta" >}} + +[kube-scheduler](/docs/concepts/scheduling/kube-scheduler/#kube-scheduler) +merupakan penjadwal (_scheduler_) Kubernetes bawaan yang bertanggung jawab +terhadap penempatan Pod-Pod pada seluruh Node di dalam sebuah klaster. + +Node-Node di dalam klaster yang sesuai dengan syarat-syarat penjadwalan dari +sebuah Pod disebut sebagai Node-Node layak (_feasible_). Penjadwal mencari Node-Node +layak untuk sebuah Pod dan kemudian menjalankan fungsi-fungsi untuk menskor Node-Node tersebut, memilih sebuah Node dengan skor tertinggi di antara +Node-Node layak lainnya, di mana Pod akan dijalankan. Penjadwal kemudian memberitahu +API server soal keputusan ini melalui sebuah proses yang disebut _Binding_. + +Laman ini menjelaskan optimasi penyetelan (_tuning_) kinerja yang relevan +untuk klaster Kubernetes berskala besar. + +{{% /capture %}} + +{{% capture body %}} + +Pada klaster berskala besar, kamu bisa menyetel perilaku penjadwal +untuk menyeimbangkan hasil akhir penjadwalan antara latensi (seberapa cepat Pod-Pod baru ditempatkan) +dan akurasi (seberapa akurat penjadwal membuat keputusan penjadwalan yang tepat). + +Kamu bisa mengonfigurasi setelan ini melalui pengaturan `percentageOfNodesToScore` pada kube-scheduler. +Pengaturan KubeSchedulerConfiguration ini menentukan sebuah ambang batas untuk +penjadwalan Node-Node di dalam klaster kamu. + +### Pengaturan Ambang Batas + +Opsi `percentageOfNodesToScore` menerima semua angka numerik antara 0 dan 100. +Angka 0 adalah angka khusus yang menandakan bahwa kube-scheduler harus menggunakan +nilai bawaan. +Jika kamu mengatur `percentageOfNodesToScore` dengan angka di atas 100, kube-scheduler +akan membulatkan ke bawah menjadi 100. + +Untuk mengubah angkanya, sunting berkas konfigurasi kube-scheduler (biasanya `/etc/kubernetes/config/kube-scheduler.yaml`), +lalu _ulang kembali_ kube-scheduler. + +Setelah kamu selesai menyunting, jalankan perintah +```bash +kubectl get componentstatuses +``` +untuk memverifikasi komponen kube-scheduler berjalan dengan baik (_healthy_). Keluarannya kira-kira seperti ini: +``` +NAME STATUS MESSAGE ERROR +controller-manager Healthy ok +scheduler Healthy ok +... +``` + +## Ambang Batas Penskoran Node {#persentase-penskoran-node} + +Untuk meningkatan kinerja penjadwalan, kube-scheduler dapat berhenti mencari +Node-Node yang layak saat sudah berhasil menemukannya. 
Pada klaster berskala besar, +hal ini menghemat waktu dibandingkan dengan pendekatan awam yang mengecek setiap Node. + +Kamu bisa mengatur ambang batas untuk menentukan berapa banyak jumlah Node minimal yang dibutuhkan, sebagai +persentase bagian dari seluruh Node di dalam klaster kamu. kube-scheduler akan mengubahnya menjadi +bilangan bulat berisi jumlah Node. Saat penjadwalan, jika kube-scheduler mengidentifikasi +cukup banyak Node-Node layak untuk melewati jumlah persentase yang diatur, maka kube-scheduler +akan berhenti mencari Node-Node layak dan lanjut ke [fase penskoran] (/docs/concepts/scheduling/kube-scheduler/#kube-scheduler-implementation). + +[Bagaimana penjadwal mengecek Node](#bagaimana-penjadwal-mengecek-node) menjelaskan proses ini secara detail. + +### Ambang Batas Bawaan + +Jika kamu tidak mengatur sebuah ambang batas, maka Kubernetes akan +menghitung sebuah nilai menggunakan pendekatan linier, yaitu 50% untuk klaster dengan 100 Node, +serta 10% untuk klaster dengan 5000 Node. + +Artinya, kube-scheduler selalu menskor paling tidak 5% dari klaster kamu, terlepas dari +seberapa besar klasternya, kecuali kamu secara eksplisit mengatur `percentageOfNodesToScore` +menjadi lebih kecil dari 5. + +Jika kamu ingin penjadwal untuk memasukkan seluruh Node di dalam klaster ke dalam penskoran, +maka aturlah `percentageOfNodesToScore` menjadi 100. + +## Contoh + +Contoh konfigurasi di bawah ini mengatur `percentageOfNodesToScore` menjadi 50%. + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1alpha1 +kind: KubeSchedulerConfiguration +algorithmSource: + provider: DefaultProvider + +... + +percentageOfNodesToScore: 50 +``` + + +## Menyetel percentageOfNodesToScore + +`percentageOfNodesToScore` merupakan angka 1 sampai 100 dengan +nilai bawaan yang dihitung berdasarkan ukuran klaster. Di sini juga terdapat +batas bawah yang telah ditetapkan, yaitu 50 Node. + +{{< note >}}Pada klaster dengan kurang dari 50 Node layak, penjadwal masih +terus memeriksa seluruh Node karena Node-Node layak belum mencukupi supaya +penjadwal dapat menghentikan proses pencarian lebih awal. + +Pada klaster kecil, jika kamu mengatur `percentageOfNodesToScore` dengan angka kecil, +pengaturan ini hampir atau sama sekali tidak berpengaruh, karena alasan yang sama. + +Jika klaster kamu punya ratusan Node, gunakan angka bawaan untuk opsi konfigurasi ini. +Mengubah angkanya kemungkinan besar tidak akan mengubah kinerja penjadwal secara berarti. +{{< /note >}} + +Sebuah catatan penting yang perlu dipertimbangkan saat mengatur angka ini adalah +ketika klaster dengan jumlah Node sedikit diperiksa untuk kelayakan, beberapa Node +tidak dikirim untuk diskor bagi sebuah Pod. Hasilnya, sebuah Node yang mungkin memiliki +nilai lebih tinggi untuk menjalankan Pod tersebut bisa saja tidak diteruskan ke fase penskoran. +Hal ini berdampak pada penempatan Pod yang kurang ideal. + +Kamu sebaiknya menghindari pengaturan `percentageOfNodesToScore` menjadi sangat rendah, +agar kube-scheduler tidak seringkali membuat keputusan penempatan Pod yang buruk. +Hindari pengaturan persentase di bawah 10%, kecuali _throughput_ penjadwal sangat penting +untuk aplikasi kamu dan skor dari Node tidak begitu penting. Dalam kata lain, kamu +memilih untuk menjalankan Pod pada Node manapun selama Node tersebut layak. + +## Bagaimana Penjadwal Mengecek Node + +Bagian ini ditujukan untuk kamu yang ingin mengerti bagaimana fitur ini bekerja secara internal. 
+ +Untuk memberikan semua Node di dalam klaster sebuah kesempatan yang adil untuk +dipertimbangkan dalam menjalankan Pod, penjadwal mengecek Node satu persatu +secara _round robin_. Kamu dapat membayangkan Node-Node ada di dalam sebuah array. +Penjadwal mulai dari indeks array pertama dan mengecek kelayakan dari Node sampai +jumlahnya telah mencukupi sesuai dengan `percentageOfNodesToScore`. Untuk Pod berikutnya, +penjadwal melanjutkan dari indeks array Node yang terhenti ketika memeriksa +kelayakan Node-Node untuk Pod sebelumnya. + +Jika Node-Node berada di beberapa zona, maka penjadwal akan mengecek Node satu persatu +pada seluruh zona untuk memastikan bahwa Node-Node dari zona berbeda masuk dalam pertimbangan +kelayakan. Sebagai contoh, ada 6 Node di dalam 2 zona: + +``` +Zona 1: Node 1, Node 2, Node 3, Node 4 +Zona 2: Node 5, Node 6 +``` + +Penjadwal mempertimbangkan kelayakan dari Node-Node tersebut dengan urutan berikut: + +``` +Node 1, Node 5, Node 2, Node 6, Node 3, Node 4 +``` + +Setelah semua Node telah dicek, penjadwal akan kembali pada Node 1. + +{{% /capture %}} diff --git a/content/id/docs/concepts/scheduling/scheduling-framework.md b/content/id/docs/concepts/scheduling/scheduling-framework.md new file mode 100644 index 0000000000000..de1772286fa55 --- /dev/null +++ b/content/id/docs/concepts/scheduling/scheduling-framework.md @@ -0,0 +1,249 @@ +--- +title: Kerangka Kerja Penjadwalan (Scheduling Framework) +content_template: templates/concept +weight: 60 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="1.15" state="alpha" >}} + +Kerangka kerja penjadwalan (_Scheduling Framework_) adalah arsitektur yang dapat +dipasang (_pluggable_) pada penjadwal Kubernetes untuk membuat kustomisasi +penjadwal lebih mudah. Hal itu dilakukan dengan menambahkan satu kumpulan "plugin" +API ke penjadwal yang telah ada. _Plugin_ dikompilasi ke dalam penjadwal. +Beberapa API memungkinkan sebagian besar fitur penjadwalan diimplementasikan +sebagai _plugin_, sambil tetap mempertahankan penjadwalan "inti" sederhana dan +terpelihara. Silahkan merujuk pada [proposal desain dari kerangka penjadwalan] +[kep] untuk informasi teknis lebih lanjut tentang desain kerangka kerja +tersebut. + +[kep]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md + +{{% /capture %}} + +{{% capture body %}} + +# Alur kerja kerangka kerja + +Kerangka kerja penjadwalan mendefinisikan beberapa titik ekstensi. _Plugin_ penjadwal +mendaftar untuk dipanggil di satu atau lebih titik ekstensi. Beberapa _plugin_ ini +dapat mengubah keputusan penjadwalan dan beberapa hanya bersifat informasi. + +Setiap upaya untuk menjadwalkan satu Pod dibagi menjadi dua fase, **Siklus Penjadwalan (_Scheduling Cycle_)** dan **Siklus Pengikatan (_Binding Cycle_)**. + +## Siklus Penjadwalan dan Siklus Pengikatan + +Siklus penjadwalan memilih sebuah Node untuk Pod, dan siklus pengikatan menerapkan +keputusan tersebut ke klaster. Secara bersama-sama, siklus penjadwalan dan siklus +pengikatan diartikan sebagai sebuah "konteks penjadwalan (_scheduling context_)". + +Siklus penjadwalan dijalankan secara serial, sementara siklus pengikatan dapat +berjalan secara bersamaan. + +Siklus penjadwalan atau pengikatan dapat dibatalkan jika Pod telah ditentukan +untuk tidak terjadwalkan atau jika terdapat kesalahan internal. Pod akan +dikembalikan ke antrian dan dicoba lagi. 
+ +## Titik-titik ekstensi + +Gambar berikut menunjukkan konteks penjadwalan Pod dan titik-titik ekstensi +yang diperlihatkan oleh kerangka penjadwalan. Dalam gambar ini "Filter" +setara dengan "Predicate" dan "Scoring" setara dengan "Priority Function". + +Satu _plugin_ dapat mendaftar di beberapa titik ekstensi untuk melakukan pekerjaan +yang lebih kompleks atau _stateful_. + +{{< figure src="/images/docs/scheduling-framework-extensions.png" title="Titik-titik ekstensi dari kerangka kerja Penjadwalan" >}} + +### QueueSort {#queue-sort} + +_Plugin_ ini digunakan untuk mengurutkan Pod-Pod dalam antrian penjadwalan. _Plugin_ +QueueSort pada dasarnya menyediakan fungsi `Less (Pod1, Pod2)`. Hanya satu jenis +_plugin_ QueueSort yang dapat diaktifkan dalam waktu yang bersamaan. + +### PreFilter {#pre-filter} + +_Plugin_ ini digunakan untuk melakukan pra-proses informasi tentang Pod, atau untuk +memeriksa tertentu kondisi yang harus dipenuhi oleh klaster atau Pod. Jika +_plugin_ PreFilter menghasilkan hasil yang salah, siklus penjadwalan dibatalkan. + +### Filter + +_Plugin_ ini digunakan untuk menyaring Node yang tidak dapat menjalankan Pod. +Untuk setiap Node, penjadwal akan memanggil _plugin_ Filter sesuai dengan urutan +mereka dikonfigurasi. Jika ada _plugin_ Filter menandai Node menjadi _infeasible_, +maka _plugin_ yang lainnya tidak akan dipanggil untuk Node itu. Node-Node dapat dievaluasi +secara bersamaan. + +### PreScore {#pre-score} + +_Plugin_ ini digunakan untuk melakukan pekerjaan "pra-penilaian", yang +menghasilkan keadaan yang dapat dibagi untuk digunakan oleh _plugin-plugin_ Score. +Jika _plugin_ PreScore mengeluarkan hasil salah, maka siklus penjadwalan dibatalkan. + +### Score {#score} + +_Plugin_ ini digunakan untuk menentukan peringkat Node yang telah melewati fase +penyaringan. Penjadwal akan memanggil setiap _plugin_ Score untuk setiap Node. +Akan ada kisaran bilangan bulat yang telah ditetapkan untuk mewakili skor +minimum dan maksimum. Setelah fase [NormalizeScore](#normalize-scoring), +penjadwal akan menggabungkan skor Node dari semua _plugin_ sesuai dengan bobot +_plugin_ yang telah dikonfigurasi. + +### NormalizeScore {#normalize-score} + +_Plugin_ ini digunakan untuk memodifikasi skor sebelum penjadwal menghitung +peringkat akhir Node-Node. _Plugin_ yang mendaftar untuk titik ekstensi ini akan +dipanggil dengan hasil [Score](#score) dari _plugin_ yang sama. Hal ini dilakukan +sekali untuk setiap _plugin_ dan setiap siklus penjadwalan. + +Sebagai contoh, anggaplah sebuah _plugin_ `BlinkingLightScorer` memberi peringkat +pada Node-Node berdasarkan berapa banyak kedipan lampu yang mereka miliki. + +```go +func ScoreNode(_ *v1.pod, n *v1.Node) (int, error) { + return getBlinkingLightCount(n) +} +``` + +Namun, jumlah maksimum kedipan lampu mungkin kecil jika dibandingkan dengan +`NodeScoreMax`. Untuk memperbaikinya, `BlinkingLightScorer` juga harus mendaftar +untuk titik ekstensi ini. + +```go +func NormalizeScores(scores map[string]int) { + highest := 0 + for _, score := range scores { + highest = max(highest, score) + } + for node, score := range scores { + scores[node] = score*NodeScoreMax/highest + } +} +``` + +Jika ada _plugin_ NormalizeScore yang menghasilkan hasil yang salah, maka siklus +penjadwalan dibatalkan. + +{{< note >}} +_Plugin_ yang ingin melakukan pekerjaan "pra-pemesanan" harus menggunakan +titik ekstensi NormalizeScore. +{{< /note >}} + +### Reserve + +Ini adalah titik ekstensi yang bersifat informasi. 
_Plugin_ yang mempertahankan +keadaan _runtime_ (alias "_stateful plugins_") harus menggunakan titik ekstensi ini +untuk diberitahukan oleh penjadwal ketika sumber daya pada suatu Node dicadangkan +untuk Pod yang telah disiapkan. Proses ini terjadi sebelum penjadwal benar-benar +mengikat Pod ke Node, dan itu ada untuk mencegah kondisi balapan (_race conditions_) +ketika penjadwal menunggu agar pengikatan berhasil. + +Ini adalah langkah terakhir dalam siklus penjadwalan. Setelah Pod berada dalam +status dicadangkan, maka itu akan memicu _plugin_ [Unreserve](#unreserve) +(apabila gagal) atau _plugin_ [PostBind](#post-bind) (apabila sukses) +di akhir siklus pengikatan. + +### Permit + +_Plugin_ Permit dipanggil pada akhir siklus penjadwalan untuk setiap Pod +untuk mencegah atau menunda pengikatan ke Node kandidat. _Plugin_ Permit dapat +melakukan salah satu dari ketiga hal ini: + +1. **approve** \ +     Setelah semua _plugin_ Permit menyetujui sebuah Pod, Pod tersebut akan dikirimkan untuk diikat. + +2. **deny** \ +     Jika ada _plugin_ Permit yang menolak sebuah Pod, Pod tersebut akan dikembalikan ke + antrian penjadwalan. Hal ini akan memicu _plugin_ [Unreserve](#unreserve). + +3. **wait** (dengan batas waktu) \ +     Jika _plugin_ Permit menghasilkan "wait", maka Pod disimpan dalam +     daftar Pod "yang menunggu" internal, dan siklus pengikatan Pod ini dimulai tetapi akan langsung diblokir +     sampai mendapatkan [_approved_](#frameworkhandle). Jika waktu tunggu habis, ** wait ** menjadi ** deny ** +     dan Pod dikembalikan ke antrian penjadwalan, yang memicu _plugin_ [Unreserve](#unreserve). + +{{< note >}} +Ketika setiap _plugin_ dapat mengakses daftar Pod-Pod "yang menunggu" dan menyetujuinya +(silahkan lihat [`FrameworkHandle`](#frameworkhandle)), kami hanya mengharapkan +_plugin_ Permit untuk menyetujui pengikatan Pod dalam kondisi "menunggu" yang +telah dipesan. Setelah Pod disetujui, akan dikirim ke fase [PreBind](#pre-bind). +{{< /note >}} + +### PreBind {#pre-bind} + +_Plugin_ ini digunakan untuk melakukan pekerjaan apa pun yang diperlukan sebelum +Pod terikat. Sebagai contoh, _plugin_ PreBind dapat menyediakan _network volume_ +dan melakukan _mounting_ pada Node target sebelum mengizinkan Pod berjalan di +sana. + +Jika ada _plugin_ PreBind yang menghasilkan kesalahan, maka Pod [ditolak](#unreserve) +dan kembali ke antrian penjadwalan. + +### Bind + +_Plugin_ ini digunakan untuk mengikat Pod ke Node. _Plugin-plugin_ Bind tidak akan +dipanggil sampai semua _plugin_ PreBind selesai. Setiap _plugin_ Bind dipanggil +sesuai urutan saat dikonfigurasi. _Plugin_ Bind dapat memilih untuk menangani +atau tidak Pod yang diberikan. Jika _plugin_ Bind memilih untuk menangani Pod, +** _plugin_ Bind yang tersisa dilewati **. + +### PostBind {#post-bind} + +Ini adalah titik ekstensi bersifat informasi. _Plugin-plugin_ PostBind dipanggil +setelah sebuah Pod berhasil diikat. Ini adalah akhir dari siklus pengikatan, dan +dapat digunakan untuk membersihkan sumber daya terkait. + +### Unreserve + +Ini adalah titik ekstensi bersifat informasi. Jika sebuah Pod telah dipesan dan +kemudian ditolak di tahap selanjutnya, maka _plugin-plugin_ Unreserve akan +diberitahu. _Plugin_ Unreserve harus membersihkan status yang terkait dengan Pod +yang dipesan. + +_Plugin_ yang menggunakan titik ekstensi ini sebaiknya juga harus digunakan +[Reserve](#unreserve). + +## _Plugin_ API + +Ada dua langkah untuk _plugin_ API. 
Pertama, _plugin_ harus mendaftar dan mendapatkan +konfigurasi, kemudian mereka menggunakan antarmuka titik ekstensi. Antarmuka (_interface_) +titik ekstensi memiliki bentuk sebagai berikut. + +```go +type Plugin interface { + Name() string +} + +type QueueSortPlugin interface { + Plugin + Less(*v1.pod, *v1.pod) bool +} + +type PreFilterPlugin interface { + Plugin + PreFilter(context.Context, *framework.CycleState, *v1.pod) error +} + +// ... +``` + +## Konfigurasi _plugin_ + +Kamu dapat mengaktifkan atau menonaktifkan _plugin_ dalam konfigurasi penjadwal. +Jika kamu menggunakan Kubernetes v1.18 atau yang lebih baru, kebanyakan +[plugin-plugin penjadwalan](/docs/reference/scheduling/profiles/#scheduling-plugins) +sudah digunakan dan diaktifkan secara bawaan. + +Selain _plugin-plugin_ bawaan, kamu juga dapat mengimplementasikan _plugin-plugin_ penjadwalan +kamu sendiri dan mengonfigurasinya bersama-sama dengan _plugin-plugin_ bawaan. +Kamu bisa mengunjungi [plugin-plugin penjadwalan](https://github.com/kubernetes-sigs/scheduler-plugins) +untuk informasi lebih lanjut. + +Jika kamu menggunakan Kubernetes v1.18 atau yang lebih baru, kamu dapat +mengonfigurasi sekumpulan _plugin_ sebagai profil penjadwal dan kemudian menetapkan +beberapa profil agar sesuai dengan berbagai jenis beban kerja. Pelajari lebih +lanjut di [multi profil](/docs/reference/scheduling/profiles/#multiple-profiles). + +{{% /capture %}} diff --git a/content/id/docs/concepts/security/overview.md b/content/id/docs/concepts/security/overview.md index 2a9e8ebe05f55..e6a00f1cf560f 100644 --- a/content/id/docs/concepts/security/overview.md +++ b/content/id/docs/concepts/security/overview.md @@ -1,5 +1,5 @@ --- -title: Ikhtisar Keamanan _Cloud Native_ +title: Ikhtisar Keamanan Cloud Native content_template: templates/concept weight: 1 --- diff --git a/content/id/docs/concepts/services-networking/dual-stack.md b/content/id/docs/concepts/services-networking/dual-stack.md new file mode 100644 index 0000000000000..0c3993b5ca42b --- /dev/null +++ b/content/id/docs/concepts/services-networking/dual-stack.md @@ -0,0 +1,140 @@ +--- +title: Dual-stack IPv4/IPv6 +feature: + title: Dual-stack IPv4/IPv6 + description: > + Pengalokasian alamat IPv4 dan IPv6 untuk Pod dan Service + +content_template: templates/concept +weight: 70 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.16" state="alpha" >}} + +_Dual-stack_ IPv4/IPv6 memungkinkan pengalokasian alamat IPv4 dan IPv6 untuk +{{< glossary_tooltip text="Pod" term_id="pod" >}} dan {{< glossary_tooltip text="Service" term_id="service" >}}. + +Jika kamu mengaktifkan jaringan _dual-stack_ IPv4/IPv6 untuk klaster Kubernetes +kamu, klaster akan mendukung pengalokasian kedua alamat IPv4 dan IPv6 secara +bersamaan. 
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Fitur-fitur yang didukung
+
+Mengaktifkan _dual-stack_ IPv4/IPv6 pada klaster Kubernetes kamu akan
+menyediakan fitur-fitur berikut ini:
+
+* Jaringan Pod _dual-stack_ (pengalokasian sebuah alamat IPv4 dan IPv6 untuk setiap Pod)
+* Service yang mendukung IPv4 dan IPv6 (setiap Service hanya untuk satu keluarga alamat)
+* Perutean Pod ke luar klaster (misalnya Internet) melalui antarmuka IPv4 dan IPv6
+
+## Prasyarat
+
+Prasyarat berikut diperlukan untuk menggunakan _dual-stack_ IPv4/IPv6 pada
+klaster Kubernetes:
+
+* Kubernetes versi 1.16 atau yang lebih baru
+* Dukungan dari penyedia layanan untuk jaringan _dual-stack_ (penyedia layanan _cloud_ atau yang lainnya harus dapat menyediakan antarmuka jaringan IPv4/IPv6 yang dapat dirutekan) untuk Node Kubernetes
+* Sebuah _plugin_ jaringan yang mendukung _dual-stack_ (seperti Kubenet atau Calico)
+* Kube-proxy yang berjalan dalam mode IPVS
+
+## Mengaktifkan _dual-stack_ IPv4/IPv6
+
+Untuk mengaktifkan _dual-stack_ IPv4/IPv6, aktifkan [gerbang fitur (_feature gate_)](/docs/reference/command-line-tools-reference/feature-gates/) `IPv6DualStack`
+untuk komponen-komponen yang relevan dari klaster kamu, dan tetapkan jaringan
+_dual-stack_ pada klaster:
+
+   * kube-controller-manager:
+      * `--feature-gates="IPv6DualStack=true"`
+      * `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>` misalnya `--cluster-cidr=10.244.0.0/16,fc00::/24`
+      * `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
+      * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` nilai bawaannya adalah /24 untuk IPv4 dan /64 untuk IPv6
+   * kubelet:
+      * `--feature-gates="IPv6DualStack=true"`
+   * kube-proxy:
+      * `--proxy-mode=ipvs`
+      * `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`
+      * `--feature-gates="IPv6DualStack=true"`
+
+{{< caution >}}
+Jika kamu menentukan blok alamat IPv6 yang lebih besar dari /24 melalui
+`--cluster-cidr` pada baris perintah, maka penetapan tersebut akan gagal.
+{{< /caution >}}
+
+## Service
+
+Jika klaster kamu mengaktifkan jaringan _dual-stack_ IPv4/IPv6, maka kamu dapat
+membuat {{< glossary_tooltip text="Service" term_id="service" >}} dengan
+alamat IPv4 atau IPv6. Kamu dapat memilih keluarga alamat untuk clusterIP
+Service kamu dengan mengatur bagian `.spec.ipFamily` pada Service tersebut.
+Kamu hanya dapat mengatur bagian ini saat membuat Service baru. Mengatur bagian
+`.spec.ipFamily` bersifat opsional dan hanya boleh digunakan jika kamu berencana
+untuk mengaktifkan {{< glossary_tooltip text="Service" term_id="service" >}}
+dan {{< glossary_tooltip text="Ingress" term_id="ingress" >}} IPv4 dan IPv6
+pada klaster kamu. Konfigurasi bagian ini bukanlah syarat untuk lalu lintas
+[_egress_](#lalu-lintas-egress).
+
+{{< note >}}
+Keluarga alamat bawaan untuk klaster kamu adalah keluarga alamat dari rentang
+clusterIP Service pertama yang dikonfigurasi melalui opsi
+`--service-cluster-ip-range` pada kube-controller-manager.
+{{< /note >}}
+
+Kamu dapat mengatur `.spec.ipFamily` menjadi salah satu dari:
+
+ * `IPv4`: server API akan mengalokasikan IP dari `service-cluster-ip-range` yang berupa `ipv4`
+ * `IPv6`: server API akan mengalokasikan IP dari `service-cluster-ip-range` yang berupa `ipv6`
+
+Spesifikasi Service berikut ini tidak memasukkan bagian `ipFamily`, sehingga
+Kubernetes akan mengalokasikan alamat IP (atau yang dikenal juga sebagai
+"_cluster IP_") dari `service-cluster-ip-range` yang dikonfigurasi pertama kali
+untuk Service ini.
+
+{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
+
+Spesifikasi Service berikut memasukkan bagian `ipFamily`, sehingga Kubernetes
+akan mengalokasikan alamat IPv6 (atau yang dikenal juga sebagai "_cluster IP_")
+dari `service-cluster-ip-range` yang dikonfigurasi untuk Service ini.
+ +{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}} + +Sebagai perbandingan, spesifikasi Service berikut ini akan dialokasikan sebuah alamat +IPv4 (atau yang dikenal juga sebagai "_cluster IP_") dari `service-cluster-ip-range` +yang dikonfigurasi untuk Service ini. + +{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}} + +### Tipe _LoadBalancer_ + +Penyedia layanan _cloud_ yang mendukung IPv6 untuk pengaturan beban eksternal, +Mengatur bagian `type` menjadi` LoadBalancer` sebagai tambahan terhadap mengatur bagian +`ipFamily` menjadi `IPv6` menyediakan sebuah _cloud load balancer_ untuk Service kamu. + +## Lalu lintas _egress_ + +Penggunaan blok alamat IPv6 yang dapat dirutekan dan yang tidak dapat dirutekan +secara publik diperbolehkan selama {{}} +dari penyedia layanan dapat mengimplementasikan transportasinya. Jika kamu memiliki +Pod yang menggunakan IPv6 yang dapat dirutekan secara publik dan ingin agar Pod +mencapai tujuan di luar klaster (misalnya Internet publik), kamu harus mengatur +IP samaran untuk lalu lintas keluar dan balasannya. [_ip-masq-agent_](https://github.com/kubernetes-incubator/ip-masq-agent) +bersifat _dual-stack aware_, jadi kamu bisa menggunakan ip-masq-agent untuk +_masquerading_ IP dari klaster _dual-stack_. + +## Masalah-masalah yang diketahui + +* Kubenet memaksa pelaporan posisi IP untuk IPv4,IPv6 IP (--cluster-cidr) + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Validasi jaringan _dual-stack_ IPv4/IPv6](/docs/tasks/network/validate-dual-stack) + +{{% /capture %}} diff --git a/content/id/docs/concepts/services-networking/endpoint-slices.md b/content/id/docs/concepts/services-networking/endpoint-slices.md new file mode 100644 index 0000000000000..158918a915ecd --- /dev/null +++ b/content/id/docs/concepts/services-networking/endpoint-slices.md @@ -0,0 +1,184 @@ +--- +title: EndpointSlice +feature: + title: EndpointSlice + description: > + Pelacakan _endpoint_ jaringan yang dapat diskalakan pada klaster Kubernetes. + +content_template: templates/concept +weight: 15 +--- + + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +EndpointSlice menyediakan sebuah cara yang mudah untuk melacak _endpoint_ jaringan dalam sebuah +klaster Kubernetes. EndpointSlice memberikan alternatif yang lebih _scalable_ dan lebih dapat diperluas dibandingkan dengan Endpoints. + +{{% /capture %}} + +{{% capture body %}} + +## Motivasi + +Endpoints API telah menyediakan sebuah cara yang mudah dan sederhana untuk +melacak _endpoint_ jaringan pada Kubernetes. Sayangnya, seiring dengan besarnya klaster Kubernetes +dan Service, batasan-batasan yang dimiliki API tersebut semakin terlihat. +Terutama, hal tersebut termasuk kendala-kendala mengenai proses _scaling_ _endpoint_ jaringan +dalam jumlah yang besar. + +Karena semua _endpoint_ jaringan untuk sebuah Service disimpan dalam satu sumber daya +Endpoints, sumber daya tersebut dapat menjadi cukup besar. Hal itu dapat mempengaruhi kinerja +dari komponen-komponen Kubernetes (terutama _master control plane_) dan menyebabkan +lalu lintas jaringan dan pemrosesan yang cukup besar ketika Endpoints berubah. +EndpointSlice membantu kamu menghindari masalah-masalah tersebut dan juga menyediakan platform +yang dapat diperluas untuk fitur-fitur tambahan seperti _topological routing_. + +## Sumber daya EndpointSlice + +Pada Kubernetes, sebuah EndpointSlice memiliki referensi-referensi terhadap sekumpulan _endpoint_ +jaringan. 
_Controller_ EndpointSlice secara otomatis membuat EndpointSlice +untuk sebuah Service Kubernetes ketika sebuah {{< glossary_tooltip text="selektor" +term_id="selector" >}} dituliskan. EndpointSlice tersebut akan memiliki +referensi-referensi menuju Pod manapun yang cocok dengan selektor pada Service tersebut. EndpointSlice mengelompokkan +_endpoint_ jaringan berdasarkan kombinasi Service dan Port yang unik. +Nama dari sebuah objek EndpointSlice haruslah berupa +[nama subdomain DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) yang sah. + +Sebagai contoh, berikut merupakan sampel sumber daya EndpointSlice untuk sebuah Service Kubernetes +yang bernama `example`. + +```yaml +apiVersion: discovery.k8s.io/v1beta1 +kind: EndpointSlice +metadata: + name: example-abc + labels: + kubernetes.io/service-name: example +addressType: IPv4 +ports: + - name: http + protocol: TCP + port: 80 +endpoints: + - addresses: + - "10.1.2.3" + conditions: + ready: true + hostname: pod-1 + topology: + kubernetes.io/hostname: node-1 + topology.kubernetes.io/zone: us-west2-a +``` + +Secara bawaan, setiap EndpointSlice yang dikelola oleh _controller_ EndpointSlice tidak akan memiliki +lebih dari 100 _endpoint_. Di bawah skala tersebut, EndpointSlice akan memetakan 1:1 +dengan Endpoints dan Service dan akan memiliki kinerja yang sama. + +EndpointSlice dapat bertindak sebagai sumber kebenaran untuk kube-proxy sebagai acuan mengenai +bagaimana cara untuk merutekan lalu lintas jaringan internal. Ketika diaktifkan, EndpointSlice semestinya memberikan peningkatan +kinerja untuk Service yang memiliki Endpoints dalam jumlah besar. + +### Tipe-tipe Alamat + +EndpointSlice mendukung tiga tipe alamat: + +* IPv4 +* IPv6 +* FQDN (_Fully Qualified Domain Name_) + +### Topologi + +Setiap _endpoint_ pada EndpointSlice dapat memiliki informasi topologi yang relevan. +Hal ini digunakan untuk mengindikasikan di mana _endpoint_ berada, berisi informasi mengenai +Node yang bersangkutan, zona, dan wilayah. Ketika nilai-nilai tersebut tersedia, +label-label Topology berikut akan ditambahkan oleh _controller_ EndpointSlice: + +* `kubernetes.io/hostname` - Nama dari Node tempat _endpoint_ berada. +* `topology.kubernetes.io/zone` - Zona tempat _endpoint_ berada. +* `topology.kubernetes.io/region` - Region tempat _endpoint_ berada. + +Nilai-nilai dari label-label berikut berasal dari sumber daya yang diasosiasikan dengan tiap +_endpoint_ pada sebuah _slice_. Label _hostname_ merepresentasikan nilai dari kolom NodeName + pada Pod yang bersangkutan. Label zona dan wilayah merepresentasikan nilai +dari label-label dengan nama yang sama pada Node yang bersangkutan. + +### Pengelolaan + +Secara bawaan, EndpointSlice dibuat dan dikelola oleh _controller_ +EndpointSlice. Ada berbagai macam kasus lain untuk EndpointSlice, seperti +implementasi _service mesh_, yang memungkinkan adanya entitas atau _controller_ lain +yang dapat mengelola beberapa EndpointSlice sekaligus. Untuk memastikan beberapa entitas dapat +mengelola EndpointSlice tanpa mengganggu satu sama lain, sebuah +label `endpointslice.kubernetes.io/managed-by` digunakan untuk mengindikasikan entitas +yang mengelola sebuah EndpointSlice. _Controller_ EndpointSlice akan menambahkan +`endpointslice-controller.k8s.io` sebagai nilai dari label tersebut pada seluruh +EndpointSlice yang dikelolanya. Entitas lain yang mengelola EndpointSlice juga diharuskan untuk +menambahkan nilai yang unik untuk label tersebut. 
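+
+Sebagai contoh, sebuah EndpointSlice yang dikelola oleh _controller_ lain dapat
+terlihat kurang lebih seperti berikut (nama `my-controller.example.com` di sini
+hanyalah asumsi untuk ilustrasi):
+
+```yaml
+apiVersion: discovery.k8s.io/v1beta1
+kind: EndpointSlice
+metadata:
+  name: example-custom-abc
+  labels:
+    kubernetes.io/service-name: example
+    # menandai entitas yang mengelola EndpointSlice ini
+    endpointslice.kubernetes.io/managed-by: my-controller.example.com
+addressType: IPv4
+ports:
+  - name: http
+    protocol: TCP
+    port: 80
+endpoints:
+  - addresses:
+      - "10.1.2.4"
+    conditions:
+      ready: true
+```
+
+Dengan label tersebut, _controller_ EndpointSlice bawaan tidak akan mengubah atau
+menghapus EndpointSlice yang dikelola oleh entitas lain.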
+ +### Kepemilikan + +Pada kebanyakan kasus, EndpointSlice akan dimiliki oleh Service yang diikutinya. Hal ini diindikasikan dengan referensi pemilik pada tiap EndpointSlice dan +juga label `kubernetes.io/service-name` yang memudahkan pencarian seluruh +EndpointSlice yang dimiliki oleh sebuah Service. + +## _Controller_ EndpointSlice + +_Controller_ EndpointSlice mengamati Service dan Pod untuk memastikan EndpointSlice +yang bersangkutan berada dalam kondisi terkini. _Controller_ EndpointSlice akan mengelola EndpointSlice untuk +setiap Service yang memiliki selektor. Ini akan merepresentasikan IP dari Pod +yang cocok dengan selektor dari Service tersebut. + +### Ukuran EndpointSlice + +Secara bawaan, jumlah _endpoint_ yang dapat dimiliki tiap EndpointSlice dibatasi sebanyak 100 _endpoint_. Kamu dapat +mengaturnya melalui opsi `--max-endpoints-per-slice` {{< glossary_tooltip +text="kube-controller-manager" term_id="kube-controller-manager" >}} sampai dengan +jumlah maksimum sebanyak 1000 _endpoint_. + +### Distribusi EndpointSlice + +Tiap EndpointSlice memiliki sekumpulan _port_ yang berlaku untuk seluruh _endpoint_ dalam sebuah sumber daya. Ketika nama _port_ digunakan untuk sebuah Service, Pod mungkin mendapatkan +nomor target _port_ yang berbeda-beda untuk nama _port_ yang sama, sehingga membutuhkan +EndpointSlice yang berbeda. Hal ini mirip dengan logika mengenai bagaimana _subset_ dikelompokkan +dengan Endpoints. + +_Controller EndpointSlice_ akan mencoba untuk mengisi EndpointSlice sebanyak mungkin, tetapi tidak +secara aktif melakukan _rebalance_ terhadap EndpointSlice tersebut. Logika dari _controller_ cukup sederhana: + +1. Melakukan iterasi terhadap EndpointSlice yang sudah ada, menghapus _endpoint_ yang sudah tidak lagi + dibutuhkan dan memperbarui _endpoint_ yang sesuai yang mungkin telah berubah. +2. Melakukan iterasi terhadap EndpointSlice yang sudah dimodifikasi pada langkah pertama dan + mengisinya dengan _endpoint_ baru yang dibutuhkan. +3. Jika masih tersisa _endpoint_ baru untuk ditambahkan, mencoba untuk menambahkannya pada + _slice_ yang tidak berubah sebelumnya dan/atau membuat _slice_ yang baru. + +Terlebih penting, langkah ketiga memprioritaskan untuk membatasi pembaruan EndpointSlice terhadap +distribusi dari EndpointSlice yang benar-benar penuh. Sebagai contoh, jika ada 10 +_endpoint_ baru untuk ditambahkan dan ada 2 EndpointSlice yang masing-masing memiliki ruang untuk 5 _endpoint_ baru, +pendekatan ini akan membuat sebuah EndpointSlice baru daripada mengisi 2 +EndpointSlice yang sudah ada. Dengan kata lain, pembuatan sebuah EndpointSlice +lebih diutamakan daripada pembaruan beberapa EndpointSlice. + +Dengan kube-proxy yang berjalan pada tiap Node dan mengamati EndpointSlice, setiap perubahan +pada sebuah EndpointSlice menjadi sangat mahal karena hal tersebut akan dikirimkan ke +setiap Node dalam klaster. Pendekatan ini ditujukan untuk membatasi jumlah +perubahan yang perlu dikirimkan ke setiap Node, meskipun hal tersebut berdampak pada banyaknya +EndpointSlice yang tidak penuh. + +Pada praktiknya, distribusi yang kurang ideal seperti ini akan jarang ditemukan. Kebanyakan perubahan yang diproses oleh _controller_ EndpointSlice akan cukup kecil untuk dapat masuk pada +EndpointSlice yang sudah ada, dan jika tidak, cepat atau lambat sebuah EndpointSlice baru +akan segera dibutuhkan. 
Pembaruan bertahap (_rolling update_) dari Deployment juga menyediakan sebuah proses +pengemasan ulang EndpointSlice yang natural seiring dengan digantikannya seluruh Pod dan _endpoint_ yang +bersangkutan. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Mengaktifkan EndpointSlice](/docs/tasks/administer-cluster/enabling-endpointslices) +* Baca [Menghubungkan Aplikasi dengan Service](/docs/concepts/services-networking/connect-applications-service/) + +{{% /capture %}} diff --git a/content/id/docs/concepts/services-networking/service-topology.md b/content/id/docs/concepts/services-networking/service-topology.md new file mode 100644 index 0000000000000..f73eedafbb364 --- /dev/null +++ b/content/id/docs/concepts/services-networking/service-topology.md @@ -0,0 +1,206 @@ +--- +title: Topologi Service (Service Topology) +feature: + title: Topologi Service (Service Topology) + description: > + Rute lalu lintas layanan berdasarkan topologi klaster. + +content_template: templates/concept +weight: 10 +--- + + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + +Topologi Service memungkinkan Service untuk +merutekan lalu lintas jaringan berdasarkan topologi Node dalam klaster. Misalnya, suatu +layanan dapat menentukan lalu lintas jaringan yang lebih diutamakan untuk dirutekan ke +beberapa _endpoint_ yang berada pada Node yang sama dengan klien, atau pada +_availability zone_ yang sama. + +{{% /capture %}} + +{{% capture body %}} + +## Pengantar + +Secara _default_, lalu lintas jaringan yang dikirim ke `ClusterIP` atau` NodePort` dari Service +dapat dialihkan ke alamat _backend_ untuk Service tersebut. Sejak Kubernetes 1.7 +dimungkinkan untuk merutekan lalu lintas jaringan "eksternal" ke Pod yang berjalan di +Node yang menerima lalu lintas jaringan, tetapi fitur ini tidak didukung untuk `ClusterIP` dari +Service, dan topologi yang lebih kompleks — seperti rute zonasi — +belum memungkinkan. Fitur topologi Service mengatasi kekurangan ini dengan +mengizinkan pembuat layanan untuk mendefinisikan kebijakan dalam merutekan lalu lintas jaringan +berdasarkan label Node untuk Node-Node asal dan tujuan. + +Dengan menggunakan label Node yang sesuai antara asal dan tujuan, operator dapat +menunjuk kelompok Node yang "lebih dekat" dan "lebih jauh" antara satu sama lain, +dengan menggunakan metrik apa pun yang masuk akal untuk memenuhi persyaratan +dari operator itu. Untuk sebagian besar operator di _cloud_ publik, misalnya, ada +preferensi untuk menjaga layanan lalu lintas jaringan dalam zona yang sama, karena lalu lintas jaringan +antar zona memiliki biaya yang dibebankan, sementara lalu lintas jaringan +dalam zona yang sama tidak ada biaya. Kebutuhan umum lainnya termasuk kemampuan untuk merutekan +lalu lintas jaringan ke Pod lokal yang dikelola oleh sebuah DaemonSet, atau menjaga lalu lintas jaringan ke +Node yang terhubung ke _top-of-rack switch_ yang sama untuk mendapatkan +latensi terendah. + + +## Menggunakan Topologi Service + +Jika klaster kamu mengaktifkan topologi layanan kamu dapat mengontrol rute lalu lintas jaringan Service +dengan mengatur bagian `topologyKeys` pada spesifikasi Service. Bagian ini +adalah daftar urutan label-label Node yang akan digunakan untuk mengurutkan _endpoint_ +saat mengakses Service ini. Lalu lintas jaringan akan diarahkan ke Node yang nilai +label pertamanya cocok dengan nilai dari Node asal untuk label yang sama. 
Jika +tidak ada _backend_ untuk Service pada Node yang sesuai, maka label kedua akan +dipertimbangkan, dan seterusnya, sampai tidak ada label yang tersisa. + +Jika tidak ditemukan kecocokan, lalu lintas jaringan akan ditolak, sama seperti jika tidak ada +sama sekali _backend_ untuk Service tersebut. Artinya, _endpoint_ dipilih +berdasarkan kunci topologi yang pertama yang tersedia pada _backend_. Jika dalam +bagian ini ditentukan dan semua entri tidak memiliki _backend_ yang sesuai dengan +topologi klien, maka Service tidak memiliki _backend_ untuk klien dan koneksi harus +digagalkan. Nilai khusus `" * "` dapat digunakan untuk mengartikan "topologi +apa saja". Nilai _catch-all_ ini, jika digunakan, maka hanya sebagai +nilai terakhir dalam daftar. + +Jika `topologyKeys` tidak ditentukan atau kosong, tidak ada batasan topologi +yang akan diterapkan. + +Seandainya sebuah klaster dengan Node yang dilabeli dengan nama _host_ , +nama zona, dan nama wilayah mereka, maka kamu dapat mengatur nilai +`topologyKeys` dari sebuah Service untuk mengarahkan lalu lintas jaringan seperti berikut ini. + +* Hanya ke _endpoint_ dalam Node yang sama, gagal jika tidak ada _endpoint_ pada + Node: `[" kubernetes.io/hostname "]`. +* Lebih memilih ke _endpoint_ dalam Node yang sama, jika tidak ditemukan maka ke _endpoint_ pada +  zona yang sama, diikuti oleh wilayah yang sama, dan selain itu gagal: + `[" kubernetes.io/hostname ", "topology.kubernetes.io/zone", "topology.kubernetes.io/region"] `. +  Ini mungkin berguna, misalnya, dalam kasus di mana lokalitas data sangat penting. +* Lebih memilih ke _endpoint_ dalam zona yang sama, tetapi memilih _endpoint_ mana saja yang + tersedia apabila tidak ada yang tersedia dalam zona ini: +  `[" topology.kubernetes.io/zone "," * "]`. + + +## Batasan + +* Topologi Service tidak kompatibel dengan `externalTrafficPolicy=Local`, dan +   karena itu Service tidak dapat menggunakan kedua fitur ini sekaligus. Dimungkinkan untuk menggunakan +   kedua fitur pada klaster yang sama untuk Service yang berbeda, bukan untuk +   Service yang sama. + +* Untuk saat ini kunci topologi yang valid hanya terbatas pada `kubernetes.io/hostname`, +   `topology.kubernetes.io/zone`, dan` topology.kubernetes.io/region`, tetapi akan +   digeneralisasikan ke label Node yang lain di masa depan. + +* Kunci topologi harus merupakan kunci label yang valid dan paling banyak hanya 16 kunci yang dapat ditentukan. + +* Nilai _catch-all_, `" * "`, harus menjadi nilai terakhir pada kunci topologi, jika +   nilai itu digunakan. + + +## Contoh + +Berikut ini adalah contoh umum penggunaan fitur topologi Service. + +### Hanya pada _endpoint_ pada Node lokal + +Service yang hanya merutekan ke _endpoint_ pada Node lokal. 
Jika tidak ada _endpoint_ pada Node, lalu lintas jaringan akan dihentikan: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" +``` + +### Lebih memilih _endpoint_ pada Node lokal + +Service yang lebih memilih _endpoint_ pada Node lokal tetapi akan memilih ke _endpoint_ +dalam klaster jika _endpoint_ pada Node lokal tidak ada: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" + - "*" +``` + + +### Hanya untuk _endpoint_ pada Zona atau Wilayah + +Service yang lebih memilih _endpoint_ zona daripada wilayah. Jika tidak ada _endpoint_ pada +keduanya, make lalu lintas jaringan akan dihentikan. + + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "topology.kubernetes.io/zone" + - "topology.kubernetes.io/region" +``` + +### Lebih memilih _endpoint_ pada Node Lokal, Zonal, terakhir Regional + +Service yang lebih memilih _endpoint_ pada Node lokal, zonal, kemudian regional +tetapi jika tetap tidak ditemukan maka akan memilih _endpoint_ diseluruh klaster. + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" + - "topology.kubernetes.io/zone" + - "topology.kubernetes.io/region" + - "*" +``` + + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Baca tentang [mengaktifkan topologi Service](/docs/tasks/administer-cluster/enabling-service-topology) +* Baca [menghubungkan aplikasi dengan Service](/docs/concepts/services-networking/connect-applications-service/) + +{{% /capture %}} diff --git a/content/id/docs/concepts/workloads/controllers/deployment.md b/content/id/docs/concepts/workloads/controllers/deployment.md index 1f7c43d14ee40..5d8d68114141f 100644 --- a/content/id/docs/concepts/workloads/controllers/deployment.md +++ b/content/id/docs/concepts/workloads/controllers/deployment.md @@ -11,8 +11,8 @@ weight: 30 {{% capture overview %}} -Deployment menyediakan pembaruan [Pods](/docs/id/concepts/workloads/pods/pod/) dan -[ReplicaSets](/docs/id/concepts/workloads/controllers/replicaset/) secara deklaratif. +Deployment menyediakan pembaruan [Pods](/id/docs/concepts/workloads/pods/pod/) dan +[ReplicaSets](/id/docs/concepts/workloads/controllers/replicaset/) secara deklaratif. Kamu mendeskripsikan sebuah state yang diinginkan dalam Deployment, kemudian Deployment {{< glossary_tooltip term_id="controller" >}} mengubah state sekarang menjadi seperti pada deskripsi secara bertahap. Kamu dapat mendefinisikan Deployment untuk membuat ReplicaSets baru atau untuk menghapus Deployment yang sudah ada dan mengadopsi semua resourcenya untuk Deployment baru. 
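Sebagai gambaran singkat dari alur deklaratif tersebut, sketsa `kubectl` berikut memperlihatkan bagaimana state yang diinginkan diterapkan lalu dipantau. Nama berkas `deployment.yaml` dan nama Deployment `nginx-deployment` hanyalah contoh (asumsi untuk ilustrasi), bukan bagian dari dokumen di atas:

```shell
# Deklarasikan state yang diinginkan (nama berkas hanyalah contoh)
kubectl apply -f deployment.yaml

# Controller Deployment menyesuaikan state klaster secara bertahap;
# pantau prosesnya sampai selesai (nama Deployment hanyalah contoh)
kubectl rollout status deployment/nginx-deployment

# Deployment mengelola ReplicaSet dan Pod di belakang layar
kubectl get replicaset
kubectl get pods
```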
diff --git a/content/id/docs/concepts/workloads/pods/ephemeral-containers.md b/content/id/docs/concepts/workloads/pods/ephemeral-containers.md
new file mode 100644
index 0000000000000..c74e6e63b4a2e
--- /dev/null
+++ b/content/id/docs/concepts/workloads/pods/ephemeral-containers.md
@@ -0,0 +1,224 @@
+---
+title: Kontainer Sementara (Ephemeral)
+content_template: templates/concept
+weight: 80
+---
+
+{{% capture overview %}}
+
+{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
+
+Halaman ini memberikan gambaran umum tentang kontainer sementara: satu jenis
+kontainer khusus yang berjalan sementara pada {{< glossary_tooltip term_id="pod" >}}
+yang sudah ada untuk melakukan tindakan yang diinisiasi oleh pengguna, misalnya
+dalam pemecahan masalah. Kamu menggunakan kontainer sementara untuk memeriksa
+layanan, bukan untuk membangun aplikasi.
+
+{{< warning >}}
+Kontainer sementara masih berada dalam fase alpha dan tidak cocok untuk
+klaster produksi. Perkirakan bahwa fitur ini tidak akan berfungsi dalam
+beberapa situasi, seperti saat menargetkan _namespace_
+dari suatu kontainer. Sesuai dengan Kubernetes
+[_Deprecation Policy_](/docs/reference/using-api/deprecation-policy/), fitur alpha
+ini dapat berubah secara signifikan di masa depan atau akan dihapus seluruhnya.
+{{< /warning >}}
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Memahami Kontainer Sementara
+
+{{< glossary_tooltip text="Pod" term_id="pod" >}} adalah blok pembangun
+fundamental dalam aplikasi Kubernetes. Karena Pod dimaksudkan untuk sekali pakai
+dan dapat diganti, kamu tidak dapat menambahkan kontainer ke
+dalam Pod setelah Pod tersebut dibuat. Sebaliknya, kamu biasanya menghapus dan
+mengganti beberapa Pod dengan cara yang terkontrol melalui
+{{< glossary_tooltip text="Deployment" term_id="deployment" >}}.
+
+Namun, kadang-kadang kamu perlu memeriksa keadaan Pod yang sudah ada,
+sebagai contoh untuk memecahkan masalah _bug_ yang sulit direproduksi. Dalam
+kasus ini, kamu dapat menjalankan sebuah kontainer sementara di dalam suatu Pod
+yang sudah ada untuk memeriksa statusnya dan menjalankan segala macam
+perintah.
+
+### Apa itu Kontainer Sementara?
+
+Kontainer sementara berbeda dengan kontainer lainnya karena tidak memiliki
+jaminan sumber daya maupun jaminan eksekusi, dan tidak akan pernah secara
+otomatis di-_restart_, sehingga tidak sesuai untuk membangun aplikasi.
+Kontainer sementara dideskripsikan dengan menggunakan ContainerSpec yang sama
+dengan kontainer biasa, tetapi banyak bagian yang tidak kompatibel dan tidak
+diperbolehkan untuk kontainer sementara.
+
+- Kontainer sementara tidak boleh memiliki port, sehingga bagian seperti
+  `ports`, `livenessProbe`, dan `readinessProbe` tidak diperbolehkan.
+- Alokasi sumber daya untuk Pod tidak dapat diubah, sehingga pengaturan
+  sumber daya tidak diperbolehkan.
+- Untuk daftar lengkap bagian yang diperbolehkan, lihat
+  [referensi dokumentasi Kontainer Sementara](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralcontainer-v1-core).
+
+Kontainer sementara dibuat dengan menggunakan _handler_ khusus
+EphemeralContainers dalam API tanpa menambahkannya langsung ke `pod.spec`,
+sehingga tidak memungkinkan untuk menambahkan kontainer sementara dengan
+menggunakan `kubectl edit`.
+
+Seperti halnya kontainer biasa, kamu tidak dapat mengubah atau menghapus
+kontainer sementara setelah kamu memasukkannya ke dalam sebuah Pod.
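+
+Karena masih berstatus alpha, fitur ini hanya tersedia jika _feature gate_
+`EphemeralContainers` diaktifkan pada komponen klaster. Sketsa berikut hanyalah
+contoh dengan asumsi klaster dibuat dengan kubeadm dan kamu mengelola sendiri
+flag dari komponennya; cara persisnya bergantung pada bagaimana klaster kamu
+dipasang dan dikelola:
+
+```shell
+# Sekadar sketsa (asumsi: kube-apiserver berjalan sebagai static Pod kubeadm;
+# jalur berkas di bawah merupakan asumsi). Periksa apakah feature gate sudah aktif:
+grep feature-gates /etc/kubernetes/manifests/kube-apiserver.yaml
+
+# Jika belum, tambahkan flag berikut pada kube-apiserver dan kubelet,
+# di samping flag lain yang sudah digunakan, lalu mulai ulang komponennya:
+#   --feature-gates=EphemeralContainers=true
+```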
+ +## Penggunaan Kontainer Sementara + +Kontainer sementara berguna untuk pemecahan masalah secara interaktif pada saat +`kubectl exec` tidak mencukupi karena sebuah kontainer telah hancur atau +kontainer _image_ tidak memiliki utilitas untuk _debugging_. + +Khususnya, untuk [_images_distroless_](https://github.com/GoogleContainerTools/distroless) +memungkinkan kamu untuk menyebarkan kontainer *image* minimal yang mengurangi +_surface attack_ dan paparan _bug_ dan _vulnerability_. Karena +_image distroless_ tidak mempunyai sebuah _shell_ atau utilitas _debugging_ apa +pun, sehingga sulit untuk memecahkan masalah _image distroless_ dengan +menggunakan `kubectl exec` saja. + +Saat menggunakan kontainer sementara, akan sangat membantu untuk mengaktifkan +[_process namespace sharing_](/docs/tasks/configure-pod-container/share-process-namespace/) +sehingga kamu dapat melihat proses pada kontainer lain. + +### Contoh + +{{< note >}} +Contoh-contoh pada bagian ini membutuhkan `EphemeralContainers` [feature +gate](/docs/reference/command-line-tools-reference/feature-gates/) untuk +diaktifkan, dan membutuhkan Kubernetes klien dan server versi v1.16 atau +yang lebih baru. +{{< /note >}} + +Contoh-contoh pada bagian ini menunjukkan bagaimana kontainer sementara muncul +dalam API. Kamu biasanya dapat menggunakan plugin `kubectl` untuk mengatasi +masalah untuk mengotomatiskan langkah-langkah ini. + +Kontainer sementara dibuat menggunakan _subresource_ `ephemeralcontainers` +Pod, yang dapat didemonstrasikan menggunakan `kubectl --raw`. Pertama-tama +deskripsikan kontainer sementara untuk ditambahkan dalam daftar +`EphemeralContainers`: + +```json +{ + "apiVersion": "v1", + "kind": "EphemeralContainers", + "metadata": { + "name": "example-pod" + }, + "ephemeralContainers": [{ + "command": [ + "sh" + ], + "image": "busybox", + "imagePullPolicy": "IfNotPresent", + "name": "debugger", + "stdin": true, + "tty": true, + "terminationMessagePolicy": "File" + }] +} +``` + +Untuk memperbarui kontainer yang sudah berjalan dalam `example-pod`: + +```shell +kubectl replace --raw /api/v1/namespaces/default/pods/example-pod/ephemeralcontainers -f ec.json +``` + +Ini akan menampilkan daftar baru dari seluruh kontainer sementara: + +```json +{ + "kind":"EphemeralContainers", + "apiVersion":"v1", + "metadata":{ + "name":"example-pod", + "namespace":"default", + "selfLink":"/api/v1/namespaces/default/pods/example-pod/ephemeralcontainers", + "uid":"a14a6d9b-62f2-4119-9d8e-e2ed6bc3a47c", + "resourceVersion":"15886", + "creationTimestamp":"2019-08-29T06:41:42Z" + }, + "ephemeralContainers":[ + { + "name":"debugger", + "image":"busybox", + "command":[ + "sh" + ], + "resources":{ + + }, + "terminationMessagePolicy":"File", + "imagePullPolicy":"IfNotPresent", + "stdin":true, + "tty":true + } + ] +} +``` + +Kamu dapat melihat kondisi kontainer sementara yang baru dibuat dengan +menggunakan `kubectl describe`: + +```shell +kubectl describe pod example-pod +``` + +``` +... +Ephemeral Containers: + debugger: + Container ID: docker://cf81908f149e7e9213d3c3644eda55c72efaff67652a2685c1146f0ce151e80f + Image: busybox + Image ID: docker-pullable://busybox@sha256:9f1003c480699be56815db0f8146ad2e22efea85129b5b5983d0e0fb52d9ab70 + Port: + Host Port: + Command: + sh + State: Running + Started: Thu, 29 Aug 2019 06:42:21 +0000 + Ready: False + Restart Count: 0 + Environment: + Mounts: +... 
+``` + +Kamu dapat mengakses kontainer sementara yang baru menggunakan +`kubectl attach`: + +```shell +kubectl attach -it example-pod -c debugger +``` + +Jika proses berbagi _namespace_ diaktifkan, kamu dapat melihat proses dari semua +kontainer dalam Pod tersebut. Misalnya, setelah mengakses, kamu jalankan +`ps` di kontainer _debugger_: + +```shell +# Jalankan ini pada _shell_ dalam _debugger_ dari kontainer sementara +ps auxww +``` +Hasilnya akan seperti ini: +``` +PID USER TIME COMMAND + 1 root 0:00 /pause + 6 root 0:00 nginx: master process nginx -g daemon off; + 11 101 0:00 nginx: worker process + 12 101 0:00 nginx: worker process + 13 101 0:00 nginx: worker process + 14 101 0:00 nginx: worker process + 15 101 0:00 nginx: worker process + 16 101 0:00 nginx: worker process + 17 101 0:00 nginx: worker process + 18 101 0:00 nginx: worker process + 19 root 0:00 /pause + 24 root 0:00 sh + 29 root 0:00 ps auxww +``` + +{{% /capture %}} diff --git a/content/id/docs/home/_index.md b/content/id/docs/home/_index.md index 45300a94fc434..cb0e2dc70f799 100644 --- a/content/id/docs/home/_index.md +++ b/content/id/docs/home/_index.md @@ -3,7 +3,7 @@ title: Dokumentasi Kubernetes noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "Home" main_menu: true weight: 10 diff --git a/content/id/docs/reference/glossary/controller.md b/content/id/docs/reference/glossary/controller.md new file mode 100755 index 0000000000000..c88dfccb147fd --- /dev/null +++ b/content/id/docs/reference/glossary/controller.md @@ -0,0 +1,30 @@ +--- +title: Controller +id: controller +date: 2018-04-12 +full_link: /docs/concepts/architecture/controller/ +short_description: > + Kontrol tertutup yang mengawasi kondisi bersama dari klaster melalui apiserver dan membuat perubahan yang mencoba untuk membawa kondisi saat ini ke kondisi yang diinginkan. + +aka: +tags: +- architecture +- fundamental +--- +Di Kubernetes, _controller_ adalah kontrol tertutup yang mengawasi kondisi +{{< glossary_tooltip term_id="cluster" text="klaster">}} anda, lalu membuat atau +meminta perubahan jika diperlukan. +Setiap _controller_ mencoba untuk memindahkan status klaster saat ini lebih +dekat ke kondisi yang diinginkan. + + + +_Controller_ mengawasi keadaan bersama dari klaster kamu melalui +{{< glossary_tooltip text="apiserver" term_id="kube-apiserver" >}} (bagian dari +{{< glossary_tooltip term_id="control-plane" >}}). + +Beberapa _controller_ juga berjalan di dalam _control plane_, menyediakan +kontrol tertutup yang merupakan inti dari operasi Kubernetes. Sebagai contoh: +_controller Deployment_, _controller daemonset_, _controller namespace_, dan +_controller volume persisten_ (dan lainnya) semua berjalan di dalam +{{< glossary_tooltip term_id="kube-controller-manager" >}}. diff --git a/content/id/docs/reference/glossary/service.md b/content/id/docs/reference/glossary/service.md new file mode 100755 index 0000000000000..d156813b0e4a3 --- /dev/null +++ b/content/id/docs/reference/glossary/service.md @@ -0,0 +1,23 @@ +--- +title: Service +id: service +date: 2020-04-05 +full_link: /docs/concepts/services-networking/service/ +short_description: > + Sebuah Cara untuk mengekspos aplikasi yang berjalan pada sebuah kumpulan Pod sebagai layanan jaringan. + +aka: +tags: +- fundamental +- core-object +--- +Suatu cara yang abstrak untuk mengekspos aplikasi yang berjalan pada sebuah kumpulan +{{}} sebagai layanan jaringan. + + +Rangkaian Pod yang ditargetkan oleh Service (biasanya) ditentukan oleh +{{}}. 
+Jika lebih banyak Pod ditambahkan atau dihapus, maka kumpulan Pod yang cocok +dengan Selector juga akan berubah. Service memastikan bahwa lalu lintas +jaringan dapat diarahkan ke kumpulan Pod yang ada saat ini sebagai +Workload. \ No newline at end of file diff --git a/content/id/docs/tasks/_index.md b/content/id/docs/tasks/_index.md new file mode 100644 index 0000000000000..9e213d5a99533 --- /dev/null +++ b/content/id/docs/tasks/_index.md @@ -0,0 +1,94 @@ +--- +title: Tugas (Tasks) +main_menu: true +weight: 50 +content_template: templates/concept +--- + +{{< toc >}} + +{{% capture overview %}} + +Bagian dokumentasi Kubernetes ini berisi halaman-halaman yang perlihatkan +bagaimana melakukan setiap tugas (_task_). Halaman tugas menunjukkan cara melakukan +satu hal saja, biasanya dengan memberikan urutan langkah pendek. + +{{% /capture %}} + +{{% capture body %}} + +## Antarmuka Pengguna Berbasis Web (Dashboard) + +Melakukan _deploy_ dan mengakses _dashboard_ berbasis web untuk +membantu kamu mengelola dan memantau aplikasi yang dimasukkan ke dalam container +di Kubernetes. + +## Menggunakan Baris Perintah kubectl + +Instalasi dan konfigurasi utilitas baris perintah `kubectl` yang digunakan untuk +mengelola secara langsung klaster Kubernetes. + +## Mengkonfigurasi Pod dan Container + +Melakukan tugas konfigurasi yang umum untuk Pod dan Container. + +## Menjalankan Aplikasi + +Melakukan tugas manajemen aplikasi secara umum, seperti _rolling updates_, memasukkan +informasi ke dalam Pod, dan penskalaan Pod secara horisontal. + +## Menjalankan Job + +Menjalankan Job dengan menggunakan pemrosesan paralel. + +## Mengakses Aplikasi dalam Klaster + +Mengkonfigurasi _load balancing_, _port forwarding_, atau membangun _firewall_ +atau konfigurasi DNS untuk mengakses aplikasi dalam sebuah klaster. + +## _Monitoring, Logging_, dan _Debugging_ + +Mengatur _monitoring_ (pemantauan) dan _logging_ (pencatatan) untuk memecahkan +masalah klaster atau melakukan _debug_ (pelacakan) aplikasi yang dikontainerisasi. + +## Mengakses API Kubernetes + +Mempelajari berbagai metode untuk mengakses API Kubernetes secara langsung. + +## Menggunakan TLS + +Mengkonfigurasi aplikasi kamu untuk percaya dan menggunakan klaster _Certificate +Authority_ (CA). + +## Mengelola Klaster + +Mempelajari tugas umum untuk mengelola klaster. + +## Mengelola Aplikasi yang _Stateful_ + +Melakukan tugas umum untuk mengelola aplikasi yang _Stateful_, termasuk +penskalaan, penghapusan, dan _debugging_ StatefulSets. + +## _Daemon_ Klaster + +Melakukan tugas-tugas umum untuk mengelola DaemonSet, seperti melakukan _rolling +updates_. + +## Mengelola GPU + +Mengkonfigurasi dan menjadwalkan GPU NVIDIA untuk digunakan sebagai sumber daya +oleh Node dalam sebuah klaster. + +## Mengelola _HugePages_ + +Mengkonfigurasi dan menjadwalkan _HugePages_ sebagai sumber daya yang dapat +dijadwalkan dalam sebuah klaster. + +{{% /capture %}} + +{{% capture whatsnext %}} + +Jika kamu ingin menulis halaman tugas (_task_), silahkan lihat +[Membuat Dokumentasi _Pull Request_](/docs/home/contribute/create-pull-request/). 
+ +{{% /capture %}} diff --git a/content/id/docs/tasks/access-application-cluster/_index.md b/content/id/docs/tasks/access-application-cluster/_index.md new file mode 100755 index 0000000000000..eab23b1e9cb62 --- /dev/null +++ b/content/id/docs/tasks/access-application-cluster/_index.md @@ -0,0 +1,5 @@ +--- +title: "Mengakes Aplikasi-aplikasi di sebuah Klaster" +weight: 60 +--- + diff --git a/content/id/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/id/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md new file mode 100644 index 0000000000000..e1404100b5dd5 --- /dev/null +++ b/content/id/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md @@ -0,0 +1,201 @@ +--- +title: Menggunakan Port Forwarding untuk Mengakses Aplikasi di sebuah Klaster +content_template: templates/task +weight: 40 +min-kubernetes-server-version: v1.10 +--- + +{{% capture overview %}} + +Halaman ini menunjukkan bagaimana menggunakan `kubectl port-forward` untuk menghubungkan sebuah server Redis yang sedang berjalan di sebuah klaster Kubernetes. Tipe dari koneksi ini dapat berguna untuk melakukan _debugging_ basis data. + +{{% /capture %}} + + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +* Install [redis-cli](http://redis.io/topics/rediscli). + +{{% /capture %}} + + +{{% capture steps %}} + +## Membuat Deployment dan Service Redis + +1. Buat sebuah Deployment yang menjalankan Redis: + + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml + ``` + + Keluaran dari sebuah perintah yang sukses akan memverifikasi bahwa Deployment telah terbuat: + + ``` + deployment.apps/redis-master created + ``` + + Lihat status Pod untuk memeriksa apakah sudah siap: + + ```shell + kubectl get pods + ``` + + Keluaran menampilkan Pod yang telah terbuat: + + ``` + NAME READY STATUS RESTARTS AGE + redis-master-765d459796-258hz 1/1 Running 0 50s + ``` + + Lihat status Deployment: + + ```shell + kubectl get deployment + ``` + + Keluaran menampilkan bahwa Deployment telah terbuat: + + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + redis-master 1/1 1 1 55s + ``` + + Deployment secara otomatis mengatur sebuah ReplicaSet. + Lihat status ReplicaSet menggunakan: + + ```shell + kubectl get replicaset + ``` + + Keluaran menampilkan bahwa ReplicaSet telah terbuat: + + ``` + NAME DESIRED CURRENT READY AGE + redis-master-765d459796 1 1 1 1m + ``` + + +2. Buat sebuah Service untuk mengekspos Redis di jaringan: + + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml + ``` + + Keluaran dari perintah yang sukses akan memverifikasi bahwa Service telah terbuat: + + ``` + service/redis-master created + ``` + + Lihat Service yang telah terbuat menggunakan: + + ```shell + kubectl get service redis-master + ``` + + Keluaran menampilkan service yang telah terbuat: + + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + redis-master ClusterIP 10.0.0.213 6379/TCP 27s + ``` + +3. 
Periksa apakah server Redis berjalan di Pod, dan mendengarkan porta 6379: + + ```shell + # Ubah redis-master-765d459796-258hz menjadi nama Pod + kubectl get pod redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' + ``` + + Keluaran akan menampilkan porta dari Redis di Pod tersebut: + + ``` + 6379 + ``` + + (ini adalah porta TCP yang dialokasi untuk Redis di internet) + +## Meneruskan sebuah porta lokal ke sebuah porta pada Pod + +1. `kubectl port-forward` memungkinkan penggunaan nama sumber daya, seperti sebuah nama Pod, untuk memilih Pod yang sesuai untuk melakukan penerusan porta. + + + ```shell + # Ubah redis-master-765d459796-258hz menjadi nama Pod + kubectl port-forward redis-master-765d459796-258hz 7000:6379 + ``` + + yang sama seperti + + ```shell + kubectl port-forward pods/redis-master-765d459796-258hz 7000:6379 + ``` + + atau + + ```shell + kubectl port-forward deployment/redis-master 7000:6379 + ``` + + atau + + ```shell + kubectl port-forward replicaset/redis-master 7000:6379 + ``` + + atau + + ```shell + kubectl port-forward service/redis-master 7000:6379 + ``` + + Semua perintah di atas berfungsi. Keluarannya mirip dengan ini: + + ``` + I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:7000 -> 6379 + I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:7000 -> 6379 + ``` + +2. Memulai antarmuka baris perintah (*command line*) Redis: + + ```shell + redis-cli -p 7000 + ``` + +3. Pada baris perintah di Redis, masukkan perintah `ping`: + + ``` + ping + ``` + + Sebuah permintaan *ping* yang sukses akan mengembalikan: + + ``` + PONG + ``` + +{{% /capture %}} + + +{{% capture discussion %}} + +## Diskusi + +Koneksi-koneksi yang dibuat ke porta lokal 7000 diteruskan ke porta 6379 dari Pod yang menjalankan server Redis. +Dengan koneksi ini, kamu dapat menggunakan *workstation* lokal untuk melakukan *debug* basis data yang berjalan di Pod. + +{{< note >}} +`kubectl port-forward` hanya bisa diimplementasikan untuk porta TCP saja. +Dukungan untuk protokol UDP bisa dilihat di +[issue 47862](https://github.com/kubernetes/kubernetes/issues/47862). +{{< /note >}} + +{{% /capture %}} + + +{{% capture whatsnext %}} +Belajar lebih tentang [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward). +{{% /capture %}} diff --git a/content/id/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/id/docs/tasks/access-application-cluster/web-ui-dashboard.md new file mode 100644 index 0000000000000..752e43b2f9207 --- /dev/null +++ b/content/id/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -0,0 +1,168 @@ +--- +title: Antarmuka Pengguna Berbasis Web (Dashboard) +content_template: templates/concept +weight: 10 +card: + name: tasks + weight: 30 + title: Menggunakan Antarmuka Pengguna Berbasis Web Dashboard +--- + +{{% capture overview %}} + +Dashboard adalah antarmuka pengguna Kubernetes. Kamu dapat menggunakan Dashboard untuk men-_deploy_ aplikasi yang sudah dikontainerisasi ke klaster Kubernetes, memecahkan masalah pada aplikasi kamu, dan mengatur sumber daya klaster. Kamu dapat menggunakan Dashboard untuk melihat ringkasan dari aplikasi yang sedang berjalan di klaster kamu, dan juga membuat atau mengedit objek individu sumber daya Kubernetes (seperti Deployment, Job, DaemonSet, dll.). 
Sebagai contoh, kamu dapat mengembangkan sebuah Deployment, menginisiasi sebuah pembaruan bertahap (_rolling update_), memulai kembali sebuah Pod atau men-_deploy_ aplikasi baru menggunakan sebuah _deploy wizard_. + +Dashboard juga menyediakan informasi tentang status dari sumber daya Kubernetes di klaster kamu dan kesalahan apapun yang mungkin telah terjadi.. + +![Antarmuka Pengguna Dashboard Kubernetes](/images/docs/ui-dashboard.png) + +{{% /capture %}} + + +{{% capture body %}} + +## Men-_deploy_ Antarmuka Pengguna Dashboard + +Antarmuka Dashboard tidak ter-_deploy_ secara bawaan. Untuk men-_deploy_-nya, kamu dapat menjalankan perintah berikut: + +``` +kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml +``` + +## Mengakses Antarmuka Dashboard + + +Untuk melindungi data klaster kamu, pen-_deploy_-an Dashboard menggunakan sebuah konfigurasi _RBAC_ yang minimal secara bawaan. Saat ini, Dashboard hanya mendukung otentikasi dengan _Bearer Token_. Untuk membuat token untuk demo, kamu dapat mengikuti petunjuk kita untuk [membuat sebuah contoh pengguna](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md). + +{{< warning >}} +Contoh pengguna yang telah dibuat di tutorial tersebut akan memiliki hak istimewa sebagai administrator dan hanyalah untuk tujuan pembelajaran. +{{< /warning >}} + +### Proksi antarmuka baris perintah (CLI) +Kamu dapat mengakses Dashboard menggunakan perkakas CLI kubectl dengan menjalankan perintah berikut: + +``` +kubectl proxy +``` + +Kubectl akan membuat Dashboard tersedia di http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. + +Antarmuka pengguna berbasis web tersebut hanya dapat di akses dari mesin dimana perintah tersebut dijalankan. Lihat `kubectl proxy --help` untuk lebih lanjut. + +{{< note >}} +Metode otentikasi Kubeconfig tidak mendukung penyedia identitas eksternal atau otentikasi berbasis sertifikat elektronik x509. +{{< /note >}} + +## Tampilan selamat datang + +Ketika kamu mengakses Dashboard di klaster yang kosong, kamu akan melihat laman selamat datang. Laman ini berisi tautan ke dokumen ini serta tombol untuk men-_deploy_ aplikasi pertama kamu. Selain itu, kamu dapat melihat aplikasi-aplikasi sistem mana saja yang berjalan secara bawaan di [Namespace](/docs/tasks/administer-cluster/namespaces/) `kube-system` dari klaster kamu, misalnya Dashboard itu sendiri. + +![Kubernetes Dashboard welcome page](/images/docs/ui-dashboard-zerostate.png) + +## Men-_deploy_ aplikasi yang sudah dikontainerisasi + +Dashboard memungkinkan kamu untuk membuat dan men-_deploy_ aplikasi yang sudah dikontainerisasi sebagai Deployment dan Service opsional dengan sebuah _wizard_ sederhana. Kamu secara manual dapat menentukan detail aplikasi, atau mengunggah sebuah berkas YAML atau JSON yang berisi konfigurasi aplikasi. + +Tekan tombol **CREATE** di pojok kanan atas di laman apapun untuk memulai. + +### Menentukan detail aplikasi + +_Deploy wizard_ meminta kamu untuk menyediakan informasi sebagai berikut: + +- **App name** (wajib): Nama dari aplikasi kamu. Sebuah [label](/docs/concepts/overview/working-with-objects/labels/) dengan nama tersebut akan ditambahkan ke Deployment dan Service, jika ada, akan di-_deploy_. + + Nama aplikasi harus unik di dalam [Namespace](/docs/tasks/administer-cluster/namespaces/) Kubernetes yang kamu pilih. 
Nama tersebut harus dimulai dengan huruf kecil, dan diakhiri dengan huruf kecil atau angka, dan hanya berisi huruf kecil, angka dan tanda hubung (-). Nama tersebut juga dibatasi hanya 24 karakter. Spasi di depan dan belakang nama tersebut diabaikan. + +- **Container image** (wajib): Tautan publik dari sebuah [_image_](/docs/concepts/containers/images/) kontainer Docker pada _registry_ apapun, atau sebuah _image_ privat (biasanya di-_hosting_ di Google Container Registry atau Docker Hub). Spesifikasi _image_ kontainer tersebut harus diakhiri dengan titik dua. + +- **Number of pods** (wajib): Berapa banyak Pod yang kamu inginkan untuk men-_deploy_ aplikasimu. Nilainya haruslah sebuah bilangan bulat positif. + + Sebuah [Deployment](/docs/concepts/workloads/controllers/deployment/) akan terbuat untuk mempertahankan jumlah Pod di klaster kamu. + +- **Service** (opsional): Untuk beberapa aplikasi (misalnya aplikasi _frontend_) kamu mungkin akan mengekspos sebuah [Service](/docs/concepts/services-networking/service/) ke alamat IP publik yang mungkin berada diluar klaster kamu(Service eksternal). Untuk Service eksternal, kamu mungkin perlu membuka lebih dari satu porta jaringan untuk mengeksposnya. Lihat lebih lanjut [di sini](/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/). + + Service lainnya yang hanya dapat diakses dari dalam klaster disebut Service internal. + + Terlepas dari jenis Service, jika kamu memilih untuk membuat sebuah Service dan Container kamu berjalan di sebuah porta(arah masuk), kamu perlu menentukan dua porta. Service akan memetakan porta(arah masuk) ke porta target yang ada di sisi Container. Service akan mengarahkan ke Pod-Pod kamu yang sudah di-_deploy_. Protokol yang didukung adalah TCP dan UDP. Nama DNS internal untuk Service ini akan sesuai dengan nama aplikasi yang telah kamu tentukan diatas. + +Jika membutuhkan, kamu dapat membuka bagian **Advanced options** di mana kamu dapat menyetel lebih banyak pengaturan: + +- **Description**: Tels yang kamu masukkan ke sini akan ditambahkan sebagai sebuah [anotasi](/docs/concepts/overview/working-with-objects/annotations/) ke Deployment dan akan ditampilkan di detail aplikasi. + +- **Labels**: [Label-label](/docs/concepts/overview/working-with-objects/labels/) bawaan yang akan digunakan untuk aplikasi kamu adalah `name` dan `version` aplikasi. Kamu dapat menentukan label lain untuk diterapkan ke Deployment, Service (jika ada), dan Pod, seperti `release`, `environment`, `tier`, `partition`, dan `track` rilis. + + Contoh: + + ```conf +release=1.0 +tier=frontend +environment=pod +track=stable +``` + +- **_Namespace_**: Kubernetes mendukung beberapa klaster virtual yang berjalan di atas klaster fisik yang sama. Klaster virtual ini disebut [Namespace](/docs/tasks/administer-cluster/namespaces/). Mereka mengizinkan kamu untuk mempartisi sumber daya ke beberapa grup yang diberi nama secara logis. + + Dashboard menampilkan semua Namespace yang tersedia dalam sebuah daftar _dropdown_, dan mengizinkan kamu untuk membuat Namespace baru. Nama yang diizinkan untuk Namespace terdiri dari maksimal 63 karakter alfanumerik dan tanda hubung (-), dan tidak boleh ada huruf kapital. + Nama dari Namespace tidak boleh terdiri dari angka saja. Jika nama Namespace disetel menjadi sebuah angka, misalnya 10, maka Pod tersebut akan ditaruh di Namespace `default`. + + Jika pembuatan Namespace berhasil, Namespace tersebut akan dipilih secara bawaan. Jika pembuatannya gagal, maka Namespace yang pertama akan terpilih. 
+
+- **_Image Pull Secret_**: Jika kamu menggunakan _image_ kontainer Docker yang privat, mungkin diperlukan kredensial [_pull secret_](/docs/concepts/configuration/secret/).
+
+  Dashboard menampilkan semua _secret_ yang tersedia dalam daftar _dropdown_, dan mengizinkan kamu untuk membuat _secret_ baru. Nama _secret_ tersebut harus mengikuti aturan nama DNS, misalnya `new.image-pull.secret`. Isi dari sebuah _secret_ harus dienkode dalam bentuk _base64_ dan ditentukan dalam sebuah berkas [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). Nama kredensial dapat berisi maksimal 253 karakter.
+
+  Jika pembuatan _image pull secret_ berhasil, _image pull secret_ tersebut akan terpilih secara bawaan. Jika gagal, maka tidak ada _secret_ yang dipilih.
+
+- **_CPU requirement (cores)_** dan **_Memory requirement (MiB)_**: Kamu dapat menentukan [batasan sumber daya](/docs/tasks/configure-pod-container/limit-range/) minimal untuk Container. Secara bawaan, Pod-Pod berjalan dengan CPU dan memori yang tidak dibatasi.
+
+- **_Run command_** dan **_Run command arguments_**: Secara bawaan, Container-Container kamu akan menjalankan perintah [_entrypoint_](/docs/user-guide/containers/#containers-and-commands) bawaan dari _image_ Docker yang ditentukan. Kamu dapat menggunakan opsi _Run command_ dan _Run command arguments_ untuk mengganti bawaannya.
+
+- **_Run as privileged_**: Pengaturan ini menentukan apakah proses yang berjalan dalam [_privileged container_](/docs/user-guide/pods/#privileged-mode-for-pod-containers) setara dengan proses yang berjalan sebagai _root_ pada _host_-nya. _Privileged container_ dapat menggunakan kemampuan seperti memanipulasi _stack_ jaringan dan mengakses perangkat-perangkat.
+
+- **_Environment variables_**: Kubernetes mengekspos Service melalui [_environment variable_](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). Kamu dapat membuat _environment variable_ atau meneruskan argumen ke perintah-perintah untuk menjalankan Container dengan nilai dari _environment variable_. _Environment variable_ dapat digunakan pada aplikasi-aplikasi untuk menemukan sebuah Service. Nilai _environment variable_ dapat merujuk ke variabel lain menggunakan sintaksis `$(VAR_NAME)`.
+
+### Mengunggah berkas YAML atau JSON
+
+Kubernetes mendukung pengaturan deklaratif. Dengan cara ini, semua pengaturan disimpan dalam bentuk berkas YAML atau JSON menggunakan skema sumber daya [API](/docs/concepts/overview/kubernetes-api/).
+
+Sebagai alternatif untuk menentukan detail aplikasi di _deploy wizard_, kamu dapat menentukan sendiri detail aplikasi kamu dalam berkas YAML atau JSON, dan mengunggah berkas tersebut menggunakan Dashboard.
+
+## Menggunakan Dashboard
+Bagian ini akan menjelaskan bagian-bagian yang ada pada Antarmuka Dashboard Kubernetes; apa saja yang mereka sediakan dan bagaimana cara menggunakannya.
+
+### Navigation
+
+Ketika ada objek Kubernetes yang sudah didefinisikan di dalam klaster, Dashboard akan menampilkannya di tampilan awalnya. Secara bawaan hanya objek-objek dalam Namespace _default_ saja yang ditampilkan di sini dan kamu dapat menggantinya dengan selektor Namespace yang berada di menu navigasi.
+
+Dashboard menampilkan jenis objek Kubernetes dan mengelompokkannya dalam beberapa kategori menu.
+
+#### Admin Overview
+Untuk administrasi klaster dan Namespace, Dashboard menampilkan Node, Namespace, dan PersistentVolume, dan memiliki tampilan yang detail untuk objek-objek tersebut.
Daftar Node berisi metrik penggunaan CPU dan memori yang dikumpulkan dari semua Node. Tampilan detail menampilkan metrik-metrik untuk sebuah Node, spesifikasinya, status, sumber daya yang dialokasikan, _event-event_, dan Pod-Pod yang sedang berjalan di Node tersebut. + +#### Workloads +Menampilkan semua aplikasi yang sedang berjalan di Namespace yang dipilih. Tampilan ini menampilkan aplikasi berdasarkan jenis beban kerja (misalnya, Deployment, Replica Set, Stateful Set, dll.) dan setiap jenis beban kerja memiliki tampilanya sendiri. Daftar ini merangkum informasi yang dapat ditindaklanjuti, seperti berapa banyak Pod yang siap untuk setiap Replica Set atau penggunaan memori pada sebuah Pod. + +Tampilan detail dari beban kerja menampilkan status dan informasi spesifik serta hubungan antara objek. Misalnya, Pod-Pod yang diatur oleh ReplicaSet atau, ReplicaSet-ReplicaSet baru, dan HorizontalPodAutoscaler untuk Deployment. + +#### Services +Menampilkan sumber daya Kubernetes yang mengizinkan untuk mengekspos Service-Service ke jaringan luar dan menemukannya (_service discovery_) di dalam klaster. Untuk itu, tampilan dari Service dan Ingress menunjukan Pod-Pod yang ditarget oleh mereka, _endpoint-endpoint_ internal untuk koneksi klaster, dan _endpoint-endpoint_ eksternal untuk pengguna eksternal. + +#### Storage +Tampilan Storage menampilkan sumber-sumber daya PersistentVolumeClaim yang digunakan oleh aplikasi untuk menyimpan data. + +#### Config Maps dan Secrets +Menampilkan semua sumber daya Kubernetes yang digunakan untuk pengaturan aplikasi yang sedang berjalan di klaster. Pada tampilan ini kamu dapat mengedit dan mengelola objek-objek konfigurasi dan menampilkan kredensial yang tersembunyi secara bawaan. + +#### Logs Viewer +Laman daftar dan detail Pod tertaut dengan laman penampil log (_log viewer_). Kamu dapat menelusuri log yang berasal dari Container-Container pada sebuah Pod. + +![Logs viewer](/images/docs/ui-dashboard-logs-view.png) + +{{% /capture %}} + +{{% capture whatsnext %}} + +Untuk informasi lebih lanjut, lihat +[Laman proyek Kubernetes Dashboard](https://github.com/kubernetes/dashboard). + +{{% /capture %}} diff --git a/content/id/docs/tasks/example-task-template.md b/content/id/docs/tasks/example-task-template.md new file mode 100644 index 0000000000000..d5f9c8ca27140 --- /dev/null +++ b/content/id/docs/tasks/example-task-template.md @@ -0,0 +1,52 @@ +--- +title: Contoh Template Tugas (Task) +content_template: templates/task +toc_hide: true +--- + +{{% capture overview %}} + +{{< note >}} +Pastikan juga kamu [membuat isian di daftar isi](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents) untuk dokumen baru kamu. +{{< /note >}} + +Halaman ini menunjukkan bagaimana ... + +{{% /capture %}} + +{{% capture prerequisites %}} + +* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +* Lakukan ini. +* Lakukan ini juga. + +{{% /capture %}} + +{{% capture steps %}} + +## Menjalankan ... + +1. Lakukan ini. +1. Selanjutnya lakukan ini. Bila mungkin silahkan baca [penjelasan terkait](...). + +{{% /capture %}} + +{{% capture discussion %}} + +## Memahami ... +**[Bagian opsional]** + +Berikut ini hal-hal yang menarik untuk diketahui tentang langkah-langkah yang baru saja kamu lakukan. + +{{% /capture %}} + +{{% capture whatsnext %}} + +**[Bagian optional]** + +* Pelajari tentang [menulis topik baru](/docs/home/contribute/write-new-topic/). 
+* Lihat [menggunakan _template_ halaman - _template_ tugas](/docs/home/contribute/page-templates/#task_template) untuk mengetahui cara menggunakan _template_ ini. + +{{% /capture %}} + + diff --git a/content/id/docs/tasks/inject-data-application/_index.md b/content/id/docs/tasks/inject-data-application/_index.md new file mode 100755 index 0000000000000..45eabcbdb0a60 --- /dev/null +++ b/content/id/docs/tasks/inject-data-application/_index.md @@ -0,0 +1,5 @@ +--- +title: "Memasukkan Data ke dalam Aplikasi" +weight: 30 +--- + diff --git a/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md new file mode 100644 index 0000000000000..a9cce7b3e096b --- /dev/null +++ b/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -0,0 +1,119 @@ +--- +title: Mendefinisikan Variabel Lingkungan untuk sebuah Kontainer +content_template: templates/task +weight: 20 +--- + +{{% capture overview %}} + +Laman ini menunjukkan bagaimana cara untuk mendefinisikan variabel lingkungan (_environment variable_) untuk sebuah Container di dalam sebuah Pod Kubernetes. + +{{% /capture %}} + + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + + +{{% capture steps %}} + +## Mendefinisikan sebuah variabel lingkungan untuk sebuah Container + +Ketika kamu membuat sebuah Pod, kamu dapat mengatur variabel lingkungan untuk Container-Container yang berjalan di dalam sebuah Pod. +Untuk mengatur variabel lingkungan, sertakan bagian `env` atau `envFrom` pada berkas konfigurasi. + +Dalam latihan ini, kamu membuat sebuah Pod yang menjalankan satu buah Container. +Berkas konfigurasi untuk Pod tersebut mendefinisikan sebuah variabel lingkungan dengan nama `DEMO_GREETING` yang bernilai `"Hello from the environment"`. +Berikut berkas konfigurasi untuk Pod tersebut: + +{{< codenew file="pods/inject/envars.yaml" >}} + +1. Buatlah sebuah Pod berdasarkan berkas konfigurasi YAML tersebut: + + ```shell + kubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml + ``` + +2. Tampilkan Pod-Pod yang sedang berjalan: + + ```shell + kubectl get pods -l purpose=demonstrate-envars + ``` + + Keluarannya mirip seperti ini: + + ``` + NAME READY STATUS RESTARTS AGE + envar-demo 1/1 Running 0 9s + ``` + +3. Dapatkan sebuah _shell_ ke Container yang sedang berjalan di Pod kamu: + + ```shell + kubectl exec -it envar-demo -- /bin/bash + ``` + +4. Di _shell_ kamu, jalankan perintah `printenv` untuk melihat daftar variabel lingkungannya. + + ```shell + root@envar-demo:/# printenv + ``` + + Keluarannya mirip seperti ini: + + ``` + NODE_VERSION=4.4.2 + EXAMPLE_SERVICE_PORT_8080_TCP_ADDR=10.3.245.237 + HOSTNAME=envar-demo + ... + DEMO_GREETING=Hello from the environment + DEMO_FAREWELL=Such a sweet sorrow + ``` + +5. Untuk keluar dari _shell_ tersebut, masukkan perintah `exit`. + +{{< note >}} +Variabel-variabel lingkungan yang diatur menggunakan bagian `env` atau `envFrom` akan mengesampingkan +variabel-variabel lingkungan yang ditentukan di dalam _image_ kontainer. +{{< /note >}} + +## Menggunakan variabel-variabel lingkungan di dalam konfigurasi kamu + +Variabel-variabel lingkungan yang kamu definisikan di dalam sebuah konfigurasi Pod dapat digunakan di tempat lain dalam konfigurasi, contohnya di dalam perintah-perintah dan argumen-argumen yang kamu atur dalam Container-Container milik Pod. 
+Pada contoh konfigurasi berikut, variabel-variabel lingkungan `GREETING`, `HONORIFIC`, dan `NAME` disetel masing-masing menjadi `Warm greetings to`, `The Most Honorable`, dan `Kubernetes`. +Variabel-variabel lingkungan tersebut kemudian digunakan dalam argumen CLI yang diteruskan ke Container `env-print-demo`. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: print-greeting +spec: + containers: + - name: env-print-demo + image: bash + env: + - name: GREETING + value: "Warm greetings to" + - name: HONORIFIC + value: "The Most Honorable" + - name: NAME + value: "Kubernetes" + command: ["echo"] + args: ["$(GREETING) $(HONORIFIC) $(NAME)"] +``` + +Setelah dibuat, perintah `echo Warm greetings to The Most Honorable Kubernetes` dijalankan di Container tersebut. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Pelajari lebih lanjut tentang [variabel lingkungan](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). +* Pelajari tentang [menggunakan informasi rahasia sebagai variabel lingkungan](/docs/user-guide/secrets/#using-secrets-as-environment-variables). +* Lihat [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core). + +{{% /capture %}} diff --git a/content/id/docs/tasks/tools/_index.md b/content/id/docs/tasks/tools/_index.md new file mode 100755 index 0000000000000..9bbd67d8fb1ce --- /dev/null +++ b/content/id/docs/tasks/tools/_index.md @@ -0,0 +1,5 @@ +--- +title: "Menginstal Peralatan" +weight: 10 +--- + diff --git a/content/id/docs/tasks/tools/install-kubectl.md b/content/id/docs/tasks/tools/install-kubectl.md new file mode 100644 index 0000000000000..3f24efef54917 --- /dev/null +++ b/content/id/docs/tasks/tools/install-kubectl.md @@ -0,0 +1,496 @@ +--- +title: Instalasi dan Konfigurasi kubectl +content_template: templates/task +weight: 10 +card: + name: tasks + weight: 20 + title: Instalasi kubectl +--- + +{{% capture overview %}} +[Kubectl](/docs/user-guide/kubectl/) adalah perangkat barisan perintah Kubernetes yang digunakan untuk menjalankan berbagai perintah untuk kluster Kubernetes. Kamu dapat menggunakan `kubectl` untuk men-_deploy_ aplikasi, mengatur _resource_ kluster, dan melihat _log_. Daftar operasi `kubectl` dapat dilihat di [Ikhtisar kubectl](/docs/reference/kubectl/overview/). +{{% /capture %}} + +{{% capture prerequisites %}} +Kamu boleh menggunakan `kubectl` versi berapapun selama versi minornya sama atau berbeda satu. Misal, klien v1.2 masih dapat digunakan dengan v1.1, v1.2, dan 1.3 master. Menggunakan versi terbaru `kubectl` dapat menghindari permasalahan yang tidak terduga. +{{% /capture %}} + +{{% capture steps %}} + +## Instalasi kubectl di Linux + +### Instalasi binari kubectl dengan curl di Linux + +1. Unduh versi terbaru dengan perintah: + + ``` + curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl + ``` + + Untuk mengunduh versi spesifik, ganti bagian `curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt` dengan versi yang diinginkan. + + Misal, untuk mengunduh versi {{< param "fullversion" >}} di Linux, ketik: + + ``` + curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl + ``` + +1. Buat agar binari `kubectl` dapat dijalankan. + + ``` + chmod +x ./kubectl + ``` + +1. Pindahkan ke PATH komputer. + + ``` + sudo mv ./kubectl /usr/local/bin/kubectl + ``` +1. 
Pastikan instalasi sudah berhasil dengan melakukan pengecekan versi: + + ``` + kubectl version --client + ``` + +### Instalasi dengan paket manajer bawaan + +{{< tabs name="kubectl_install" >}} +{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}} +sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 +curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - +echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list +sudo apt-get update +sudo apt-get install -y kubectl +{{< /tab >}} +{{< tab name="CentOS, RHEL or Fedora" codelang="bash" >}}cat < /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 +enabled=1 +gpgcheck=1 +repo_gpgcheck=1 +gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +EOF +yum install -y kubectl +{{< /tab >}} +{{< /tabs >}} + +### Instalasi dengan paket manajer lain + +{{< tabs name="other_kubectl_install" >}} +{{% tab name="Snap" %}} +Jika kamu menggunakan Ubuntu atau versi Linux lain yang mendukung paket manajer [snap](https://snapcraft.io/docs/core/install), `kubectl` tersedia dalam bentuk aplikasi di [snap](https://snapcraft.io/). + +```shell +snap install kubectl --classic + +kubectl version --client +``` +{{% /tab %}} +{{% tab name="Homebrew" %}} +Jika kamu menggunakan Linux dengan paket manajer [Homebrew](https://docs.brew.sh/Homebrew-on-Linux), `kubectl` sudah tersedia untuk diinstal di [Homebrew](https://docs.brew.sh/Homebrew-on-Linux#install). +```shell +brew install kubectl + +kubectl version --client +``` +{{% /tab %}} +{{< /tabs >}} + +## Instalasi kubectl di macOS + +### Instalasi binari kubectl dengan curl di macOS + +1. Unduh versi terbaru dengan perintah: + + ``` + curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl" + ``` + + Untuk mengunduh versi spesifik, ganti bagian `curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt` dengan versi yang diinginkan. + + Misal, untuk mengunduh versi {{< param "fullversion" >}} di macOS, ketik: + + ``` + curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl + ``` + +1. Buat agar binari `kubectl` dapat dijalankan. + + ``` + chmod +x ./kubectl + ``` + +1. Pindahkan ke PATH komputer. + + ``` + sudo mv ./kubectl /usr/local/bin/kubectl + ``` +1. Pastikan instalasi sudah berhasil dengan melakukan pengecekan versi: + + ``` + kubectl version --client + ``` + +### Instalasi dengan Homebrew di macOS + +Jika kamu menggunakan macOS dan paket manajer [Homebrew](https://brew.sh/), kamu dapat menginstal `kubectl` langsung dengan Homebrew. + +1. Jalankan perintah: + + ``` + brew install kubectl + ``` + atau + + ``` + brew install kubernetes-cli + ``` + +1. Pastikan instalasi sudah berhasil dengan melakukan pengecekan versi: + + ``` + kubectl version --client + ``` + +### Instalasi dengan Macports di macOS + +Jika kamu menggunakan macOS dan paket manajer [Macports](https://macports.org/), kamu dapat menginstal `kubectl` langsung dengan Macports. + +1. Jalankan perintah: + + ``` + sudo port selfupdate + sudo port install kubectl + ``` + +1. 
Pastikan instalasi sudah berhasil dengan melakukan pengecekan versi: + + ``` + kubectl version --client + ``` + +## Instalasi kubectl di Windows + +### Instalasi binari kubectl dengan curl di Windows + +1. Unduh versi terbaru {{< param "fullversion" >}} dari [tautan ini](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe). + + Atau jika sudah ada `curl`, jalankan perintah ini: + + ``` + curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe + ``` + + Untuk mendapatkan versi stabil terakhir (misal, untuk _scripting_), lihat di [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt). + +1. Tambahkan binary yang sudah diunduh ke PATH komputer. +1. Pastikan instalasi sudah berhasil dengan melakukan pengecekan versi: + + ``` + kubectl version --client + ``` +{{< note >}} +[Docker Desktop untuk Windows](https://docs.docker.com/docker-for-windows/#kubernetes) sudah menambahkan versi `kubectl`nya sendiri ke PATH. Jika kamu sudah menginstal Docker Desktop, kamu harus menambahkan _entry_ ke PATH sebelum yang ditambahkan oleh _installer_ Docker Desktop atau kamu dapat menghapus `kubectl` bawaan Docker Desktop. +{{< /note >}} + +### Instalasi dengan Powershell dari PSGallery + +Jika kamu menggunakan Windows dan paket manajer [Powershell Gallery](https://www.powershellgallery.com/), kamu dapat menginstal dan melakukan pembaruan `kubectl` dengan Powershell. + +1. Jalankan perintah berikut (jangan lupa untuk memasukkan `DownloadLocation`): + + ``` + Install-Script -Name install-kubectl -Scope CurrentUser -Force + install-kubectl.ps1 [-DownloadLocation ] + ``` + + {{< note >}}Jika kamu tidak menambahkan `DownloadLocation`, `kubectl` akan diinstal di dalam direktori _temp_ pengguna.{{< /note >}} + + _Installer_ akan membuat `$HOME/.kube` dan membuat berkas konfigurasi + +1. Pastikan instalasi sudah berhasil dengan melakukan pengecekan versi: + + ``` + kubectl version --client + ``` + + {{< note >}}Proses pembaruan dapat dilakukan dengan menjalankan ulang dua perintah yang terdapat pada langkah 1.{{< /note >}} + +### Instalasi di Windows menggunaakn Chocolatey atau Scoop + +Untuk menginstal `kubectl` di Windows kamu dapat menggunakan paket manajer [Chocolatey](https://chocolatey.org) atau _installer_ barisan perintah [Scoop](https://scoop.sh). +{{< tabs name="kubectl_win_install" >}} +{{% tab name="choco" %}} + + choco install kubernetes-cli + +{{% /tab %}} +{{% tab name="scoop" %}} + + scoop install kubectl + +{{% /tab %}} +{{< /tabs >}} +1. Pastikan instalasi sudah berhasil dengan melakukan pengecekan versi: + + ``` + kubectl version --client + ``` + +1. Pindah ke direktori utama: + + ``` + cd %USERPROFILE% + ``` +1. Buat direktori `.kube`: + + ``` + mkdir .kube + ``` + +1. Pindah ke direktori `.kube` yang baru saja dibuat: + + ``` + cd .kube + ``` + +1. Lakukan konfigurasi `kubectl` agar menggunakan _remote_ kluster Kubernetes: + + ``` + New-Item config -type file + ``` + + {{< note >}}Ubah berkas konfigurasi dengan editor teks pilihanmu, misal Notepad.{{< /note >}} + +## Unduh dengan menggunakan Google Cloud SDK + +Kamu dapat menginstal `kubectl` dengan menggunakan Google Cloud SDK. + +1. Instal [Google Cloud SDK](https://cloud.google.com/sdk/). +1. Jalankan perintah instalasi `kubectl`: + + ``` + gcloud components install kubectl + ``` + +1. 
Pastikan instalasi sudah berhasil dengan melakukan pengecekan versi: + + ``` + kubectl version --client + ``` + +## Memeriksa konfigurasi kubectl + +Agar `kubectl` dapat mengakses kluster Kubernetes, dibutuhkan sebuah [berkas kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/), yang akan otomatis dibuat ketika kamu membuat kluster baru menggunakan [kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh) atau setelah berhasil men-_deploy_ kluster Minikube. Secara _default_, konfigurasi `kubectl` disimpan di `~/.kube/config`. + +Kamu dapat memeriksa apakah konfigurasi `kubectl` sudah benar dengan mengambil _state_ kluster: + +```shell +kubectl cluster-info +``` +Jika kamu melihat respons URL maka konfigurasi kluster `kubectl` sudah benar. + +Tetapi jika kamu melihat pesan seperti di bawah maka `kubectl` belum dikonfigurasi dengan benar atau tidak dapat terhubung ke kluster Kubernetes. + +```shell +The connection to the server was refused - did you specify the right host or port? +``` + +Selanjutnya, apabila kamu ingin menjalankan kluster Kubernetes di laptop (lokal), kamu memerlukan sebuah perangkat seperti Minikube sebelum menjalankan ulang perintah yang ada di atas. + +Jika `kubectl cluster-info` mengembalikan respons URL tetapi kamu masih belum dapat mengakses ke kluster, kamu bisa menggunakan perintah di bawah untuk memeriksa apakah kluster sudah dikonfigurasi dengan benar. + +```shell +kubectl cluster-info dump +``` + +## Konfigurasi kubectl yang dapat dilakukan + +### Menyalakan _auto complete_ untuk terminal + +`kubectl` menyediakan fitur _auto complete_ untuk Bash dan Zsh yang dapat memudahkanmu ketika mengetik di terminal. + +Ikuti petunjuk di bawah untuk menyalakan _auto complete_ untuk Bash dan Zsh. + +{{< tabs name="kubectl_autocompletion" >}} + +{{% tab name="Bash di Linux" %}} + +### Pendahuluan + +_Completion script_ `kubectl` untuk Bash dapat dibuat dengan perintah `kubectl completion bash`. Masukkan skrip tersebut ke dalam terminal sebagai sumber untuk menyalakan _auto complete_ dari `kubectl`. + +Namun, _completion script_ tersebut tergantung dengan [**bash-completion**](https://github.com/scop/bash-completion), yang artinya kamu harus menginstal program tersebut terlebih dahulu (kamu dapat memeriksa apakah kamu sudah memiliki bash-completion dengan menjalankan perintah `type _init_completion`). + +### Instalasi bash-completion + +bash-completion disediakan oleh banyak manajer paket (lihat [di sini](https://github.com/scop/bash-completion#installation)). Kamu dapat menginstalnya dengan menggunakan perintah `apt-get install bash-completion` atau `yum install bash-completion`, atau dsb. + +Perintah di atas akan membuat skrip utama bash-completion di `/usr/share/bash-completion/bash_completion`. Terkadang kamu juga harus menambahkan skrip tersebut ke dalam berkas `~/.bashrc`, tergantung paket manajer yang kamu pakai. + +Untuk memastikan, muat ulang terminalmu dan jalankan `type _init_completion`. Jika perintah berhasil maka instalasi selesai. Jika tidak, tambahkan teks berikut ke dalam berkas `~/.bashrc`: + +```shell +source /usr/share/bash-completion/bash_completion +``` + +Muat ulang lagi terminalmu dan pastikan bash-completion sudah berhasil diinstal dengan menjalankan `type _init_completion`. + +### Menyalakan _auto complete_ kubectl + +Sekarang kamu harus memastikan bahwa _completion script_ untuk `kubectl` sudah dimasukkan sebagai sumber _auto complete_ di semua sesi terminal. 
Kamu dapat melakukannya dengan dua cara: + +- Masukkan _completion script_ sebagai sumber di berkas `~/.bashrc`: + + ```shell + echo 'source <(kubectl completion bash)' >>~/.bashrc + ``` + +- Menambahkan _completion script_ ke direktori `/etc/bash_completion.d`: + + ```shell + kubectl completion bash >/etc/bash_completion.d/kubectl + ``` + +Jika kamu menggunakan alias untuk `kubectl`, kamu masih dapat menggunakan fitur _auto complete_ dengan menjalankan perintah: + + ```shell + echo 'alias k=kubectl' >>~/.bashrc + echo 'complete -F __start_kubectl k' >>~/.bashrc + ``` + +{{< note >}} +Semua sumber _completion script_ bash-completion terdapat di `/etc/bash_completion.d`. +{{< /note >}} + +Kedua cara tersebut sama, kamu bisa mengambil salah satu cara saja. Setelah memuat ulang terminal, _auto complete_ dari `kubectl` seharusnya sudah dapat bekerja. + +{{% /tab %}} + + +{{% tab name="Bash di macOS" %}} + + +### Pendahuluan + +_Completion script_ `kubectl` untuk Bash dapat dibuat dengan perintah `kubectl completion bash`. Masukkan skrip tersebut ke dalam terminal sebagai sumber untuk menyalakan _auto complete_ dari `kubectl`. + +Namun, _completion script_ tersebut tergantung dengan [**bash-completion**](https://github.com/scop/bash-completion), yang artinya kamu harus menginstal program tersebut terlebih dahulu. + +{{< warning>}} +Terdapat dua versi bash-completion, v1 dan v2. V1 untuk Bash 3.2 (_default_ dari macOs), dan v2 untuk Bash 4.1+. _Completion script_ `kubectl` **tidak kompatibel** dengan bash-completion v1 dan Bash 3.2. Dibutuhkan **bash-completion v2** dan **Bash 4.1+** agar _completion script_ `kubectl` dapat bekerja dengan baik. Maka dari itu, kamu harus menginstal dan menggunakan Bash 4.1+ ([*panduan*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)) untuk dapat menggunakan fitur _auto complete_ dari `kubectl`. Ikuti panduan di bawah setelah kamu menginstal Bash 4.1+ (yang artinya Bash versi 4.1 atau lebih baru). +{{< /warning >}} + +### Pembaruan Bash + +Panduan di bawah berasumsi kamu menggunakan Bash 4.1+. Kamu dapat memeriksa versi Bash dengan menjalankan: + +```shell +echo $BASH_VERSION +``` + +Jika versinya sudah terlalu usang, kamu dapat menginstal/memperbaruinya dengan menggunakan Homebrew: + +```shell +brew install bash +``` + +Muat ulang terminalmu dan pastikan versi yang diharapkan sudah dipakai: + +```shell +echo $BASH_VERSION $SHELL +``` + +Homebrew biasanya akan menginstalnya di `/usr/local/bin/bash`. + +### Instalasi bash-completion + +{{< note >}} +Seperti yang sudah disebutkan, panduan di bawah berasumsi kamu menggunakan Bash 4.1+, yang berarti kamu akan menginstal bash-completion v2 (_auto complete_ dari `kubectl` tidak kompatibel dengan Bash 3.2 dan bash-completion v1). +{{< /note >}} + +Kamu dapat memeriksa apakah kamu sudah memiliki bash-completion v2 dengan perintah `type _init_completion`. Jika belum, kamu dapat menginstalnya dengan menggunakan Homebrew: + +```shell +brew install bash-completion@2 +``` + +Seperti yang disarankan keluaran perintah di atas, tambahkan teks berikut ke berkas `~/.bashrc`: + +```shell +export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" +[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" +``` + +Muat ulang terminalmu dan pastikan bash-completion v2 sudah terinstal dengan perintah `type _init_completion`. 
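Sebagai gambaran, pemeriksaan tersebut kurang lebih terlihat seperti berikut (keluaran pada komentar di bawah hanyalah contoh ilustrasi dan dapat sedikit berbeda di mesinmu):

```shell
# periksa apakah fungsi _init_completion dari bash-completion v2 sudah tersedia
type _init_completion
# jika sudah terpasang, keluarannya diawali dengan baris seperti:
# _init_completion is a function
```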
+ +### Menyalakan _auto complete_ kubectl + +Sekarang kamu harus memastikan bahwa _completion script_ untuk `kubectl` sudah dimasukkan sebagai sumber _auto complete_ di semua sesi terminal. Kamu dapat melakukannya dengan beberapa cara: + +- Masukkan _completion script_ sebagai sumber di berkas `~/.bashrc`: + + ```shell + echo 'source <(kubectl completion bash)' >>~/.bashrc + ``` + +- Menambahkan _completion script_ ke direktori `/etc/bash_completion.d`: + + ```shell + kubectl completion bash >/etc/bash_completion.d/kubectl + ``` + +- Jika kamu menggunakan alias untuk `kubectl`, kamu masih dapat menggunakan fitur _auto complete_ dengan menjalankan perintah: + + ```shell + echo 'alias k=kubectl' >>~/.bashrc + echo 'complete -F __start_kubectl k' >>~/.bashrc + ``` +- Jika kamu menginstal `kubectl` dengan Homebrew (seperti yang sudah dijelaskan [di atas](#install-with-homebrew-on-macos)), maka _completion script_ untuk `kubectl` sudah berada di `/usr/local/etc/bash_completion.d/kubectl`. Kamu tidak perlu melakukan apa-apa lagi. + +{{< note >}} +bash-completion v2 yang diinstal dengan Homebrew meletakkan semua berkas nya di direktori `BASH_COMPLETION_COMPAT_DIR`, yang membuat dua cara terakhir dapat bekerja. +{{< /note >}} + +Setelah memuat ulang terminal, _auto complete_ dari `kubectl` seharusnya sudah dapat bekerja. +{{% /tab %}} + +{{% tab name="Zsh" %}} + +_Completion script_ `kubectl` untuk Zsh dapat dibuat dengan perintah `kubectl completion zsh`. Masukkan skrip tersebut ke dalam terminal sebagai sumber untuk menyalakan _auto complete_ dari `kubectl`. + +Tambahkan baris berikut di berkas `~/.zshrc` untuk menyalakan _auto complete_ dari `kubectl`: + +```shell +source <(kubectl completion zsh) +``` + +Jika kamu menggunakan alias untuk `kubectl`, kamu masih dapat menggunakan fitur _auto complete_ dengan menjalankan perintah: + +```shell +echo 'alias k=kubectl' >>~/.zshrc +echo 'complete -F __start_kubectl k' >>~/.zshrc +``` + +Setelah memuat ulang terminal, _auto complete_ dari `kubectl` seharusnya sudah dapat bekerja. + +Jika kamu mendapatkan pesan gagal seperti `complete:13: command not found: compdef`, maka tambahkan teks berikut ke awal berkas `~/.zshrc`: + +```shell +autoload -Uz compinit +compinit +``` +{{% /tab %}} +{{< /tabs >}} + +{{% /capture %}} + +{{% capture whatsnext %}} +* [Instalasi Minikube](/docs/tasks/tools/install-minikube/) +* Lihat [panduan memulai](/docs/setup/) untuk mencari tahu tentang pembuatan kluster. +* [Pelajari cara untuk menjalankan dan mengekspos aplikasimu.](/docs/tasks/access-application-cluster/service-access-application-cluster/) +* Jika kamu membutuhkan akses ke kluster yang tidak kamu buat, lihat [dokumen Sharing Cluster Access](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). +* Baca [dokumen referensi kubectl](/docs/reference/kubectl/kubectl/) +{{% /capture %}} diff --git a/content/id/docs/tutorials/hello-minikube.md b/content/id/docs/tutorials/hello-minikube.md index e3e88be10412d..b8281c4c871b8 100644 --- a/content/id/docs/tutorials/hello-minikube.md +++ b/content/id/docs/tutorials/hello-minikube.md @@ -75,7 +75,7 @@ Pod kamu dan melakukan restart saat Kontainer di dalam Pod tersebut mati. Pod menjalankan Kontainer sesuai dengan image Docker yang telah diberikan. ```shell - kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node + kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 ``` 2. 
Lihat Deployment: diff --git a/content/id/examples/pods/inject/envars.yaml b/content/id/examples/pods/inject/envars.yaml new file mode 100644 index 0000000000000..ebf5214376f19 --- /dev/null +++ b/content/id/examples/pods/inject/envars.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Pod +metadata: + name: envar-demo + labels: + purpose: demonstrate-envars +spec: + containers: + - name: envar-demo-container + image: gcr.io/google-samples/node-hello:1.0 + env: + - name: DEMO_GREETING + value: "Hello from the environment" + - name: DEMO_FAREWELL + value: "Such a sweet sorrow" diff --git a/content/id/examples/service/networking/dual-stack-default-svc.yaml b/content/id/examples/service/networking/dual-stack-default-svc.yaml new file mode 100644 index 0000000000000..00ed87ba196be --- /dev/null +++ b/content/id/examples/service/networking/dual-stack-default-svc.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/id/examples/service/networking/dual-stack-ipv4-svc.yaml b/content/id/examples/service/networking/dual-stack-ipv4-svc.yaml new file mode 100644 index 0000000000000..a875f44d6d060 --- /dev/null +++ b/content/id/examples/service/networking/dual-stack-ipv4-svc.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + ipFamily: IPv4 + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/id/examples/service/networking/dual-stack-ipv6-svc.yaml b/content/id/examples/service/networking/dual-stack-ipv6-svc.yaml new file mode 100644 index 0000000000000..2aa0725059bbc --- /dev/null +++ b/content/id/examples/service/networking/dual-stack-ipv6-svc.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + ipFamily: IPv6 + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/id/includes/task-tutorial-prereqs.md b/content/id/includes/task-tutorial-prereqs.md new file mode 100644 index 0000000000000..3c8abb5b091ae --- /dev/null +++ b/content/id/includes/task-tutorial-prereqs.md @@ -0,0 +1,8 @@ +Kamu harus memiliki klaster Kubernetes, dan perangkat baris perintah `kubectl` +juga harus dikonfigurasikan untuk berkomunikasi dengan klaster kamu. 
Jika kamu +belum punya klaster, kamu dapat membuatnya dengan menggunakan +[Minikube](/docs/setup/learning-environment/minikube/), +atau kamu dapat menggunakan salah satu tempat bermain Kubernetes ini: + +* [Katacoda](https://www.katacoda.com/courses/kubernetes/playground) +* [Bermain dengan Kubernetes](http://labs.play-with-k8s.com/) diff --git a/content/ja/docs/concepts/workloads/controllers/statefulset.md b/content/ja/docs/concepts/workloads/controllers/statefulset.md index e16488b5b882c..f3a5dfd3947b1 100644 --- a/content/ja/docs/concepts/workloads/controllers/statefulset.md +++ b/content/ja/docs/concepts/workloads/controllers/statefulset.md @@ -164,8 +164,9 @@ Kubernetes1.7とそれ以降のバージョンでは、StatefulSetは`.spec.podM `OrderedReady`なPod管理はStatefulSetにおいてデフォルトです。これは[デプロイとスケーリングの保証](#deployment-and-scaling-guarantees)に記載されている項目の振る舞いを実装します。 -#### 並行なPod管理Parallel Pod Management -`並行`なPod管理は、StatefulSetコントローラーに対して、他のPodが起動や停止される前にそのPodが完全に起動し準備完了になるか停止するのを待つことなく、Podが並行に起動もしくは停止するように指示します。 +#### 並行なPod管理 + +`Parallel`なPod管理は、StatefulSetコントローラーに対して、他のPodが起動や停止される前にそのPodが完全に起動し準備完了になるか停止するのを待つことなく、Podが並行に起動もしくは停止するように指示します。 ## アップデートストラテジー diff --git a/content/ja/docs/home/_index.md b/content/ja/docs/home/_index.md index c0f9abdb4c749..1c46fab5b7e89 100644 --- a/content/ja/docs/home/_index.md +++ b/content/ja/docs/home/_index.md @@ -4,7 +4,7 @@ title: Kubernetesドキュメント noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "ホーム" main_menu: true weight: 10 diff --git a/content/ja/docs/reference/glossary/ingress.md b/content/ja/docs/reference/glossary/ingress.md index 56b13b29402e6..55785620f1f94 100755 --- a/content/ja/docs/reference/glossary/ingress.md +++ b/content/ja/docs/reference/glossary/ingress.md @@ -2,7 +2,7 @@ title: Ingress id: ingress date: 2018-04-12 -full_link: /docs/ja/concepts/services-networking/ingress/ +full_link: /ja/docs/concepts/services-networking/ingress/ short_description: > クラスター内のServiceに対する外部からのアクセス(主にHTTP)を管理するAPIオブジェクトです。 diff --git a/content/ja/includes/user-guide-migration-notice.md b/content/ja/includes/user-guide-migration-notice.md deleted file mode 100644 index 366a05907cda5..0000000000000 --- a/content/ja/includes/user-guide-migration-notice.md +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - -
-      NOTICE
-      As of March 14, 2017, the Kubernetes SIG-Docs-Maintainers group have begun migration of the User Guide content as announced previously to the SIG Docs community through the kubernetes-sig-docs group and kubernetes.slack.com #sig-docs channel.
-      The user guides within this section are being refactored into topics within Tutorials, Tasks, and Concepts. Anything that has been moved will have a notice placed in its previous location as well as a link to its new location. The reorganization implements a new table of contents and should improve the documentation's findability and readability for a wider range of audiences.
-      For any questions, please contact: kubernetes-sig-docs@googlegroups.com
      diff --git a/content/ko/_index.html b/content/ko/_index.html index 7b7dd9e059cea..6d9f6ecd1278a 100644 --- a/content/ko/_index.html +++ b/content/ko/_index.html @@ -45,7 +45,7 @@

      150+ 마이크로서비스를 쿠버네티스로 마이그레이션하는


      - Attend KubeCon in Amsterdam in July/August TBD + Attend KubeCon in Amsterdam on August 13-16, 2020


      diff --git a/content/ko/community/_index.html b/content/ko/community/_index.html new file mode 100644 index 0000000000000..015e1a0b44c3b --- /dev/null +++ b/content/ko/community/_index.html @@ -0,0 +1,236 @@ +--- +title: 커뮤니티 +layout: basic +cid: community +--- + +
      +
+ 쿠버네티스 컨퍼런스 갤러리 +
      + +
      +
      +

      사용자, 기여자, 그리고 우리가 함께 구축한 문화로 구성된 쿠버네티스 커뮤니티는 이 오픈소스 프로젝트가 급부상하는 가장 큰 이유 중 하나입니다. 프로젝트 자체가 성장 하고 변화함에 따라 우리의 문화와 가치관이 계속 성장하고 변화하고 있습니다. 우리 모두는 프로젝트의 지속적인 개선과 작업 방식을 위해 함께 노력합니다. +

      우리는 이슈를 제기하고 풀 리퀘스트하고, SIG 미팅과 쿠버네티스 모임 그리고 KubeCon에 참석하고 채택과 혁신을 옹호하며, kubectl get pods 을 실행하고, 다른 수천가지 중요한 방법으로 기여하는 사람들 입니다. 여러분이 어떻게 이 놀라운 공동체의 일부가 될 수 있는지 계속 읽어보세요.

      +
      +
      + + +

      +
      +
      +
+ 쿠버네티스 컨퍼런스 갤러리 +
      +
      +

      +

      +

      행동 강령

      +쿠버네티스 커뮤니티는 존중과 포괄성을 중요시하며 모든 상호작용에 행동 강령을 시행합니다. 이벤트나 회의, 슬랙 또는 다른 커뮤니케이션 메커니즘에서 행동 강령의 위반이 발견되면, 쿠버네티스 행동 강령 위원회 conduct@kubernetes.io에 연락하세요. 모든 보고서는 기밀로 유지됩니다. 여기서 위원회에 대한 내용을 읽을 수 있습니다. +
      + +

      + + +더 읽어 보기 + +
      +
      +
      +
      + + + +
      +

      +

      +

      비디오

      + +
      유튜브에 많이 올라와 있어요. 다양한 주제에 대해 구독하세요.
      + + +
      + + +
      +

      +

      +

      토론

      + +
      우리는 대화를 많이 합니다. 이 모든 플랫폼에서 우리를 찾아 대화에 참여하세요.
      + +
      + +
      +Forum" + +포럼 ▶ + +
      +문서, 스택오버플로우 등을 연결하는 주제 기반 기술 토론하는 곳입니다. +
      +
      + +
      +Twitter + +트위터 ▶ + +
      블로그 게시물, 이벤트, 뉴스, 아이디어의 실시간 소식들입니다. +
      +
      + +
      +GitHub + +깃헙 ▶ + +
      +모든 프로젝트와 이슈 추적, 더불어 코드도 물론이죠. +
      +
      + +
      +Stack Overflow + +스택 오버플로우 ▶ + +
      + 모든 유스 케이스에 대한 기술적인 문제 해결입니다. + +
      +
      + + + +
      +
      +
      +

      +

      +
      +

      다가오는 이벤트

      + {{< upcoming-events >}} +
      +
      + +
      +
      +
      +

      글로벌 커뮤니티

      +전 세계 150개가 넘는 모임이 있고 성장하고 있는 가운데, 현지 kube 사람들을 찾아보세요. 가까운 곳에 없다면, 책임감을 갖고 직접 만들어보세요. +
      + +
      +모임 찾아보기 +
      +
      + +
      +
      + + + + +
      +

      +

      +

      최근 소식들

      + +
      + + +
      +



      +
      + +
      diff --git a/content/ko/community/code-of-conduct.md b/content/ko/community/code-of-conduct.md new file mode 100644 index 0000000000000..5781d0cfd7acd --- /dev/null +++ b/content/ko/community/code-of-conduct.md @@ -0,0 +1,27 @@ +--- +title: 커뮤니티 +layout: basic +cid: community +css: /css/community.css +--- + +
      +

      쿠버네티스 커뮤니티 행동 강령

      + +쿠버네티스는 +CNCF의 행동 강령을 따르고 있습니다. +커밋 214585e +에 따라 CNCF 행동 강령의 내용이 아래에 복제됩니다. +만약 최신 버전이 아닌 경우에는 +이슈를 제기해 주세요. + +이벤트나 회의, 슬랙 또는 다른 커뮤니케이션 +메커니즘에서 행동 강령을 위반한 경우 +쿠버네티스 행동 강령 위원회에 연락하세요. +conduct@kubernetes.io로 이메일을 보내 주세요. +당신의 익명성은 보호됩니다. + +
      +{{< include "/static/cncf-code-of-conduct.md" >}} +
      +
      diff --git a/content/ko/community/static/README.md b/content/ko/community/static/README.md new file mode 100644 index 0000000000000..2732334d6874c --- /dev/null +++ b/content/ko/community/static/README.md @@ -0,0 +1,2 @@ +이 디렉터리의 파일은 다른 소스에서 가져왔습니다. 새 버전으로 +교체하는 경우를 제외하고 직접 편집하지 마십시오. \ No newline at end of file diff --git a/content/ko/community/static/cncf-code-of-conduct.md b/content/ko/community/static/cncf-code-of-conduct.md new file mode 100644 index 0000000000000..6e3c838fe8c2c --- /dev/null +++ b/content/ko/community/static/cncf-code-of-conduct.md @@ -0,0 +1,45 @@ + +## CNCF 커뮤니티 행동 강령 v1.0 + +### 참여자 행동 강령 + +본 프로젝트의 기여자 및 유지 관리자로서, 환영하는 분위기의 공개 커뮤니티를 +육성하기 위하여, 저희는 이슈를 보고하고, 기술 요청을 작성하며, 문서를 업데이트하며, +pull 요청 또는 패치를 제출하고, 다른 활동에 참여하는 +모든 분들을 존중하겠다고 약속드립니다. + +저희는 경험 수준, 성별, 성 정체성과 표현, 성적 지향, +장애, 외양, 신체 크기, 인종, 민족, 나이, 종교, 또는 +국적에 상관 없이 모두가 괴롭힘 없는 환경에서 +본 프로젝트에 참여하도록 최선을 다하고 있습니다. + +참여자에게 금지하는 행동의 예는 다음과 같습니다.: + +* 성적 언어 또는 이미지 사용 +* 개인적인 공격 +* 시비 걸기 또는 모욕/경멸적인 코멘트 +* 공적 및 사적 괴롭힘 +* 분명한 허락을 받지 않은 타인의 사적 정보 출판, + 예를 들어 물리적 또는 전자 주소 +* 다른 비윤리적 또는 비전문적인 행동 + +프로젝트 유지 관리자는 본 행동 강령을 위반하는 코멘트, 협약, 강령, +위키 수정, 이슈와 다른 참여자를 제거, 수정, 삭제할 권한과 +책임을 가집니다. 본 행동 강령을 적용하여, 프로젝트 유지 관리자는 본 +프로젝트를 유지하는 모든 상황에 공정하고 일관적으로 이러한 원칙들을 +적용하기 위해 헌신해야 합니다. 프로젝트 유지 관리자는 +행동 강령이 프로젝트 팀에서 영구적으로 사라지도록 하거나 강요해서는 안됩니다. + +본 행동 강령은 프로젝트 공간과 개인이 프로젝트 또는 +그 커뮤니티를 대표하는 공적 공간에 모두 적용됩니다. + +Kubernetes에서의 폭력, 학대 또는 기타 허용되지 않는 행동 사례는 이메일 주소 를 통해 [Kubernetes 행동 강령 위원회](https://git.k8s.io/community/committee-code-of-conduct)로 신고하실 수 있습니다. 다른 프로젝트는 CNCF 프로젝트 관리자 또는 저희 중재자인 Mishi Choudhary에게 이메일 으로 연락하십시오. + +본 행동강령은 참여자 Contributor Covenant (http://contributor-covenant.org)의 +버전 1.2.0을 적용하였으며, +해당 내용은 여기 http://contributor-covenant.org/version/1/2/0/에서 확인할 수 있습니다. + +### CNCF 커뮤니티 행동 강령 + +CNCF 이벤트는 리눅스 재단의 [행동 강령](https://events.linuxfoundation.org/code-of-conduct/) 을 따르며, 해당 내용은 이벤트 페이지에서 확인할 수 있습니다. 본 강령은 위 정책과 호환할 수 있도록 설계되었으며, 또한 사건에 따라 더 많은 세부 내용을 포함합니다. \ No newline at end of file diff --git a/content/ko/docs/concepts/architecture/controller.md b/content/ko/docs/concepts/architecture/controller.md index e2feb1b4cfcf6..28bee667be596 100644 --- a/content/ko/docs/concepts/architecture/controller.md +++ b/content/ko/docs/concepts/architecture/controller.md @@ -113,17 +113,15 @@ weight: 30 디자인 원리에 따라, 쿠버네티스는 클러스터 상태의 각 특정 측면을 관리하는 많은 컨트롤러를 사용한다. 가장 일반적으로, 특정 컨트롤 루프 (컨트롤러)는 의도한 상태로서 한 종류의 리소스를 사용하고, 의도한 상태로 -만들기 위해 다른 종류의 리소스를 관리한다. +만들기 위해 다른 종류의 리소스를 관리한다. 예를 들어, 잡 컨트롤러는 +잡 오브젝트(새 작업을 발견하기 위해)와 파드 오브젝트(잡을 실행하고, 완료된 시기를 +확인하기 위해)를 추적한다. 이 경우 파드는 잡 컨트롤러가 생성하는 반면, +잡은 다른 컨트롤러가 생성한다. 컨트롤 루프들로 연결 구성된 하나의 모놀리식(monolithic) 집합보다, 간단한 컨트롤러를 여러 개 사용하는 것이 유용하다. 컨트롤러는 실패할 수 있으므로, 쿠버네티스는 이를 허용하도록 디자인되었다. -예를 들어, 잡용 컨트롤러는 잡 오브젝트(새 작업을 -발견하기 위해)와 파드 오브젝트(잡을 실행하고, 완료된 시기를 -확인하기 위해)를 추적한다. 이 경우 파드는 잡 컨트롤러가 생성하는 반면, -잡은 다른 컨트롤러가 생성한다. - {{< note >}} 동일한 종류의 오브젝트를 만들거나 업데이트하는 여러 컨트롤러가 있을 수 있다. 이면에, 쿠버네티스 컨트롤러는 컨트롤 하고 있는 리소스에 @@ -158,5 +156,5 @@ weight: 30 * [쿠버네티스 컨트롤 플레인](/ko/docs/concepts/#쿠버네티스-컨트롤-플레인)에 대해 읽기 * [쿠버네티스 오브젝트](/ko/docs/concepts/#쿠버네티스-오브젝트)의 몇 가지 기본 사항을 알아보자. * [쿠버네티스 API](/ko/docs/concepts/overview/kubernetes-api/)에 대해 더 배워 보자. -* 만약 자신만의 컨트롤러를 작성하기 원한다면, 쿠버네티스 확장하기의 [확장 패턴](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns)을 본다. +* 만약 자신만의 컨트롤러를 작성하기 원한다면, 쿠버네티스 확장하기의 [확장 패턴](/ko/docs/concepts/extend-kubernetes/extend-cluster/#익스텐션-패턴)을 본다. 
{{% /capture %}} diff --git a/content/ko/docs/concepts/architecture/master-node-communication.md b/content/ko/docs/concepts/architecture/master-node-communication.md index 7e70cffde284e..71a645117ec25 100644 --- a/content/ko/docs/concepts/architecture/master-node-communication.md +++ b/content/ko/docs/concepts/architecture/master-node-communication.md @@ -93,13 +93,28 @@ HTTPS 엔드포인트에 의해 제공되는 인증서를 확인하지 않으며 ### SSH 터널 -쿠버네티스는 마스터 -> 클러스터 통신 경로를 보호하는 SSH 터널을 +쿠버네티스는 마스터 → 클러스터 통신 경로를 보호하는 SSH 터널을 지원한다. 이 구성에서 apiserver는 클러스터의 각 노드에서 SSH 터널을 시작하고(포트 22번으로 수신 대기하는 ssh 서버에 연결), 터널을 통해 kubelet, 노드, 파드 또는 서비스로 향하는 모든 트래픽을 전달한다. 이 터널은 실행중인 노드의 트래픽이 외부로 노출되지 않도록 보장한다. -SSH 터널은 현재 사용 중단(deprecated)되었으므로, 무엇을 하고 있는지 알지 못하는 한 터널을 이용하지 말아야 한다. 이 통신 채널의 대체물을 설계 중이다. +SSH 터널은 현재 사용 중단(deprecated)되었으므로, 무엇을 하고 있는지 알지 못하는 +한 터널을 이용하지 말아야 한다. Konnectivity 서비스는 이 통신 +채널을 대체한다. + +### Konnectivity 서비스 +{{< feature-state for_k8s_version="v1.18" state="beta" >}} + +SSH 터널을 대체하는 Konnectivity 서비스는 마스터 → 클러스터 통신을 +위한 TCP 수준의 프록시를 제공한다. Konnectivity는 마스터 +네트워크와 클러스터 네트워크에서 실행되는 Konnectivity 서버와 +에이전트 두 부분으로 구성된다. Konnectivity 에이전트는 +Konnectivity 서버에 대한 연결을 시작하고, 유지한다. +이 연결을 통해 모든 마스터 → 클러스터로의 트래픽이 통과하게 된다. + +클러스터에서 설정하는 방법에 대해서는 [Konnectivity 서비스 구성](/docs/tasks/setup-konnectivity/) +하기를 참고한다. {{% /capture %}} diff --git a/content/ko/docs/concepts/architecture/nodes.md b/content/ko/docs/concepts/architecture/nodes.md index 423004b27efee..1ab0ba857ec96 100644 --- a/content/ko/docs/concepts/architecture/nodes.md +++ b/content/ko/docs/concepts/architecture/nodes.md @@ -69,7 +69,7 @@ kubectl describe node ] ``` -ready 컨디션의 상태가 [kube-controller-manager](/docs/admin/kube-controller-manager/)에 인수로 넘겨지는 `pod-eviction-timeout` 보다 더 길게 `Unknown` 또는 `False`로 유지되는 경우, 노드 상에 모든 파드는 노드 컨트롤러에 의해 삭제되도록 스케줄 된다. 기본 축출 타임아웃 기간은 **5분** 이다. 노드에 접근이 불가할 때와 같은 경우, apiserver는 노드 상의 kubelet과 통신이 불가하다. apiserver와의 통신이 재개될 때까지 파드 삭제에 대한 결정은 kubelet에 전해질 수 없다. 그 사이, 삭제되도록 스케줄 되어진 파드는 분할된 노드 상에서 계속 동작할 수도 있다. +ready 컨디션의 상태가 `pod-eviction-timeout` ([kube-controller-manager](/docs/admin/kube-controller-manager/)에 전달된 인수) 보다 더 길게 `Unknown` 또는 `False`로 유지되는 경우, 노드 상에 모든 파드는 노드 컨트롤러에 의해 삭제되도록 스케줄 된다. 기본 축출 타임아웃 기간은 **5분** 이다. 노드에 접근이 불가할 때와 같은 경우, apiserver는 노드 상의 kubelet과 통신이 불가하다. apiserver와의 통신이 재개될 때까지 파드 삭제에 대한 결정은 kubelet에 전해질 수 없다. 그 사이, 삭제되도록 스케줄 되어진 파드는 분할된 노드 상에서 계속 동작할 수도 있다. 1.5 이전의 쿠버네티스 버전에서는, 노드 컨트롤러가 apiserver로부터 접근 불가한 이러한 파드를 [강제 삭제](/ko/docs/concepts/workloads/pods/pod/#파드-강제-삭제) 시킬 것이다. 그러나 1.5 이상에서는, 노드 컨트롤러가 클러스터 내 동작 중지된 것을 확신할 때까지는 파드를 @@ -80,8 +80,8 @@ ready 컨디션의 상태가 [kube-controller-manager](/docs/admin/kube-controll 노드 수명주기 컨트롤러는 자동으로 컨디션을 나타내는 [테인트(taints)](/docs/concepts/configuration/taint-and-toleration/)를 생성한다. -스케줄러가 파드를 노드에 할당할 때, 스케줄러는 파드가 극복(tolerate)하는 테인트가 -아닌 한, 노드 계정의 테인트를 고려 한다. +스케줄러는 파드를 노드에 할당 할 때 노드의 테인트를 고려한다. +또한 파드는 노드의 테인트를 극복(tolerate)할 수 있는 톨러레이션(toleration)을 가질 수 있다. ### 용량과 할당가능 {#capacity} @@ -180,7 +180,7 @@ kubelet은 `NodeStatus` 와 리스 오브젝트를 생성하고 업데이트 할 보다 훨씬 길다). - kubelet은 10초마다 리스 오브젝트를 생성하고 업데이트 한다(기본 업데이트 주기). 리스 업데이트는 `NodeStatus` 업데이트와는 - 독립적으로 발생한다. + 독립적으로 발생한다. 리스 업데이트가 실패하면 kubelet에 의해 재시도하며 7초로 제한된 지수 백오프를 200 밀리초에서 부터 시작한다. 
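예를 들어, 노드 하트비트에 사용되는 리스 오브젝트는 다음과 같이 조회해 볼 수 있다(이해를 돕기 위한 참고용 예시이다).

```shell
# 노드별 리스 오브젝트는 kube-node-lease 네임스페이스에 존재한다
kubectl get leases --namespace kube-node-lease
```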
#### 안정성 diff --git a/content/ko/docs/concepts/cluster-administration/certificates.md b/content/ko/docs/concepts/cluster-administration/certificates.md new file mode 100644 index 0000000000000..cadcef4b1772e --- /dev/null +++ b/content/ko/docs/concepts/cluster-administration/certificates.md @@ -0,0 +1,252 @@ +--- +title: 인증서 +content_template: templates/concept +weight: 20 +--- + + +{{% capture overview %}} + +클라이언트 인증서로 인증을 사용하는 경우 `easyrsa`, `openssl` 또는 `cfssl` +을 통해 인증서를 수동으로 생성할 수 있다. + +{{% /capture %}} + + +{{% capture body %}} + +### easyrsa + +**easyrsa** 는 클러스터 인증서를 수동으로 생성할 수 있다. + +1. easyrsa3의 패치 버전을 다운로드하여 압축을 풀고, 초기화한다. + + curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz + tar xzf easy-rsa.tar.gz + cd easy-rsa-master/easyrsa3 + ./easyrsa init-pki +1. 새로운 인증 기관(CA)을 생성한다. `--batch` 는 자동 모드를 설정한다. + `--req-cn` 는 CA의 새 루트 인증서에 대한 일반 이름(Common Name (CN))을 지정한다. + + ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass +1. 서버 인증서와 키를 생성한다. + `--subject-alt-name` 인수는 API 서버에 접근이 가능한 IP와 DNS + 이름을 설정한다. `MASTER_CLUSTER_IP` 는 일반적으로 API 서버와 + 컨트롤러 관리자 컴포넌트에 대해 `--service-cluster-ip-range` 인수로 + 지정된 서비스 CIDR의 첫 번째 IP이다. `--days` 인수는 인증서가 만료되는 + 일 수를 설정하는데 사용된다. + 또한, 아래 샘플은 기본 DNS 이름으로 `cluster.local` 을 + 사용한다고 가정한다. + + ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ + "IP:${MASTER_CLUSTER_IP},"\ + "DNS:kubernetes,"\ + "DNS:kubernetes.default,"\ + "DNS:kubernetes.default.svc,"\ + "DNS:kubernetes.default.svc.cluster,"\ + "DNS:kubernetes.default.svc.cluster.local" \ + --days=10000 \ + build-server-full server nopass +1. `pki/ca.crt`, `pki/issued/server.crt` 그리고 `pki/private/server.key` 를 디렉터리에 복사한다. +1. API 서버 시작 파라미터에 다음 파라미터를 채우고 추가한다. + + --client-ca-file=/yourdirectory/ca.crt + --tls-cert-file=/yourdirectory/server.crt + --tls-private-key-file=/yourdirectory/server.key + +### openssl + +**openssl** 은 클러스터 인증서를 수동으로 생성할 수 있다. + +1. ca.key를 2048bit로 생성한다. + + openssl genrsa -out ca.key 2048 +1. ca.key에 따라 ca.crt를 생성한다(인증서 유효 기간을 사용하려면 -days를 사용한다). + + openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt +1. server.key를 2048bit로 생성한다. + + openssl genrsa -out server.key 2048 +1. 인증서 서명 요청(Certificate Signing Request (CSR))을 생성하기 위한 설정 파일을 생성한다. + 파일에 저장하기 전에 꺾쇠 괄호(예: ``)로 + 표시된 값을 실제 값으로 대체한다(예: `csr.conf`). + `MASTER_CLUSTER_IP` 의 값은 이전 하위 섹션에서 + 설명한 대로 API 서버의 서비스 클러스터 IP이다. + 또한, 아래 샘플에서는 `cluster.local` 을 기본 DNS 도메인 + 이름으로 사용하고 있다고 가정한다. + + [ req ] + default_bits = 2048 + prompt = no + default_md = sha256 + req_extensions = req_ext + distinguished_name = dn + + [ dn ] + C = <국가(country)> + ST = <도(state)> + L = <시(city)> + O = <조직(organization)> + OU = <조직 단위(organization unit)> + CN = + + [ req_ext ] + subjectAltName = @alt_names + + [ alt_names ] + DNS.1 = kubernetes + DNS.2 = kubernetes.default + DNS.3 = kubernetes.default.svc + DNS.4 = kubernetes.default.svc.cluster + DNS.5 = kubernetes.default.svc.cluster.local + IP.1 = + IP.2 = + + [ v3_ext ] + authorityKeyIdentifier=keyid,issuer:always + basicConstraints=CA:FALSE + keyUsage=keyEncipherment,dataEncipherment + extendedKeyUsage=serverAuth,clientAuth + subjectAltName=@alt_names +1. 설정 파일을 기반으로 인증서 서명 요청을 생성한다. + + openssl req -new -key server.key -out server.csr -config csr.conf +1. ca.key, ca.crt 그리고 server.csr을 사용해서 서버 인증서를 생성한다. + + openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ + -CAcreateserial -out server.crt -days 10000 \ + -extensions v3_ext -extfile csr.conf +1. 인증서를 본다. 
+ + openssl x509 -noout -text -in ./server.crt + +마지막으로, API 서버 시작 파라미터에 동일한 파라미터를 추가한다. + +### cfssl + +**cfssl** 은 인증서 생성을 위한 또 다른 도구이다. + +1. 아래에 표시된 대로 커맨드 라인 도구를 다운로드하여 압축을 풀고 준비한다. + 사용 중인 하드웨어 아키텍처 및 cfssl 버전에 따라 샘플 + 명령을 조정해야 할 수도 있다. + + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 -o cfssl + chmod +x cfssl + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64 -o cfssljson + chmod +x cfssljson + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64 -o cfssl-certinfo + chmod +x cfssl-certinfo +1. 아티팩트(artifact)를 보유할 디렉터리를 생성하고 cfssl을 초기화한다. + + mkdir cert + cd cert + ../cfssl print-defaults config > config.json + ../cfssl print-defaults csr > csr.json +1. CA 파일을 생성하기 위한 JSON 설정 파일을 `ca-config.json` 예시와 같이 생성한다. + + { + "signing": { + "default": { + "expiry": "8760h" + }, + "profiles": { + "kubernetes": { + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ], + "expiry": "8760h" + } + } + } + } +1. CA 인증서 서명 요청(CSR)을 위한 JSON 설정 파일을 + `ca-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호로 표시된 + 값을 사용하려는 실제 값으로 변경한다. + + { + "CN": "kubernetes", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names":[{ + "C": "<국가(country)>", + "ST": "<도(state)>", + "L": "<시(city)>", + "O": "<조직(organization)>", + "OU": "<조직 단위(organization unit)>" + }] + } +1. CA 키(`ca-key.pem`)와 인증서(`ca.pem`)을 생성한다. + + ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca +1. API 서버의 키와 인증서를 생성하기 위한 JSON 구성파일을 + `server-csr.json` 예시와 같이 생성한다. 꺾쇠 괄호 안의 값을 + 사용하려는 실제 값으로 변경한다. `MASTER_CLUSTER_IP` 는 + 이전 하위 섹션에서 설명한 API 서버의 클러스터 IP이다. + 아래 샘플은 기본 DNS 도메인 이름으로 `cluster.local` 을 + 사용한다고 가정한다. + + { + "CN": "kubernetes", + "hosts": [ + "127.0.0.1", + "", + "", + "kubernetes", + "kubernetes.default", + "kubernetes.default.svc", + "kubernetes.default.svc.cluster", + "kubernetes.default.svc.cluster.local" + ], + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [{ + "C": "<국가(country)>", + "ST": "<도(state)>", + "L": "<시(city)>", + "O": "<조직(organization)>", + "OU": "<조직 단위(organization unit)>" + }] + } +1. API 서버 키와 인증서를 생성하면, 기본적으로 + `server-key.pem` 과 `server.pem` 파일에 각각 저장된다. + + ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ + --config=ca-config.json -profile=kubernetes \ + server-csr.json | ../cfssljson -bare server + + +## 자체 서명된 CA 인증서의 배포 + +클라이언트 노드는 자체 서명된 CA 인증서를 유효한 것으로 인식하지 않을 수 있다. +비-프로덕션 디플로이먼트 또는 회사 방화벽 뒤에서 실행되는 +디플로이먼트의 경우, 자체 서명된 CA 인증서를 모든 클라이언트에 +배포하고 유효한 인증서의 로컬 목록을 새로 고칠 수 있다. + +각 클라이언트에서, 다음 작업을 수행한다. + +```bash +sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt +sudo update-ca-certificates +``` + +``` +Updating certificates in /etc/ssl/certs... +1 added, 0 removed; done. +Running hooks in /etc/ca-certificates/update.d.... +done. +``` + +## 인증서 API + +`certificates.k8s.io` API를 사용해서 +[여기](/docs/tasks/tls/managing-tls-in-a-cluster)에 +설명된 대로 인증에 사용할 x509 인증서를 프로비전 할 수 있다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md index 2ae19a29614b1..9a5aba856bf71 100644 --- a/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -17,7 +17,6 @@ weight: 10 가이드를 고르기 전에, 몇 가지 고려사항이 있다. 
- 단지 자신의 컴퓨터에 쿠버네티스를 테스트를 하는지, 또는 고가용성의 멀티 노드 클러스터를 만들려고 하는지에 따라 니즈에 가장 적절한 배포판을 고르자. - - **만약 고가용성을 만들려고 한다면**, [여러 영역에서의 클러스터](/ko/docs/concepts/cluster-administration/federation/) 설정에 대해 배우자. - [구글 쿠버네티스 엔진](https://cloud.google.com/kubernetes-engine/)과 같은 **호스팅된 쿠버네티스 클러스터** 를 사용할 것인지, **자신의 클러스터에 호스팅할 것인지**? - 클러스터가 **온프레미스** 인지, 또는 **클라우드(IaaS)** 인지? 쿠버네티스는 하이브리드 클러스터를 직접적으로 지원하지는 않는다. 대신에, 사용자는 여러 클러스터를 구성할 수 있다. - **만약 온프레미스에서 쿠버네티스를 구성한다면**, 어떤 [네트워킹 모델](/docs/concepts/cluster-administration/networking/)이 가장 적합한지 고려한다. @@ -35,13 +34,13 @@ weight: 10 * 어떻게 [노드 관리](/ko/docs/concepts/architecture/nodes/)를 하는지 배워보자. -* 공유된 클러스터의 [자원 할당량](/docs/concepts/policy/resource-quotas/)을 어떻게 셋업하고 관리할 것인지 배워보자. +* 공유된 클러스터의 [리소스 쿼터](/ko/docs/concepts/policy/resource-quotas/)를 어떻게 셋업하고 관리할 것인지 배워보자. ## 클러스터 보안 * [인증서](/docs/concepts/cluster-administration/certificates/)는 다른 툴 체인을 이용하여 인증서를 생성하는 방법을 설명한다. -* [쿠버네티스 컨테이너 환경](/ko/docs/concepts/containers/container-environment-variables/)은 쿠버네티스 노드에서 Kubelet에 의해 관리되는 컨테이너 환경에 대해 설명한다. +* [쿠버네티스 컨테이너 환경](/ko/docs/concepts/containers/container-environment/)은 쿠버네티스 노드에서 Kubelet에 의해 관리되는 컨테이너 환경에 대해 설명한다. * [쿠버네티스 API에 대한 접근 제어](/docs/reference/access-authn-authz/controlling-access/)는 사용자와 서비스 계정에 어떻게 권한 설정을 하는지 설명한다. diff --git a/content/ko/docs/concepts/cluster-administration/kubelet-garbage-collection.md b/content/ko/docs/concepts/cluster-administration/kubelet-garbage-collection.md new file mode 100644 index 0000000000000..348b3776f6244 --- /dev/null +++ b/content/ko/docs/concepts/cluster-administration/kubelet-garbage-collection.md @@ -0,0 +1,86 @@ +--- +title: kubelet 가비지(Garbage) 수집 설정하기 +content_template: templates/concept +weight: 70 +--- + +{{% capture overview %}} + +가비지 수집은 사용되지 않는 이미지들과 컨테이너들을 정리하는 kubelet의 유용한 기능이다. Kubelet은 1분마다 컨테이너들에 대하여 가비지 수집을 수행하며, 5분마다 이미지들에 대하여 가비지 수집을 수행한다. + +별도의 가비지 수집 도구들을 사용하는 것은, 이러한 도구들이 존재할 수도 있는 컨테이너들을 제거함으로써 kubelet 을 중단시킬 수도 있으므로 권장하지 않는다. + +{{% /capture %}} + + +{{% capture body %}} + +## 이미지 수집 + +쿠버네티스는 cadvisor와 imageManager를 통하여 모든 이미지들의 +라이프사이클을 관리한다. + +이미지들에 대한 가비지 수집 정책에는 다음 2가지 요소가 고려된다: +`HighThresholdPercent` 와 `LowThresholdPercent`. 임계값을 초과하는 +디스크 사용량은 가비지 수집을 트리거 한다. 가비지 수집은 낮은 입계값에 도달 할 때까지 최근에 가장 적게 사용한 +이미지들을 삭제한다. + +## 컨테이너 수집 + +컨테이너에 대한 가비지 수집 정책은 세 가지 사용자 정의 변수들을 고려한다: `MinAge` 는 컨테이너를 가비지 수집 할 수 있는 최소 연령이다. `MaxPerPodContainer` 는 모든 단일 파드 (UID, 컨테이너 이름) 쌍이 가질 수 있는 +최대 비활성 컨테이너의 수량이다. `MaxContainers` 죽은 컨테이너의 최대 수량이다. 이러한 변수는 `MinAge` 를 0으로 설정하고, `MaxPerPodContainer` 와 `MaxContainers` 를 각각 0 보다 작게 설정해서 비활성화 할 수 있다. + +Kubelet은 미확인, 삭제 또는 앞에서 언급 한 플래그가 설정 한 경계를 벗어나거나, 확인되지 않은 컨테이너에 대해 조치를 취한다. 일반적으로 가장 오래된 컨테이너가 먼저 제거된다. `MaxPerPodContainer` 와 `MaxContainer` 는 파드 당 최대 컨테이너 수 (`MaxPerPodContainer`)가 허용 가능한 범위의 전체 죽은 컨테이너의 수(`MaxContainers`)를 벗어나는 상황에서 잠재적으로 서로 충돌할 수 있습니다. 이러한 상황에서 `MaxPerPodContainer` 가 조정된다: 최악의 시나리오는 `MaxPerPodContainer` 를 1로 다운그레이드하고 가장 오래된 컨테이너를 제거하는 것이다. 추가로, 삭제된 파드가 소유 한 컨테이너는 `MinAge` 보다 오래된 컨테이너가 제거된다. + +kubelet이 관리하지 않는 컨테이너는 컨테이너 가비지 수집 대상이 아니다. + +## 사용자 설정 + +사용자는 후술될 kubelet 플래그들을 통하여 이미지 가비지 수집을 조정하기 위하여 다음의 임계값을 조정할 수 있다. + +1. `image-gc-high-threshold`, 이미지 가비지 수집을 발생시키는 디스크 사용량의 비율로 +기본값은 85% 이다. +2. `image-gc-low-threshold`, 이미지 가비지 수집을 더 이상 시도하지 않는 디스크 사용량의 비율로 +기본값은 80% 이다. + +또한 사용자는 다음의 kubelet 플래그를 통해 가비지 수집 정책을 사용자 정의 할 수 있다. + +1. `minimum-container-ttl-duration`, 종료된 컨테이너가 가비지 수집 +되기 전의 최소 시간. 기본 값은 0 분이며, 이 경우 모든 종료된 컨테이너는 바로 가비지 수집의 대상이 된다. +2. `maximum-dead-containers-per-container`, 컨테이너가 보유할 수 있는 오래된 +인스턴스의 최대 수. 기본 값은 1 이다. +3. 
`maximum-dead-containers`, 글로벌하게 보유 할 컨테이너의 최대 오래된 인스턴스의 최대 수. +기본 값은 -1이며, 이 경우 인스턴스 수의 제한은 없다. + +컨테이너들은 유용성이 만료되기 이전에도 가비지 수집이 될 수 있다. 이러한 컨테이너들은 +문제 해결에 도움이 될 수 있는 로그나 다른 데이터를 포함하고 있을 수 있다. 컨테이너 당 적어도 +1개의 죽은 컨테이너가 허용될 수 있도록 `maximum-dead-containers-per-container` +값을 충분히 큰 값으로 지정하는 것을 권장한다. 동일한 이유로 `maximum-dead-containers` +의 값도 상대적으로 더 큰 값을 권장한다. +자세한 내용은 [해당 이슈](https://github.com/kubernetes/kubernetes/issues/13287)를 참고한다. + + +## 사용 중단(Deprecation) + +문서에 있는 몇 가지 kubelet의 가비지 수집 특징은 향후에 kubelet 축출(eviction) 기능으로 대체될 예정이다. + +포함: + +| 기존 Flag | 신규 Flag | 근거 | +| ------------- | -------- | --------- | +| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | 기존의 축출 신호로 인하여 이미지 가비지 수집이 트리거 될 수 있음 | +| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | 축출 리클레임 기능이 동일한 행동을 수행 | +| `--maximum-dead-containers` | | 컨테이너의 외부 영역에 오래된 로그가 저장되어 사용중단(deprecated)됨 | +| `--maximum-dead-containers-per-container` | | 컨테이너의 외부 영역에 오래된 로그가 저장되어 사용중단(deprecated)됨 | +| `--minimum-container-ttl-duration` | | 컨테이너의 외부 영역에 오래된 로그가 저장되어 사용중단(deprecated)됨 | +| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | 축출이 다른 리소스에 대한 디스크 임계값을 일반화 함 | +| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | 축출이 다른 리소스로의 디스크 압력전환을 일반화 함 | + + +{{% /capture %}} + +{{% capture whatsnext %}} + +자세한 내용은 [리소스 부족 처리 구성](/docs/tasks/administer-cluster/out-of-resource/)를 본다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/cluster-administration/logging.md b/content/ko/docs/concepts/cluster-administration/logging.md new file mode 100644 index 0000000000000..51526e84cb815 --- /dev/null +++ b/content/ko/docs/concepts/cluster-administration/logging.md @@ -0,0 +1,267 @@ +--- +title: 로깅 아키텍처 +content_template: templates/concept +weight: 60 +--- + +{{% capture overview %}} + +애플리케이션과 시스템 로그는 클러스터 내부에서 발생하는 상황을 이해하는 데 도움이 된다. 로그는 문제를 디버깅하고 클러스터 활동을 모니터링하는 데 특히 유용하다. 대부분의 최신 애플리케이션에는 일종의 로깅 메커니즘이 있다. 따라서, 대부분의 컨테이너 엔진은 일종의 로깅을 지원하도록 설계되었다. 컨테이너화된 애플리케이션에 가장 쉽고 가장 널리 사용되는 로깅 방법은 표준 출력과 표준 에러 스트림에 작성하는 것이다. + +그러나, 일반적으로 컨테이너 엔진이나 런타임에서 제공하는 기본 기능은 완전한 로깅 솔루션으로 충분하지 않다. 예를 들어, 컨테이너가 크래시되거나, 파드가 축출되거나, 노드가 종료된 경우에도 여전히 애플리케이션의 로그에 접근하려고 한다. 따라서, 로그는 노드, 파드 또는 컨테이너와는 독립적으로 별도의 스토리지와 라이프사이클을 가져야 한다. 이 개념을 _클러스터-레벨-로깅_ 이라고 한다. 클러스터-레벨 로깅은 로그를 저장하고, 분석하고, 쿼리하기 위해 별도의 백엔드가 필요하다. 쿠버네티스는 로그 데이터를 위한 네이티브 스토리지 솔루션을 제공하지 않지만, 기존의 많은 로깅 솔루션을 쿠버네티스 클러스터에 통합할 수 있다. + +{{% /capture %}} + + +{{% capture body %}} + +클러스터-레벨 로깅 아키텍처는 로깅 백엔드가 +클러스터 내부 또는 외부에 존재한다고 가정하여 설명한다. 클러스터-레벨 +로깅에 관심이 없는 경우에도, 노드에서 로그를 저장하고 +처리하는 방법에 대한 설명이 여전히 유용할 수 있다. + +## 쿠버네티스의 기본 로깅 + +이 섹션에서는, 쿠버네티스에서 표준 출력 스트림으로 데이터를 +출력하는 기본 로깅의 예시를 볼 수 있다. 이 데모에서는 +일부 텍스트를 초당 한 번씩 표준 출력에 쓰는 컨테이너와 함께 +[파드 명세](/examples/debug/counter-pod.yaml)를 사용한다. + +{{< codenew file="debug/counter-pod.yaml" >}} + +이 파드를 실행하려면, 다음의 명령을 사용한다. + +```shell +kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml +``` +출력은 다음과 같다. +``` +pod/counter created +``` + +로그를 가져오려면, 다음과 같이 `kubectl logs` 명령을 사용한다. + +```shell +kubectl logs counter +``` +출력은 다음과 같다. +``` +0: Mon Jan 1 00:00:00 UTC 2001 +1: Mon Jan 1 00:00:01 UTC 2001 +2: Mon Jan 1 00:00:02 UTC 2001 +... +``` + +컨테이너가 크래시된 경우, `kubectl logs` 의 `--previous` 플래그를 사용해서 컨테이너의 이전 인스턴스에 대한 로그를 검색할 수 있다. 파드에 여러 컨테이너가 있는 경우, 명령에 컨테이너 이름을 추가하여 접근하려는 컨테이너 로그를 지정해야 한다. 자세한 내용은 [`kubectl logs` 문서](/docs/reference/generated/kubectl/kubectl-commands#logs)를 참조한다. 
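참고로, 위에서 설명한 옵션들을 조합하면 다음과 같이 사용할 수 있다(아래의 `count` 는 설명을 위해 가정한 컨테이너 이름이다).

```shell
# 크래시된 컨테이너의 이전 인스턴스 로그를 확인한다
# (count 는 예시로 가정한 컨테이너 이름이다)
kubectl logs counter -c count --previous
```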
+ +## 노드 레벨에서의 로깅 + +![노드 레벨 로깅](/images/docs/user-guide/logging/logging-node-level.png) + +컨테이너화된 애플리케이션이 `stdout(표준 출력)` 및 `stderr(표준 에러)` 에 쓰는 모든 것은 컨테이너 엔진에 의해 어딘가에서 처리와 리디렉션 된다. 예를 들어, 도커 컨테이너 엔진은 이 두 스트림을 [로깅 드라이버](https://docs.docker.com/engine/admin/logging/overview)로 리디렉션 한다. 이 드라이버는 쿠버네티스에서 json 형식의 파일에 작성하도록 구성된다. + +{{< note >}} +도커 json 로깅 드라이버는 각 라인을 별도의 메시지로 취급한다. 도커 로깅 드라이버를 사용하는 경우, 멀티-라인 메시지를 직접 지원하지 않는다. 로깅 에이전트 레벨 이상에서 멀티-라인 메시지를 처리해야 한다. +{{< /note >}} + +기본적으로, 컨테이너가 다시 시작되면, kubelet은 종료된 컨테이너 하나를 로그와 함께 유지한다. 파드가 노드에서 축출되면, 해당하는 모든 컨테이너도 로그와 함께 축출된다. + +노드-레벨 로깅에서 중요한 고려 사항은 로그 로테이션을 구현하여, +로그가 노드에서 사용 가능한 모든 스토리지를 사용하지 않도록 하는 것이다. 쿠버네티스는 +현재 로그 로테이션에 대한 의무는 없지만, 디플로이먼트 도구로 +이를 해결하기 위한 솔루션을 설정해야 한다. +예를 들어, `kube-up.sh` 스크립트에 의해 배포된 쿠버네티스 클러스터에는, +매시간 실행되도록 구성된 [`logrotate`](https://linux.die.net/man/8/logrotate) +도구가 있다. 예를 들어, 도커의 `log-opt` 를 사용하여 애플리케이션의 로그를 +자동으로 로테이션을 하도록 컨테이너 런타임을 설정할 수도 있다. +`kube-up.sh` 스크립트에서, 후자의 접근 방식은 GCP의 COS 이미지에 사용되며, +전자의 접근 방식은 다른 환경에서 사용된다. 두 경우 모두, +기본적으로 로그 파일이 10MB를 초과하면 로테이션이 되도록 구성된다. + +예를 들어, `kube-up.sh` 가 해당 [스크립트][cosConfigureHelper]에서 +GCP의 COS 이미지 로깅을 설정하는 방법에 대한 자세한 정보를 찾을 수 있다. + +기본 로깅 예제에서와 같이 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs)를 +실행하면, 노드의 kubelet이 요청을 처리하고 +로그 파일에서 직접 읽은 다음, 응답의 내용을 반환한다. + +{{< note >}} +현재, 일부 외부 시스템에서 로테이션을 수행한 경우, +`kubectl logs` 를 통해 최신 로그 파일의 내용만 +사용할 수 있다. 예를 들어, 10MB 파일이 있으면, `logrotate` 가 +로테이션을 수행하고 두 개의 파일이 생긴다(크기가 10MB인 파일 하나와 비어있는 파일). +그 후 `kubectl logs` 는 빈 응답을 반환한다. +{{< /note >}} + +[cosConfigureHelper]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh + +### 시스템 컴포넌트 로그 + +시스템 컴포넌트에는 컨테이너에서 실행되는 것과 컨테이너에서 실행되지 않는 두 가지 유형이 있다. +예를 들면 다음과 같다. + +* 쿠버네티스 스케줄러와 kube-proxy는 컨테이너에서 실행된다. +* Kubelet과 컨테이너 런타임(예: 도커)은 컨테이너에서 실행되지 않는다. + +systemd를 사용하는 시스템에서, kubelet과 컨테이너 런타임은 journald에 작성한다. +systemd를 사용하지 않으면, `/var/log` 디렉터리의 `.log` 파일에 작성한다. +컨테이너 내부의 시스템 컴포넌트는 기본 로깅 메커니즘을 무시하고, +항상 `/var/log` 디렉터리에 기록한다. 그것은 [klog][klog] +로깅 라이브러리를 사용한다. [로깅에 대한 개발 문서](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)에서 +해당 컴포넌트의 로깅 심각도(severity)에 대한 규칙을 찾을 수 있다. + +컨테이너 로그와 마찬가지로, `/var/log` 디렉터리의 시스템 컴포넌트 로그를 +로테이트해야 한다. `kube-up.sh` 스크립트로 구축한 쿠버네티스 클러스터에서 +로그는 매일 또는 크기가 100MB를 초과하면 +`logrotate` 도구에 의해 로테이트가 되도록 구성된다. + +[klog]: https://github.com/kubernetes/klog + +## 클러스터 레벨 로깅 아키텍처 + +쿠버네티스는 클러스터-레벨 로깅을 위한 네이티브 솔루션을 제공하지 않지만, 고려해야 할 몇 가지 일반적인 접근 방법을 고려할 수 있다. 여기 몇 가지 옵션이 있다. + +* 모든 노드에서 실행되는 노드-레벨 로깅 에이전트를 사용한다. +* 애플리케이션 파드에 로깅을 위한 전용 사이드카 컨테이너를 포함한다. +* 애플리케이션 내에서 로그를 백엔드로 직접 푸시한다. + +### 노드 로깅 에이전트 사용 + +![노드 레벨 로깅 에이전트 사용](/images/docs/user-guide/logging/logging-with-node-agent.png) + +각 노드에 _노드-레벨 로깅 에이전트_ 를 포함시켜 클러스터-레벨 로깅을 구현할 수 있다. 로깅 에이전트는 로그를 노출하거나 로그를 백엔드로 푸시하는 전용 도구이다. 일반적으로, 로깅 에이전트는 해당 노드의 모든 애플리케이션 컨테이너에서 로그 파일이 있는 디렉터리에 접근할 수 있는 컨테이너이다. + +로깅 에이전트는 모든 노드에서 실행해야 하므로, 이를 데몬셋 레플리카, 매니페스트 파드 또는 노드의 전용 네이티브 프로세스로 구현하는 것이 일반적이다. 그러나 후자의 두 가지 접근법은 더 이상 사용되지 않으며 절대 권장하지 않는다. + +쿠버네티스 클러스터는 노드-레벨 로깅 에이전트를 사용하는 것이 가장 일반적이며 권장되는 방법으로, 이는 노드별 하나의 에이전트만 생성하며, 노드에서 실행되는 애플리케이션을 변경할 필요가 없기 때문이다. 그러나, 노드-레벨 로깅은 _애플리케이션의 표준 출력과 표준 에러에 대해서만 작동한다_ . + +쿠버네티스는 로깅 에이전트를 지정하지 않지만, 쿠버네티스 릴리스에는 두 가지 선택적인 로깅 에이전트(Google 클라우드 플랫폼과 함께 사용하기 위한 [스택드라이버(Stackdriver) 로깅](/docs/user-guide/logging/stackdriver)과 [엘라스틱서치(Elasticsearch)](/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/))가 패키지로 함께 제공된다. 전용 문서에서 자세한 정보와 지침을 찾을 수 있다. 
두 가지 다 사용자 정의 구성이 된 [fluentd](http://www.fluentd.org/)를 에이전트로써 노드에서 사용한다. + +### 로깅 에이전트와 함께 사이드카 컨테이너 사용 + +다음 중 한 가지 방법으로 사이드카 컨테이너를 사용할 수 있다. + +* 사이드카 컨테이너는 애플리케이션 로그를 자체 `stdout` 으로 스트리밍한다. +* 사이드카 컨테이너는 로깅 에이전트를 실행하며, 애플리케이션 컨테이너에서 로그를 가져오도록 구성된다. + +#### 사이드카 컨테이너 스트리밍 + +![스트리밍 컨테이너가 있는 사이드카 컨테이너](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png) + +사이드카 컨테이너를 자체 `stdout` 및 `stderr` 스트림으로 +스트리밍하면, 각 노드에서 이미 실행 중인 kubelet과 로깅 에이전트를 +활용할 수 있다. 사이드카 컨테이너는 파일, 소켓 +또는 journald에서 로그를 읽는다. 각 개별 사이드카 컨테이너는 자체 `stdout` +또는 `stderr` 스트림에 로그를 출력한다. + +이 방법을 사용하면 애플리케이션의 다른 부분에서 여러 로그 스트림을 +분리할 수 ​​있고, 이 중 일부는 `stdout` 또는 `stderr` 에 +작성하기 위한 지원이 부족할 수 있다. 로그를 리디렉션하는 로직은 +미미하기 때문에, 큰 오버헤드가 거의 없다. 또한, +`stdout` 및 `stderr` 가 kubelet에서 처리되므로, `kubectl logs` 와 같은 +빌트인 도구를 사용할 수 있다. + +다음의 예를 고려해보자. 파드는 단일 컨테이너를 실행하고, 컨테이너는 +서로 다른 두 가지 형식을 사용하여, 서로 다른 두 개의 로그 파일에 기록한다. 파드에 대한 +구성 파일은 다음과 같다. + +{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}} + +두 컴포넌트를 컨테이너의 `stdout` 스트림으로 리디렉션한 경우에도, 동일한 로그 +스트림에 서로 다른 형식의 로그 항목을 갖는 것은 +알아보기 힘들다. 대신, 두 개의 사이드카 컨테이너를 도입할 수 있다. 각 사이드카 +컨테이너는 공유 볼륨에서 특정 로그 파일을 테일(tail)한 다음 로그를 +자체 `stdout` 스트림으로 리디렉션할 수 있다. + +다음은 사이드카 컨테이너가 두 개인 파드에 대한 구성 파일이다. + +{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}} + +이제 이 파드를 실행하면, 다음의 명령을 실행하여 각 로그 스트림에 +개별적으로 접근할 수 있다. + +```shell +kubectl logs counter count-log-1 +``` +``` +0: Mon Jan 1 00:00:00 UTC 2001 +1: Mon Jan 1 00:00:01 UTC 2001 +2: Mon Jan 1 00:00:02 UTC 2001 +... +``` + +```shell +kubectl logs counter count-log-2 +``` +``` +Mon Jan 1 00:00:00 UTC 2001 INFO 0 +Mon Jan 1 00:00:01 UTC 2001 INFO 1 +Mon Jan 1 00:00:02 UTC 2001 INFO 2 +... +``` + +클러스터에 설치된 노드-레벨 에이전트는 추가 구성없이 +자동으로 해당 로그 스트림을 선택한다. 원한다면, 소스 컨테이너에 +따라 로그 라인을 파싱(parse)하도록 에이전트를 구성할 수 있다. + +참고로, CPU 및 메모리 사용량이 낮음에도 불구하고(cpu에 대한 몇 밀리코어의 +요구와 메모리에 대한 몇 메가바이트의 요구), 로그를 파일에 기록한 다음 +`stdout` 으로 스트리밍하면 디스크 사용량은 두 배가 될 수 있다. 단일 파일에 +쓰는 애플리케이션이 있는 경우, 일반적으로 스트리밍 +사이드카 컨테이너 방식을 구현하는 대신 `/dev/stdout` 을 대상으로 +설정하는 것이 더 낫다. + +사이드카 컨테이너를 사용하여 애플리케이션 자체에서 로테이션할 수 없는 +로그 파일을 로테이션할 수도 있다. 이 방법의 예로는 +정기적으로 logrotate를 실행하는 작은 컨테이너를 두는 것이다. +그러나, `stdout` 및 `stderr` 을 직접 사용하고 로테이션과 +유지 정책을 kubelet에 두는 것이 권장된다. + +#### 로깅 에이전트가 있는 사이드카 컨테이너 + +![로깅 에이전트가 있는 사이드카 컨테이너](/images/docs/user-guide/logging/logging-with-sidecar-agent.png) + +노드-레벨 로깅 에이전트가 상황에 맞게 충분히 유연하지 않은 경우, +애플리케이션과 함께 실행하도록 특별히 구성된 별도의 로깅 에이전트를 사용하여 +사이드카 컨테이너를 생성할 수 있다. + +{{< note >}} +사이드카 컨테이너에서 로깅 에이전트를 사용하면 +상당한 리소스 소비로 이어질 수 있다. 게다가, kubelet에 의해 +제어되지 않기 때문에, `kubectl logs` 명령을 사용하여 해당 로그에 +접근할 수 없다. +{{< /note >}} + +예를 들어, 로깅 에이전트로 fluentd를 사용하는 [스택드라이버](/docs/tasks/debug-application-cluster/logging-stackdriver/)를 +사용할 수 있다. 여기에 이 방법을 구현하는 데 사용할 수 있는 +두 가지 구성 파일이 있다. 첫 번째 파일에는 +fluentd를 구성하기 위한 [컨피그맵](/docs/tasks/configure-pod-container/configure-pod-configmap/)이 포함되어 있다. + +{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}} + +{{< note >}} +fluentd의 구성은 이 문서의 범위를 벗어난다. +fluentd를 구성하는 것에 대한 자세한 내용은, +[공식 fluentd 문서](http://docs.fluentd.org/)를 참고한다. +{{< /note >}} + +두 번째 파일은 fluentd가 실행되는 사이드카 컨테이너가 있는 파드를 설명한다. +파드는 fluentd가 구성 데이터를 가져올 수 있는 볼륨을 마운트한다. + +{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}} + +얼마 후 스택드라이버 인터페이스에서 로그 메시지를 찾을 수 있다. + +이것은 단지 예시일 뿐이며 실제로 애플리케이션 컨테이너 내의 +모든 소스에서 읽은 fluentd를 로깅 에이전트로 대체할 수 있다는 것을 +기억한다. + +### 애플리케이션에서 직접 로그 노출 + +![애플리케이션에서 직접 로그 노출](/images/docs/user-guide/logging/logging-from-application.png) + +모든 애플리케이션에서 직접 로그를 노출하거나 푸시하여 클러스터-레벨 로깅을 +구현할 수 있다. 
그러나, 이러한 로깅 메커니즘의 구현은 +쿠버네티스의 범위를 벗어난다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/cluster-administration/manage-deployment.md b/content/ko/docs/concepts/cluster-administration/manage-deployment.md new file mode 100644 index 0000000000000..ea5396350f569 --- /dev/null +++ b/content/ko/docs/concepts/cluster-administration/manage-deployment.md @@ -0,0 +1,457 @@ +--- +title: 리소스 관리 +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +애플리케이션을 배포하고 서비스를 통해 노출했다. 이제 무엇을 해야 할까? 쿠버네티스는 확장과 업데이트를 포함하여, 애플리케이션 배포를 관리하는 데 도움이 되는 여러 도구를 제공한다. 더 자세히 설명할 기능 중에는 [구성 파일](/ko/docs/concepts/configuration/overview/)과 [레이블](/ko/docs/concepts/overview/working-with-objects/labels/)이 있다. + +{{% /capture %}} + + +{{% capture body %}} + +## 리소스 구성 구성하기 + +많은 애플리케이션들은 디플로이먼트 및 서비스와 같은 여러 리소스를 필요로 한다. 여러 리소스의 관리는 동일한 파일에 그룹화하여 단순화할 수 있다(YAML에서 `---` 로 구분). 예를 들면 다음과 같다. + +{{< codenew file="application/nginx-app.yaml" >}} + +단일 리소스와 동일한 방식으로 여러 리소스를 생성할 수 있다. + +```shell +kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml +``` + +```shell +service/my-nginx-svc created +deployment.apps/my-nginx created +``` + +리소스는 파일에 표시된 순서대로 생성된다. 따라서, 스케줄러가 디플로이먼트와 같은 컨트롤러에서 생성한 서비스와 관련된 파드를 분산시킬 수 있으므로, 서비스를 먼저 지정하는 것이 가장 좋다. + +`kubectl apply` 는 여러 개의 `-f` 인수도 허용한다. + +```shell +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +``` + +그리고 개별 파일 대신 또는 추가로 디렉터리를 지정할 수 있다. + +```shell +kubectl apply -f https://k8s.io/examples/application/nginx/ +``` + +`kubectl` 은 접미사가 `.yaml`, `.yml` 또는 `.json` 인 파일을 읽는다. + +동일한 마이크로서비스 또는 애플리케이션 티어(tier)와 관련된 리소스를 동일한 파일에 배치하고, 애플리케이션과 연관된 모든 파일을 동일한 디렉터리에 그룹화하는 것이 좋다. 애플리케이션의 티어가 DNS를 사용하여 서로 바인딩되면, 스택의 모든 컴포넌트를 일괄로 배포할 수 있다. + +URL을 구성 소스로 지정할 수도 있다. 이는 github에 체크인된 구성 파일에서 직접 배포하는 데 편리하다. + +```shell +kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml +``` + +```shell +deployment.apps/my-nginx created +``` + +## kubectl에서의 대량 작업 + +`kubectl` 이 대량으로 수행할 수 있는 작업은 리소스 생성만이 아니다. 또한 다른 작업을 수행하기 위해, 특히 작성한 동일한 리소스를 삭제하기 위해 구성 파일에서 리소스 이름을 추출할 수도 있다. + +```shell +kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml +``` + +```shell +deployment.apps "my-nginx" deleted +service "my-nginx-svc" deleted +``` + +두 개의 리소스만 있는 경우, 리소스/이름 구문을 사용하여 커맨드 라인에서 둘다 모두 쉽게 지정할 수도 있다. + +```shell +kubectl delete deployments/my-nginx services/my-nginx-svc +``` + +리소스가 많을 경우, `-l` 또는 `--selector` 를 사용하여 지정된 셀렉터(레이블 쿼리)를 지정하여 레이블별로 리소스를 필터링하는 것이 더 쉽다. + +```shell +kubectl delete deployment,services -l app=nginx +``` + +```shell +deployment.apps "my-nginx" deleted +service "my-nginx-svc" deleted +``` + +`kubectl` 은 입력을 받아들이는 것과 동일한 구문으로 리소스 이름을 출력하므로, `$()` 또는 `xargs` 를 사용하여 작업을 쉽게 연결할 수 있다. + +```shell +kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service) +``` + +```shell +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-nginx-svc LoadBalancer 10.0.0.208 80/TCP 0s +``` + +위의 명령을 사용하여, 먼저 `examples/application/nginx/` 에 리소스를 생성하고 `-o name` 출력 형식으로 생성한 리소스를 출력한다(각 리소스를 resource/name으로 출력). +그런 다음 "service"만 `grep` 한 다음 `kubectl get` 으로 출력한다. + +특정 디렉터리 내의 여러 서브 디렉터리에서 리소스를 구성하는 경우, `--filename,-f` 플래그와 함께 `--recursive` 또는 `-R` 을 지정하여, 서브 디렉터리에 대한 작업을 재귀적으로 수행할 수도 있다. + +예를 들어, 리소스 유형별로 구성된 개발 환경에 필요한 모든 {{< glossary_tooltip text="매니페스트" term_id="manifest" >}}를 보유하는 `project/k8s/development` 디렉터리가 있다고 가정하자. 
+ +``` +project/k8s/development +├── configmap +│   └── my-configmap.yaml +├── deployment +│   └── my-deployment.yaml +└── pvc + └── my-pvc.yaml +``` + +기본적으로, `project/k8s/development` 에서 대량 작업을 수행하면, 서브 디렉터리를 처리하지 않고, 디렉터리의 첫 번째 레벨에서 중지된다. 다음 명령을 사용하여 이 디렉터리에 리소스를 생성하려고 하면, 오류가 발생할 것이다. + +```shell +kubectl apply -f project/k8s/development +``` + +```shell +error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin) +``` + +대신, 다음과 같이 `--filename,-f` 플래그와 함께 `--recursive` 또는 `-R` 플래그를 지정한다. + +```shell +kubectl apply -f project/k8s/development --recursive +``` + +```shell +configmap/my-config created +deployment.apps/my-deployment created +persistentvolumeclaim/my-pvc created +``` + +`--recursive` 플래그는 `kubectl {create,get,delete,describe,rollout}` 등과 같이 `--filename,-f` 플래그를 허용하는 모든 작업에서 작동한다. + +`--recursive` 플래그는 여러 개의 `-f` 인수가 제공될 때도 작동한다. + +```shell +kubectl apply -f project/k8s/namespaces -f project/k8s/development --recursive +``` + +```shell +namespace/development created +namespace/staging created +configmap/my-config created +deployment.apps/my-deployment created +persistentvolumeclaim/my-pvc created +``` + +`kubectl` 에 대해 더 자세히 알고 싶다면, [kubectl 개요](/docs/reference/kubectl/overview/)를 참조한다. + +## 효과적인 레이블 사용 + +지금까지 사용한 예는 모든 리소스에 최대 한 개의 레이블만 적용하는 것이었다. 세트를 서로 구별하기 위해 여러 레이블을 사용해야 하는 많은 시나리오가 있다. + +예를 들어, 애플리케이션마다 `app` 레이블에 다른 값을 사용하지만, [방명록 예제](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)와 같은 멀티-티어 애플리케이션은 각 티어를 추가로 구별해야 한다. 프론트엔드는 다음의 레이블을 가질 수 있다. + +```yaml + labels: + app: guestbook + tier: frontend +``` + +Redis 마스터와 슬레이브는 프론트엔드와 다른 `tier` 레이블을 가지지만, 아마도 추가로 `role` 레이블을 가질 것이다. + +```yaml + labels: + app: guestbook + tier: backend + role: master +``` + +그리고 + +```yaml + labels: + app: guestbook + tier: backend + role: slave +``` + +레이블은 레이블로 지정된 차원에 따라 리소스를 분할하고 사용할 수 있게 한다. + +```shell +kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml +kubectl get pods -Lapp -Ltier -Lrole +``` + +```shell +NAME READY STATUS RESTARTS AGE APP TIER ROLE +guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend +guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend +guestbook-fe-jpy62 1/1 Running 0 1m guestbook frontend +guestbook-redis-master-5pg3b 1/1 Running 0 1m guestbook backend master +guestbook-redis-slave-2q2yf 1/1 Running 0 1m guestbook backend slave +guestbook-redis-slave-qgazl 1/1 Running 0 1m guestbook backend slave +my-nginx-divi2 1/1 Running 0 29m nginx +my-nginx-o0ef1 1/1 Running 0 29m nginx +``` + +```shell +kubectl get pods -lapp=guestbook,role=slave +``` +```shell +NAME READY STATUS RESTARTS AGE +guestbook-redis-slave-2q2yf 1/1 Running 0 3m +guestbook-redis-slave-qgazl 1/1 Running 0 3m +``` + +## 카나리(canary) 디플로이먼트 + +여러 레이블이 필요한 또 다른 시나리오는 동일한 컴포넌트의 다른 릴리스 또는 구성의 디플로이먼트를 구별하는 것이다. 새 릴리스가 완전히 롤아웃되기 전에 실제 운영 트래픽을 수신할 수 있도록 새로운 애플리케이션 릴리스(파드 템플리트의 이미지 태그를 통해 지정됨)의 *카나리* 를 이전 릴리스와 나란히 배포하는 것이 일반적이다. + +예를 들어, `track` 레이블을 사용하여 다른 릴리스를 구별할 수 있다. + +기본(primary), 안정(stable) 릴리스에는 값이 `stable` 인 `track` 레이블이 있다. + +```yaml + name: frontend + replicas: 3 + ... + labels: + app: guestbook + tier: frontend + track: stable + ... + image: gb-frontend:v3 +``` + +그런 다음 서로 다른 값(예: `canary`)으로 `track` 레이블을 전달하는 방명록 프론트엔드의 새 릴리스를 생성하여, 두 세트의 파드가 겹치지 않도록 할 수 있다. + +```yaml + name: frontend-canary + replicas: 1 + ... + labels: + app: guestbook + tier: frontend + track: canary + ... 
+ image: gb-frontend:v4 +``` + + +프론트엔드 서비스는 레이블의 공통 서브셋을 선택하여(즉, `track` 레이블 생략) 두 레플리카 세트에 걸쳐 있으므로, 트래픽이 두 애플리케이션으로 리디렉션된다. + +```yaml + selector: + app: guestbook + tier: frontend +``` + +안정 및 카나리 릴리스의 레플리카 수를 조정하여 실제 운영 트래픽을 수신할 각 릴리스의 비율을 결정한다(이 경우, 3:1). +확신이 들면, 안정 릴리스의 track을 새로운 애플리케이션 릴리스로 업데이트하고 카나리를 제거할 수 있다. + +보다 구체적인 예시는, [Ghost 배포에 대한 튜토리얼](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary)을 확인한다. + +## 레이블 업데이트 + +새로운 리소스를 만들기 전에 기존 파드 및 기타 리소스의 레이블을 다시 지정해야 하는 경우가 있다. 이것은 `kubectl label` 로 수행할 수 있다. +예를 들어, 모든 nginx 파드에 프론트엔드 티어로 레이블을 지정하려면, 간단히 다음과 같이 실행한다. + +```shell +kubectl label pods -l app=nginx tier=fe +``` + +```shell +pod/my-nginx-2035384211-j5fhi labeled +pod/my-nginx-2035384211-u2c7e labeled +pod/my-nginx-2035384211-u3t6x labeled +``` + +먼저 "app=nginx" 레이블이 있는 모든 파드를 필터링한 다음, "tier=fe" 레이블을 지정한다. +방금 레이블을 지정한 파드를 보려면, 다음을 실행한다. + +```shell +kubectl get pods -l app=nginx -L tier +``` +```shell +NAME READY STATUS RESTARTS AGE TIER +my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe +my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe +my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe +``` + +그러면 파드 티어의 추가 레이블 열(`-L` 또는 `--label-columns` 로 지정)과 함께, 모든 "app=nginx" 파드가 출력된다. + +더 자세한 내용은, [레이블](/ko/docs/concepts/overview/working-with-objects/labels/) 및 [kubectl label](/docs/reference/generated/kubectl/kubectl-commands/#label)을 참고하길 바란다. + +## 어노테이션 업데이트 + +때로는 어노테이션을 리소스에 첨부하려고 할 수도 있다. 어노테이션은 도구, 라이브러리 등과 같은 API 클라이언트가 검색할 수 있는 임의의 비-식별 메타데이터이다. 이는 `kubectl annotate` 으로 수행할 수 있다. 예를 들면 다음과 같다. + +```shell +kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx' +kubectl get pods my-nginx-v4-9gw19 -o yaml +``` +```shell +apiVersion: v1 +kind: pod +metadata: + annotations: + description: my frontend running nginx +... +``` + +더 자세한 내용은, [어노테이션](/ko/docs/concepts/overview/working-with-objects/annotations/) 및 [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands/#annotate) 문서를 참고하길 바란다. + +## 애플리케이션 스케일링 + +애플리케이션의 로드가 증가하거나 축소되면, `kubectl` 을 사용하여 쉽게 스케일링할 수 있다. 예를 들어, nginx 레플리카 수를 3에서 1로 줄이려면, 다음을 수행한다. + +```shell +kubectl scale deployment/my-nginx --replicas=1 +``` +```shell +deployment.extensions/my-nginx scaled +``` + +이제 디플로이먼트가 관리하는 파드가 하나만 있다. + +```shell +kubectl get pods -l app=nginx +``` +```shell +NAME READY STATUS RESTARTS AGE +my-nginx-2035384211-j5fhi 1/1 Running 0 30m +``` + +시스템이 필요에 따라 1에서 3까지의 범위에서 nginx 레플리카 수를 자동으로 선택하게 하려면, 다음을 수행한다. + +```shell +kubectl autoscale deployment/my-nginx --min=1 --max=3 +``` +```shell +horizontalpodautoscaler.autoscaling/my-nginx autoscaled +``` + +이제 nginx 레플리카가 필요에 따라 자동으로 확장되거나 축소된다. + +더 자세한 내용은, [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands/#scale), [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) 및 [horizontal pod autoscaler](/ko/docs/tasks/run-application/horizontal-pod-autoscale/) 문서를 참고하길 바란다. + + +## 리소스 인플레이스(in-place) 업데이트 + +때로는 자신이 만든 리소스를 필요한 부분만, 중단없이 업데이트해야 할 때가 있다. + +### kubectl apply + +구성 파일 셋을 소스 제어에서 유지하는 것이 좋으며([코드로서의 구성](http://martinfowler.com/bliki/InfrastructureAsCode.html) 참조), +그렇게 하면 구성하는 리소스에 대한 코드와 함께 버전을 지정하고 유지할 수 있다. +그런 다음, [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply)를 사용하여 구성 변경 사항을 클러스터로 푸시할 수 있다. + +이 명령은 푸시하려는 구성의 버전을 이전 버전과 비교하고 지정하지 않은 속성에 대한 자동 변경 사항을 덮어쓰지 않은 채 수정한 변경 사항을 적용한다. 
+ +```shell +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +deployment.apps/my-nginx configured +``` + +참고로 `kubectl apply` 는 이전의 호출 이후 구성의 변경 사항을 판별하기 위해 리소스에 어노테이션을 첨부한다. 호출되면, `kubectl apply` 는 리소스를 수정하는 방법을 결정하기 위해, 이전 구성과 제공된 입력 및 리소스의 현재 구성 간에 3-way diff를 수행한다. + +현재, 이 어노테이션 없이 리소스가 생성되므로, `kubectl apply` 의 첫 번째 호출은 제공된 입력과 리소스의 현재 구성 사이의 2-way diff로 대체된다. 이 첫 번째 호출 중에는, 리소스를 생성할 때 설정된 특성의 삭제를 감지할 수 없다. 이러한 이유로, 그 특성들을 삭제하지 않는다. + +`kubectl apply` 에 대한 모든 후속 호출, 그리고 `kubectl replace` 및 `kubectl edit` 와 같이 구성을 수정하는 다른 명령은, 어노테이션을 업데이트하여, `kubectl apply` 에 대한 후속 호출이 3-way diff를 사용하여 삭제를 감지하고 수행할 수 있도록 한다. + +### kubectl edit + +또는, `kubectl edit`로 리소스를 업데이트할 수도 있다. + +```shell +kubectl edit deployment/my-nginx +``` + +이것은 먼저 리소스를 `get` 하여, 텍스트 편집기에서 편집한 다음, 업데이트된 버전으로 리소스를 `apply` 하는 것과 같다. + +```shell +kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml +vi /tmp/nginx.yaml +# 편집한 다음, 파일을 저장한다. + +kubectl apply -f /tmp/nginx.yaml +deployment.apps/my-nginx configured + +rm /tmp/nginx.yaml +``` + +이를 통해 보다 중요한 변경을 더 쉽게 ​​수행할 수 있다. 참고로 `EDITOR` 또는 `KUBE_EDITOR` 환경 변수를 사용하여 편집기를 지정할 수 있다. + +더 자세한 내용은, [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) 문서를 참고하길 바란다. + +### kubectl patch + +`kubectl patch` 를 사용하여 API 오브젝트를 인플레이스 업데이트할 수 있다. 이 명령은 JSON 패치, +JSON 병합 패치 그리고 전략적 병합 패치를 지원한다. +[kubectl patch를 사용한 인플레이스 API 오브젝트 업데이트](/docs/tasks/run-application/update-api-object-kubectl-patch/)와 +[kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch)를 +참조한다. + +## 파괴적(disruptive) 업데이트 + +경우에 따라, 한 번 초기화하면 업데이트할 수 없는 리소스 필드를 업데이트해야 하거나, 디플로이먼트에서 생성된 손상된 파드를 고치는 등의 재귀적 변경을 즉시 원할 수도 있다. 이러한 필드를 변경하려면, `replace --force` 를 사용하여 리소스를 삭제하고 다시 만든다. 이 경우, 원래 구성 파일을 간단히 수정할 수 있다. + +```shell +kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force +``` +```shell +deployment.apps/my-nginx deleted +deployment.apps/my-nginx replaced +``` + +## 서비스 중단없이 애플리케이션 업데이트 + +언젠가는, 위의 카나리 디플로이먼트 시나리오에서와 같이, 일반적으로 새 이미지 또는 이미지 태그를 지정하여, 배포된 애플리케이션을 업데이트해야 한다. `kubectl` 은 여러 가지 업데이트 작업을 지원하며, 각 업데이트 작업은 서로 다른 시나리오에 적용할 수 있다. + +디플로이먼트를 사용하여 애플리케이션을 생성하고 업데이트하는 방법을 안내한다. + +nginx 1.14.2 버전을 실행한다고 가정해 보겠다. + +```shell +kubectl create deployment my-nginx --image=nginx:1.14.2 +``` +```shell +deployment.apps/my-nginx created +``` + +3개의 레플리카를 포함한다(이전과 새 개정판이 공존할 수 있음). +```shell +kubectl scale deployment my-nginx --current-replicas=1 --replicas=3 +``` +``` +deployment.apps/my-nginx scaled +``` + +1.16.1 버전으로 업데이트하려면, 위에서 배운 kubectl 명령을 사용하여 `.spec.template.spec.containers[0].image` 를 `nginx:1.14.2` 에서 `nginx:1.16.1` 로 간단히 변경한다. + +```shell +kubectl edit deployment/my-nginx +``` + +이것으로 끝이다! 디플로이먼트는 배포된 nginx 애플리케이션을 배후에서 점차적으로 업데이트한다. 업데이트되는 동안 특정 수의 이전 레플리카만 중단될 수 있으며, 원하는 수의 파드 위에 특정 수의 새 레플리카만 생성될 수 있다. 이에 대한 더 자세한 내용을 보려면, [디플로이먼트 페이지](/ko/docs/concepts/workloads/controllers/deployment/)를 방문한다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +- [애플리케이션 검사 및 디버깅에 `kubectl` 을 사용하는 방법](/docs/tasks/debug-application-cluster/debug-application-introspection/)에 대해 알아본다. +- [구성 모범 사례 및 팁](/ko/docs/concepts/configuration/overview/)을 참고한다. 
+ +{{% /capture %}} diff --git a/content/ko/docs/concepts/configuration/assign-pod-node.md b/content/ko/docs/concepts/configuration/assign-pod-node.md index 027a7d38d378e..5136fddeeb992 100644 --- a/content/ko/docs/concepts/configuration/assign-pod-node.md +++ b/content/ko/docs/concepts/configuration/assign-pod-node.md @@ -315,7 +315,7 @@ spec: topologyKey: "kubernetes.io/hostname" containers: - name: web-app - image: nginx:1.12-alpine + image: nginx:1.16-alpine ``` 만약 위의 두 디플로이먼트를 생성하면 세 개의 노드가 있는 클러스터는 다음과 같아야 한다. diff --git a/content/ko/docs/concepts/configuration/resource-bin-packing.md b/content/ko/docs/concepts/configuration/resource-bin-packing.md new file mode 100644 index 0000000000000..976e2ec83700a --- /dev/null +++ b/content/ko/docs/concepts/configuration/resource-bin-packing.md @@ -0,0 +1,193 @@ +--- +title: 확장된 리소스를 위한 리소스 빈 패킹(bin packing) +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="v1.16" state="alpha" >}} + +kube-scheduler는 `RequestedToCapacityRatioResourceAllocation` 우선 순위 기능을 사용해서 확장된 리소스와 함께 리소스의 빈 패킹이 가능하도록 구성할 수 있다. 우선 순위 기능을 사용해서 맞춤 요구에 따라 kube-scheduler를 미세 조정할 수 있다. + +{{% /capture %}} + +{{% capture body %}} + +## RequestedToCapacityRatioResourceAllocation을 사용해서 빈 패킹 활성화하기 + +쿠버네티스 1.15 이전에는 Kube-scheduler가 CPU 및 메모리와 같은 리소스의 용량 대비 요청 비율을 기반으로 노드의 점수를 매기는 것을 허용했다. 쿠버네티스 1.16은 우선 순위 기능에 새로운 파라미터를 추가해서 사용자가 용량 대비 요청 비율을 기반으로 노드에 점수를 매기도록 각 리소스의 가중치와 함께 리소스를 지정할 수 있다. 이를 통해 사용자는 적절한 파라미터를 사용해서 확장된 리소스를 빈 팩으로 만들수 있어 대규모의 클러스터에서 부족한 리소스의 활용도가 향상된다. `RequestedToCapacityRatioResourceAllocation` 우선 순위 기능의 동작은 `requestedToCapacityRatioArguments`라는 구성 옵션으로 제어할 수 있다. 이 인수는 `shape`와 `resources` 두 개의 파라미터로 구성된다. 셰이프(shape)는 사용자가 `utilization`과 `score` 값을 기반으로 최소 요청 또는 최대 요청된 대로 기능을 조정할 수 있게 한다. 리소스는 +점수를 매길 때 고려할 리소스를 지정하는 `name` 과 각 리소스의 가중치를 지정하는 `weight` 로 구성된다. + +다음은 확장된 리소스 `intel.com/foo` 와 `intel.com/bar` 에 대한 `requestedToCapacityRatioArguments` 를 빈 패킹 동작으로 설정하는 구성의 예시이다. + +```json +{ + "kind" : "Policy", + "apiVersion" : "v1", + + ... + + "priorities" : [ + + ... + + { + "name": "RequestedToCapacityRatioPriority", + "weight": 2, + "argument": { + "requestedToCapacityRatioArguments": { + "shape": [ + {"utilization": 0, "score": 0}, + {"utilization": 100, "score": 10} + ], + "resources": [ + {"name": "intel.com/foo", "weight": 3}, + {"name": "intel.com/bar", "weight": 5} + ] + } + } + } + ], + } +``` + +**이 기능은 기본적으로 비활성화되어 있다.** + +### RequestedToCapacityRatioResourceAllocation 우선 순위 기능 튜닝하기 + +`shape` 는 `RequestedToCapacityRatioPriority` 기능의 동작을 지정하는 데 사용된다. + +```yaml + {"utilization": 0, "score": 0}, + {"utilization": 100, "score": 10} +``` + +위의 인수는 사용률이 0%인 경우 점수는 0, 사용률이 100%인 경우 10으로 하여, 빈 패킹 동작을 활성화한다. 최소 요청을 활성화하려면 점수 값을 다음과 같이 변경해야 한다. + +```yaml + {"utilization": 0, "score": 100}, + {"utilization": 100, "score": 0} +``` + +`resources` 는 기본적으로 다음과 같이 설정되는 선택적인 파라미터이다. + +``` yaml +"resources": [ + {"name": "CPU", "weight": 1}, + {"name": "Memory", "weight": 1} + ] +``` + +다음과 같이 확장된 리소스를 추가하는 데 사용할 수 있다. + +```yaml +"resources": [ + {"name": "intel.com/foo", "weight": 5}, + {"name": "CPU", "weight": 3}, + {"name": "Memory", "weight": 1} + ] +``` + +가중치 파라미터는 선택 사항이며 지정되지 않은 경우 1로 설정 된다. 또한, 가중치는 음수로 설정할 수 없다. + +### RequestedToCapacityRatioResourceAllocation 우선 순위 기능이 노드에 점수를 매기는 방법 + +이 섹션은 이 기능 내부의 세부적인 사항을 이해하려는 사람들을 +위한 것이다. +아래는 주어진 값의 집합에 대해 노드 점수가 계산되는 방법의 예시이다. 
+ +``` +Requested Resources + +intel.com/foo : 2 +Memory: 256MB +CPU: 2 + +Resource Weights + +intel.com/foo : 5 +Memory: 1 +CPU: 3 + +FunctionShapePoint {{0, 0}, {100, 10}} + +Node 1 Spec + +Available: +intel.com/foo : 4 +Memory : 1 GB +CPU: 8 + +Used: +intel.com/foo: 1 +Memory: 256MB +CPU: 1 + + +Node Score: + +intel.com/foo = resourceScoringFunction((2+1),4) + = (100 - ((4-3)*100/4) + = (100 - 25) + = 75 + = rawScoringFunction(75) + = 7 + +Memory = resourceScoringFunction((256+256),1024) + = (100 -((1024-512)*100/1024)) + = 50 + = rawScoringFunction(50) + = 5 + +CPU = resourceScoringFunction((2+1),8) + = (100 -((8-3)*100/8)) + = 37.5 + = rawScoringFunction(37.5) + = 3 + +NodeScore = (7 * 5) + (5 * 1) + (3 * 3) / (5 + 1 + 3) + = 5 + + +Node 2 Spec + +Available: +intel.com/foo: 8 +Memory: 1GB +CPU: 8 + +Used: + +intel.com/foo: 2 +Memory: 512MB +CPU: 6 + + +Node Score: + +intel.com/foo = resourceScoringFunction((2+2),8) + = (100 - ((8-4)*100/8) + = (100 - 25) + = 50 + = rawScoringFunction(50) + = 5 + +Memory = resourceScoringFunction((256+512),1024) + = (100 -((1024-768)*100/1024)) + = 75 + = rawScoringFunction(75) + = 7 + +CPU = resourceScoringFunction((2+6),8) + = (100 -((8-8)*100/8)) + = 100 + = rawScoringFunction(100) + = 10 + +NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3) + = 7 + +``` + +{{% /capture %}} diff --git a/content/ko/docs/concepts/configuration/taint-and-toleration.md b/content/ko/docs/concepts/configuration/taint-and-toleration.md new file mode 100644 index 0000000000000..012924c75dd09 --- /dev/null +++ b/content/ko/docs/concepts/configuration/taint-and-toleration.md @@ -0,0 +1,297 @@ +--- +title: 테인트(Taints)와 톨러레이션(Tolerations) +content_template: templates/concept +weight: 40 +--- + + +{{% capture overview %}} +[여기](/ko/docs/concepts/configuration/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity)에 설명된 노드 어피니티는 +노드 셋을 *끌어들이는* (기본 설정 또는 어려운 요구 사항) +*파드* 속성이다. 테인트는 그 반대로, *노드* 가 파드 셋을 +*제외* 할 수 있다. + +테인트와 톨러레이션은 함께 작동하여 파드가 부적절한 노드에 스케줄되지 +않게 한다. 하나 이상의 테인트가 노드에 적용된다. 이것은 +노드가 테인트를 용인하지 않는 파드를 수용해서는 안 되는 것을 나타낸다. +톨러레이션은 파드에 적용되며, 파드를 일치하는 테인트가 있는 노드에 스케줄되게 +하지만 필수는 아니다. + +{{% /capture %}} + +{{% capture body %}} + +## 개요 + +[kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint)를 사용하여 노드에 테인트을 추가한다. +예를 들면 다음과 같다. + +```shell +kubectl taint nodes node1 key=value:NoSchedule +``` + +`node1` 노드에 테인트을 배치한다. 테인트에는 키 `key`, 값 `value` 및 테인트 이펙트(effect) `NoSchedule` 이 있다. +이는 일치하는 톨러레이션이 없으면 파드를 `node1` 에 스케줄할 수 없음을 의미한다. + +위의 명령으로 추가한 테인트를 제거하려면, 다음을 실행한다. +```shell +kubectl taint nodes node1 key:NoSchedule- +``` + +PodSpec에서 파드에 대한 톨러레이션를 지정한다. 다음의 톨러레이션은 +위의 `kubectl taint` 라인에 의해 생성된 테인트와 "일치"하므로, 어느 쪽 톨러레이션을 가진 파드이던 +`node1` 에 스케줄 될 수 있다. + +```yaml +tolerations: +- key: "key" + operator: "Equal" + value: "value" + effect: "NoSchedule" +``` + +```yaml +tolerations: +- key: "key" + operator: "Exists" + effect: "NoSchedule" +``` + +톨러레이션을 사용하는 파드의 예는 다음과 같다. + +{{< codenew file="pods/pod-with-toleration.yaml" >}} + +톨러레이션은 키가 동일하고 이펙트가 동일한 경우, 테인트와 "일치"한다. 그리고 다음의 경우에도 마찬가지다. + +* `operator` 가 `Exists` 인 경우(이 경우 `value` 를 지정하지 않아야 함), 또는 +* `operator` 는 `Equal` 이고 `value` 는 `value` 로 같다. + +지정하지 않으면 `operator` 의 기본값은 `Equal` 이다. + +{{< note >}} + +두 가지 특별한 경우가 있다. + +* operator `Exists` 가 있는 비어있는 `key` 는 모든 키, 값 및 이펙트와 일치하므로 +모든 것이 톨러레이션 된다. + +```yaml +tolerations: +- operator: "Exists" +``` + +* 비어있는 `effect` 는 모든 이펙트를 키 `key` 와 일치시킨다. 
+ +```yaml +tolerations: +- key: "key" + operator: "Exists" +``` + +{{< /note >}} + +위의 예는 `NoSchedule` 의 `effect` 를 사용했다. 또는, `PreferNoSchedule` 의 `effect` 를 사용할 수 있다. +이것은 `NoSchedule` 의 "기본 설정(preference)" 또는 "소프트(soft)" 버전이다. 시스템은 노드의 테인트를 허용하지 않는 +파드를 배치하지 않으려고 *시도* 하지만, 필요하지는 않다. 세 번째 종류의 `effect` 는 +나중에 설명할 `NoExecute` 이다. + +동일한 노드에 여러 테인트를, 동일한 파드에 여러 톨러레이션을 둘 수 있다. +쿠버네티스가 여러 테인트 및 톨러레이션을 처리하는 방식은 필터와 같다. +모든 노드의 테인트로 시작한 다음, 파드에 일치하는 톨러레이션이 있는 것을 무시한다. +무시되지 않은 나머지 테인트는 파드에 표시된 이펙트를 가진다. 특히, + +* `NoSchedule` 이펙트가 있는 무시되지 않은 테인트가 하나 이상 있으면 쿠버네티스는 해당 노드에 +파드를 스케줄하지 않는다. +* `NoSchedule` 이펙트가 있는 무시되지 않은 테인트가 없지만 `PreferNoSchedule` 이펙트가 있는 +무시되지 않은 테인트가 하나 이상 있으면 쿠버네티스는 파드를 노드에 스케쥴하지 않으려고 *시도* 한다 +* `NoExecute` 이펙트가 있는 무시되지 않은 테인트가 하나 이상 있으면 +파드가 노드에서 축출되고(노드에서 이미 실행 중인 경우), 노드에서 +스케줄되지 않는다(아직 실행되지 않은 경우). + +예를 들어, 이와 같은 노드를 테인트하는 경우는 다음과 같다. + +```shell +kubectl taint nodes node1 key1=value1:NoSchedule +kubectl taint nodes node1 key1=value1:NoExecute +kubectl taint nodes node1 key2=value2:NoSchedule +``` + +그리고 파드에는 두 가지 톨러레이션이 있다. + +```yaml +tolerations: +- key: "key1" + operator: "Equal" + value: "value1" + effect: "NoSchedule" +- key: "key1" + operator: "Equal" + value: "value1" + effect: "NoExecute" +``` + +이 경우, 세 번째 테인트와 일치하는 톨러레이션이 없기 때문에, 파드는 +노드에 스케줄 될 수 없다. 그러나 세 번째 테인트가 파드에서 용인되지 않는 세 가지 중 +하나만 있기 때문에, 테인트가 추가될 때 노드에서 이미 실행 중인 경우, +파드는 계속 실행할 수 있다. + +일반적으로, `NoExecute` 이펙트가 있는 테인트가 노드에 추가되면, 테인트를 +용인하지 않는 파드는 즉시 축출되고, 테인트를 용인하는 파드는 +축출되지 않는다. 그러나 `NoExecute` 이펙트가 있는 톨러레이션은 +테인트가 추가된 후 파드가 노드에 바인딩된 시간을 지정하는 +선택적 `tolerationSeconds` 필드를 지정할 수 있다. 예를 들어, + +```yaml +tolerations: +- key: "key1" + operator: "Equal" + value: "value1" + effect: "NoExecute" + tolerationSeconds: 3600 +``` + +이것은 이 파드가 실행 중이고 일치하는 테인트가 노드에 추가되면, +파드는 3600초 동안 노드에 바인딩된 후, 축출된다는 것을 의미한다. 그 전에 +테인트를 제거하면, 파드가 축출되지 않는다. + +## 유스케이스 예시 + +테인트 및 톨러레이션은 파드를 노드에서 *멀어지게* 하거나 실행되지 않아야 하는 +파드를 축출할 수 있는 유연한 방법이다. 유스케이스 중 일부는 다음과 같다. + +* **전용 노드**: 특정 사용자들이 독점적으로 사용하도록 +노드 셋을 전용하려면, 해당 노드에 테인트를 추가(예: +`kubectl taint nodes nodename dedicated=groupName:NoSchedule`)한 다음 해당 +톨러레이션을 그들의 파드에 추가할 수 있다(사용자 정의 [어드미션 컨트롤러] +(/docs/reference/access-authn-authz/admission-controllers/)를 작성하면 가장 쉽게 수행할 수 있음). +그런 다음 톨러레이션이 있는 파드는 테인트된(전용) 노드와 +클러스터의 다른 노드를 사용할 수 있다. 노드를 특정 사용자들에게 전용으로 지정하고 *그리고* +그 사용자들이 전용 노드 *만* 사용하려면, 동일한 노드 셋에 +테인트와 유사한 레이블을 추가해야 하고(예: `dedicated=groupName`), +어드미션 컨트롤러는 추가로 파드가 `dedicated=groupName` 으로 레이블이 지정된 노드에만 +스케줄될 수 있도록 노드 어피니티를 추가해야 한다. + +* **특별한 하드웨어가 있는 노드**: 작은 서브셋의 노드에 특별한 +하드웨어(예: GPU)가 있는 클러스터에서는, 특별한 하드웨어가 필요하지 않는 파드를 +해당 노드에서 분리하여, 나중에 도착하는 특별한 하드웨어가 필요한 파드를 위한 공간을 +남겨두는 것이 바람직하다. 이는 특별한 하드웨어가 있는 +노드(예: `kubectl taint nodes nodename special=true:NoSchedule` 또는 +`kubectl taint nodes nodename special=true:PreferNoSchedule`)에 테인트를 추가하고 +특별한 하드웨어를 사용하는 파드에 해당 톨러레이션을 추가하여 수행할 수 있다. 전용 노드 유스케이스에서와 같이, +사용자 정의 [어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/)를 +사용하여 톨러레이션를 적용하는 것이 가장 쉬운 방법이다. +예를 들어, [확장된 +리소스](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)를 +사용하여 특별한 하드웨어를 나타내고, 확장된 리소스 이름으로 +특별한 하드웨어 노드를 테인트시키고 +[ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration) +어드미션 컨트롤러를 실행하는 것을 권장한다. 이제, 노드가 테인트되었으므로, 톨러레이션이 없는 +파드는 스케줄되지 않는다. 그러나 확장된 리소스를 요청하는 파드를 제출하면, +`ExtendedResourceToleration` 어드미션 컨트롤러가 +파드에 올바른 톨러레이션을 자동으로 추가하고 해당 파드는 +특별한 하드웨어 노드에서 스케줄된다. 이렇게 하면 이러한 특별한 하드웨어 노드가 +해당 하드웨어를 요청하는 파드가 전용으로 사용하며 파드에 톨러레이션을 +수동으로 추가할 필요가 없다. 
+ +* **테인트 기반 축출**: 노드 문제가 있을 때 파드별로 +구성 가능한 축출 동작은 다음 섹션에서 설명한다. + +## 테인트 기반 축출 + +{{< feature-state for_k8s_version="1.18" state="stable" >}} + +앞에서 우리는 노드에서 이미 실행 중인 파드에 영향을 주는 `NoExecute` 테인트 이펙트를 +다음과 같이 언급했다. + + * 테인트를 용인하지 않는 파드는 즉시 축출된다. + * 톨러레이션 명세에 `tolerationSeconds` 를 지정하지 않고 + 테인트를 용인하는 파드는 계속 바인딩된다. + * `tolerationSeconds` 가 지정된 테인트를 용인하는 파드는 지정된 + 시간 동안 바인딩된 상태로 유지된다. + +덧붙여, 쿠버네티스 1.6 버전에서는 노드 문제를 나타내는 알파 지원이 +도입되었다. 다시 말해, 특정 조건이 참일 때 노드 컨트롤러는 자동으로 +노드를 테인트시킨다. 다음은 빌트인 테인트이다. + + * `node.kubernetes.io/not-ready`: 노드가 준비되지 않았다. 이는 NodeCondition + `Ready` 가 "`False`"로 됨에 해당한다. + * `node.kubernetes.io/unreachable`: 노드가 노드 컨트롤러에서 도달할 수 없다. 이는 + NodeCondition `Ready` 가 "`Unknown`"로 됨에 해당한다. + * `node.kubernetes.io/out-of-disk`: 노드에 디스크가 부족하다. + * `node.kubernetes.io/memory-pressure`: 노드에 메모리 할당 압박이 있다. + * `node.kubernetes.io/disk-pressure`: 노드에 디스크 할당 압박이 있다. + * `node.kubernetes.io/network-unavailable`: 노드의 네트워크를 사용할 수 없다. + * `node.kubernetes.io/unschedulable`: 노드를 스케줄할 수 없다. + * `node.cloudprovider.kubernetes.io/uninitialized`: "외부" 클라우드 공급자로 + kubelet을 시작하면, 이 테인트가 노드에서 사용 불가능으로 표시되도록 + 설정된다. 클라우드-컨트롤러-관리자의 컨트롤러가 이 노드를 초기화하면, + kubelet이 이 테인트를 제거한다. + +노드가 축출될 경우, 노드 컨트롤러 또는 kubelet은 `NoExecute` 이펙트로 관련 +테인트를 추가한다. 장애 상태가 정상으로 돌아오면 kubelet 또는 노드 컨트롤러가 +관련 테인트를 제거할 수 있다. + +{{< note >}} +노드 문제로 인해 파드 축출의 기존 [비율 제한](/ko/docs/concepts/architecture/nodes/) +동작을 유지하기 위해, 시스템은 실제로 테인트를 비율-제한 방식으로 +추가한다. 이는 마스터가 노드에서 분할되는 등의 시나리오에서 +대규모 파드 축출을 방지한다. +{{< /note >}} + +이 기능을 `tolerationSeconds` 와 함께 사용하면, 파드에서 +이러한 문제 중 하나 또는 둘 다가 있는 노드에 바인딩된 기간을 지정할 수 있다. + +예를 들어, 로컬 상태가 많은 애플리케이션은 네트워크 분할의 장애에서 +네트워크가 복구된 후에 파드가 축출되는 것을 피하기 위해 +오랫동안 노드에 바인딩된 상태를 유지하려고 할 수 있다. +이 경우 파드가 사용하는 톨러레이션은 다음과 같다. + +```yaml +tolerations: +- key: "node.kubernetes.io/unreachable" + operator: "Exists" + effect: "NoExecute" + tolerationSeconds: 6000 +``` + +쿠버네티스는 사용자가 제공한 파드 구성에 이미 추가된 +`node.kubernetes.io/not-ready` 에 대한 톨러레이션이 없는 경우 +`tolerationSeconds=300` 으로 `node.kubernetes.io/not-ready` 에 대한 +톨러레이션을 자동으로 추가한다. +마찬가지로 사용자가 제공한 파드 구성에 이미 추가된 +`node.kubernetes.io/unreachable` 에 대한 톨러레이션이 없는 경우 +`tolerationSeconds=300` 으로 `node.kubernetes.io/unreachable` 에 대한 +톨러레이션을 추가한다. + +자동으로 추가된 이 톨러레이션은 이러한 문제 중 하나가 +감지된 후 5분 동안 바인딩 상태로 남아있는 기본 파드 +동작이 유지되도록 한다. +[DefaultTolerationSecondsadmission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds) +어드미션 컨트롤러에 의해 두 개의 기본 톨러레이션이 추가된다. + +[데몬셋](/ko/docs/concepts/workloads/controllers/daemonset/) 파드는 `tolerationSeconds` 가 없는 +다음 테인트에 대해 `NoExecute` 톨러레이션를 가지고 생성된다. + + * `node.kubernetes.io/unreachable` + * `node.kubernetes.io/not-ready` + +이렇게 하면 이러한 문제로 인해 데몬셋 파드가 축출되지 않는다. + +## 컨디션별 노드 테인트하기 + +노드 라이프사이클 컨트롤러는 `NoSchedule` 이펙트가 있는 노드 컨디션에 해당하는 +테인트를 자동으로 생성한다. +마찬가지로 스케줄러는 노드 컨디션을 확인하지 않는다. 대신 스케줄러는 테인트를 확인한다. 이렇게 하면 노드 컨디션이 노드에 스케줄된 내용에 영향을 미치지 않는다. 사용자는 적절한 파드 톨러레이션을 추가하여 노드의 일부 문제(노드 컨디션으로 표시)를 무시하도록 선택할 수 있다. + +쿠버네티스 1.8 버전부터 데몬셋 컨트롤러는 다음의 `NoSchedule` 톨러레이션을 +모든 데몬에 자동으로 추가하여, 데몬셋이 중단되는 것을 +방지한다. + + * `node.kubernetes.io/memory-pressure` + * `node.kubernetes.io/disk-pressure` + * `node.kubernetes.io/out-of-disk` (*중요한 파드에만 해당*) + * `node.kubernetes.io/unschedulable` (1.10 이상) + * `node.kubernetes.io/network-unavailable` (*호스트 네트워크만 해당*) + +이러한 톨러레이션을 추가하면 이전 버전과의 호환성이 보장된다. 데몬셋에 +임의의 톨러레이션을 추가할 수도 있다. 
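
참고로, 위에 나열된 자동 추가 톨러레이션을 파드 명세에 직접 작성한다면 대략 다음과 같은 형태가 된다. 아래는 이해를 돕기 위한 간단한 예시일 뿐이며, 실제 데몬셋 파드에는 데몬셋 컨트롤러가 이러한 톨러레이션을 자동으로 추가한다.

```yaml
tolerations:
# 노드 컨디션에 해당하는 테인트를 용인하여, 해당 컨디션이 발생해도 파드가 스케줄될 수 있게 한다.
- key: "node.kubernetes.io/memory-pressure"
  operator: "Exists"
  effect: "NoSchedule"
- key: "node.kubernetes.io/disk-pressure"
  operator: "Exists"
  effect: "NoSchedule"
- key: "node.kubernetes.io/unschedulable"
  operator: "Exists"
  effect: "NoSchedule"
```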
diff --git a/content/ko/docs/concepts/containers/container-environment-variables.md b/content/ko/docs/concepts/containers/container-environment.md similarity index 100% rename from content/ko/docs/concepts/containers/container-environment-variables.md rename to content/ko/docs/concepts/containers/container-environment.md diff --git a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md index 6967e70fc0008..6264621a24694 100644 --- a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md @@ -113,7 +113,7 @@ Events: {{% capture whatsnext %}} -* [컨테이너 환경](/ko/docs/concepts/containers/container-environment-variables/)에 대해 더 배우기. +* [컨테이너 환경](/ko/docs/concepts/containers/container-environment/)에 대해 더 배우기. * [컨테이너 라이프사이클 이벤트에 핸들러 부착](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) 실습 경험하기. diff --git a/content/ko/docs/concepts/containers/overview.md b/content/ko/docs/concepts/containers/overview.md new file mode 100644 index 0000000000000..11d29a18ce72c --- /dev/null +++ b/content/ko/docs/concepts/containers/overview.md @@ -0,0 +1,43 @@ +--- +title: 컨테이너 개요 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +컨테이너는 런타임에 필요한 종속성과 애플리케이션의 +컴파일 된 코드를 패키징 하는 기술이다. 실행되는 각각의 +컨테이너는 반복해서 사용 가능하다. 종속성이 포함된 표준화를 +통해 컨테이너가 실행되는 환경과 무관하게 항상 동일하게 +동작한다. + +컨테이너는 기본 호스트 인프라 환경에서 애플리케이션의 실행환경을 분리한다. +따라서 다양한 클라우드 환경이나 운영체제에서 쉽게 배포 할 수 있다. + +{{% /capture %}} + + +{{% capture body %}} + +## 컨테이너 이미지 +[컨테이너 이미지](/ko/docs/concepts/containers/images/) 는 즉시 실행할 수 있는 +소프트웨어 패키지이며, 애플리케이션을 실행하는데 필요한 모든 것 +(필요한 코드와 런타임, 애플리케이션 및 시스템 라이브러리 등의 모든 필수 설정에 대한 기본값) +을 포함한다. + +원칙적으로, 컨테이너는 변경되지 않는다. 이미 구동 중인 컨테이너의 +코드를 변경할 수 없다. 컨테이너화 된 애플리케이션이 있고 그 +애플리케이션을 변경하려는 경우, 변경사항을 포함하여 만든 +새로운 이미지를 통해 컨테이너를 다시 생성해야 한다. + + +## 컨테이너 런타임 + +{{< glossary_definition term_id="container-runtime" length="all" >}} + +{{% /capture %}} +{{% capture whatsnext %}} +* [컨테이너 이미지](/ko/docs/concepts/containers/images/)에 대해 읽어보기 +* [파드](/ko/docs/concepts/workloads/pods/)에 대해 읽어보기 +{{% /capture %}} diff --git a/content/ko/docs/concepts/containers/runtime-class.md b/content/ko/docs/concepts/containers/runtime-class.md index befcefcfe4294..eab78a19fb043 100644 --- a/content/ko/docs/concepts/containers/runtime-class.md +++ b/content/ko/docs/concepts/containers/runtime-class.md @@ -10,22 +10,14 @@ weight: 20 이 페이지는 런타임 클래스(RuntimeClass) 리소스와 런타임 선택 메커니즘에 대해서 설명한다. -{{< warning >}} -런타임클래스는 v1.14 베타 업그레이드에서 *중대한* 변화를 포함한다. -런타임클래스를 v1.14 이전부터 사용하고 있었다면, -[런타임 클래스를 알파에서 베타로 업그레이드하기](#upgrading-runtimeclass-from-alpha-to-beta)를 확인한다. -{{< /warning >}} +런타임클래스는 컨테이너 런타임을 구성을 선택하는 기능이다. 컨테이너 런타임 +구성은 파드의 컨테이너를 실행하는데 사용된다. {{% /capture %}} {{% capture body %}} -## 런타임 클래스 - -런타임 클래스는 컨테이너 런타임 설정을 선택하는 기능이다. -이 컨테이너 런타임 설정은 파드의 컨테이너를 실행할 때에 이용한다. - ## 동기 서로 다른 파드간에 런타임 클래스를 설정하여 @@ -38,7 +30,7 @@ weight: 20 또한 런타임 클래스를 사용하여 컨테이너 런타임이 같으나 설정이 다른 여러 파드를 실행할 수 있다. -### 셋업 +## 셋업 RuntimeClass 특징 게이트가 활성화(기본값)를 확인한다. 특징 게이트 활성화에 대한 설명은 [특징 게이트](/docs/reference/command-line-tools-reference/feature-gates/)를 @@ -47,7 +39,7 @@ RuntimeClass 특징 게이트가 활성화(기본값)를 확인한다. 1. CRI 구현(implementation)을 노드에 설정(런타임에 따라서) 2. 상응하는 런타임 클래스 리소스 생성 -#### 1. CRI 구현을 노드에 설정 +### 1. CRI 구현을 노드에 설정 런타임 클래스를 통한 가능한 구성은 컨테이너 런타임 인터페이스(CRI) 구현에 의존적이다. 사용자의 CRI 구현에 따른 설정 방법은 @@ -62,7 +54,7 @@ RuntimeClass 특징 게이트가 활성화(기본값)를 확인한다. 해당 설정은 상응하는 `handler` 이름을 가지며, 이는 런타임 클래스에 의해서 참조된다. 
런타임 핸들러는 유효한 DNS 1123 서브도메인(알파-숫자 + `-`와 `.`문자)을 가져야 한다. -#### 2. 상응하는 런타임 클래스 리소스 생성 +### 2. 상응하는 런타임 클래스 리소스 생성 1단계에서 셋업 한 설정은 연관된 `handler` 이름을 가져야 하며, 이를 통해서 설정을 식별할 수 있다. 각 런타임 핸들러(그리고 선택적으로 비어있는 `""` 핸들러)에 대해서, 상응하는 런타임 클래스 오브젝트를 생성한다. @@ -88,7 +80,7 @@ handler: myconfiguration # 상응하는 CRI 설정의 이름임 더 자세한 정보는 [권한 개요](/docs/reference/access-authn-authz/authorization/)를 참고한다. {{< /note >}} -### 사용 +## 사용 클러스터를 위해서 런타임 클래스를 설정하고 나면, 그것을 사용하는 것은 매우 간단하다. 파드 스펙에 `runtimeClassName`를 명시한다. 예를 들면 다음과 같다. @@ -147,13 +139,13 @@ https://github.com/containerd/cri/blob/master/docs/config.md [100]: https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md -### 스케줄 +## 스케줄 {{< feature-state for_k8s_version="v1.16" state="beta" >}} 쿠버네티스 v1.16 부터, 런타임 클래스는 `scheduling` 필드를 통해 이종의 클러스터 지원을 포함한다. 이 필드를 사용하면, 이 런타임 클래스를 갖는 파드가 이를 지원하는 노드로 스케줄된다는 것을 보장할 수 있다. -이 스케줄링 기능을 사용하려면, 런타임 클래스 [어드미션(admission) 컨트롤러][]를 활성화(1.16 부터 기본 값)해야 한다. +이 스케줄링 기능을 사용하려면, [런타임 클래스 어드미션(admission) 컨트롤러][]를 활성화(1.16 부터 기본 값)해야 한다. 파드가 지정된 런타임 클래스를 지원하는 노드에 안착한다는 것을 보장하려면, 해당 노드들은 `runtimeClass.scheduling.nodeSelector` 필드에서 선택되는 공통 레이블을 가져야한다. @@ -168,50 +160,24 @@ https://github.com/containerd/cri/blob/master/docs/config.md 노드 셀렉터와 톨러레이션 설정에 대해 더 배우려면 [노드에 파드 할당](/ko/docs/concepts/configuration/assign-pod-node/)을 참고한다. -[어드미션 컨트롤러]: /docs/reference/access-authn-authz/admission-controllers/ +[런타임 클래스 어드미션 컨트롤러]: /docs/reference/access-authn-authz/admission-controllers/#runtimeclass ### 파드 오버헤드 -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.18" state="beta" >}} -쿠버네티스 v1.16 부터는, 런타임 클래스에는 구동 중인 파드와 관련된 오버헤드를 -지정할 수 있는 기능이 [`PodOverhead`](/docs/concepts/configuration/pod-overhead) 기능을 통해 지원된다. -`PodOverhead`를 사용하려면, PodOverhead [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)를 -활성화 시켜야 한다. (기본 값으로는 비활성화 되어 있다.) +파드 실행과 연관되는 _오버헤드_ 리소스를 지정할 수 있다. 오버헤드를 선언하면 +클러스터(스케줄러 포함)가 파드와 리소스에 대한 결정을 내릴 때 처리를 할 수 있다. +PodOverhead를 사용하려면, PodOverhead [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/) +를 활성화 시켜야 한다. (기본으로 활성화 되어 있다.) -파드 오버헤드는 런타임 클래스에서 `Overhead` 필드를 통해 정의된다. 이 필드를 사용하면, +파드 오버헤드는 런타임 클래스에서 `overhead` 필드를 통해 정의된다. 이 필드를 사용하면, 해당 런타임 클래스를 사용해서 구동 중인 파드의 오버헤드를 특정할 수 있고 이 오버헤드가 쿠버네티스 내에서 처리된다는 것을 보장할 수 있다. -### 런타임 클래스를 알파에서 베타로 업그레이드 {#upgrading-runtimeclass-from-alpha-to-beta} - -런타임 클래스 베타 기능은 다음의 변화를 포함한다. - -- `node.k8s.io` API 그룹과 `runtimeclasses.node.k8s.io` 리소스는 CustomResourceDefinition에서 - 내장 API로 이전되었다. -- 런타임 클래스 정의에서 `spec`을 직접 사용할 수 있다. - (즉, 더 이상 RuntimeClassSpec는 없다). -- `runtimeHandler` 필드는 `handler`로 이름이 바뀌었다. -- `handler` 필드는 이제 모두 API 버전에서 요구된다. 이는 알파 API에서도 `runtimeHandler` 필드가 - 필요하다는 의미이다. -- `handler` 필드는 반드시 올바른 DNS 레이블([RFC 1123](https://tools.ietf.org/html/rfc1123))으로, - 이는 더 이상 `.` 캐릭터(모든 버전에서)를 포함할 수 없다 의미이다. 올바른 핸들러는 - 다음의 정규 표현식을 따른다. `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`. - -**작업 필요** 다음 작업은 알파 버전의 런타임 기능을 -베타 버전으로 업그레이드하기 위해 진행되어야 한다. - -- 런타임 클래스 리소스는 v1.14로 업그레이드 *후에* 반드시 재생성되어야 하고, - `runtimeclasses.node.k8s.io` CRD는 다음과 같이 수동으로 지워야 한다. - ``` - kubectl delete customresourcedefinitions.apiextensions.k8s.io runtimeclasses.node.k8s.io - ``` -- 지정되지 않았거나 비어 있는 `runtimeHandler` 이거나 핸들러 내에 `.` 캐릭터를 사용한 알파 런타임 클래스는 - 더 이상 올바르지 않으며, 반드시 올바른 핸들러 구성으로 이전헤야 한다 - (위를 참조). 
- -### 더 읽기 +{{% /capture %}} +{{% capture whatsnext %}} - [런타임 클래스 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) - [런타임 클래스 스케줄링 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md) diff --git a/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md index f8db7a20062d6..fe8af209045cf 100644 --- a/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md +++ b/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -9,7 +9,7 @@ weight: 10 애그리게이션 레이어는 코어 쿠버네티스 API가 제공하는 기능 이외에 더 많은 기능을 제공할 수 있도록 추가 API를 더해 쿠버네티스를 확장할 수 있게 해준다. 추가 API는 [서비스-카탈로그](/docs/concepts/extend-kubernetes/service-catalog/)와 같이 미리 만들어진 솔루션이거나 사용자가 직접 개발한 API일 수 있다. -애그리게이션 레이어는 [사용자 정의 리소스](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)와는 다르며, 애그리게이션 레이어는 {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} 가 새로운 종류의 오브젝트를 인식하도록 하는 방법이다. +애그리게이션 레이어는 [사용자 정의 리소스](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)와는 다르며, 애그리게이션 레이어는 {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} 가 새로운 종류의 오브젝트를 인식하도록 하는 방법이다. {{% /capture %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md new file mode 100644 index 0000000000000..e7e380d6ba62b --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -0,0 +1,254 @@ +--- +title: 커스텀 리소스 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +*커스텀 리소스* 는 쿠버네티스 API의 익스텐션이다. 이 페이지에서는 쿠버네티스 클러스터에 +커스텀 리소스를 추가할 시기와 독립형 서비스를 사용하는 시기에 대해 설명한다. 커스텀 리소스를 +추가하는 두 가지 방법과 이들 중에서 선택하는 방법에 대해 설명한다. + +{{% /capture %}} + +{{% capture body %}} +## 커스텀 리소스 + +*리소스* 는 [쿠버네티스 API](/ko/docs/reference/using-api/api-overview/)에서 특정 종류의 +[API 오브젝트](/ko/docs/concepts/overview/working-with-objects/kubernetes-objects/) 모음을 저장하는 엔드포인트이다. 예를 들어 빌트인 *파드* 리소스에는 파드 오브젝트 모음이 포함되어 있다. + +*커스텀 리소스* 는 쿠버네티스 API의 익스텐션으로, 기본 쿠버네티스 설치에서 반드시 +사용할 수 있는 것은 아니다. 이는 특정 쿠버네티스 설치에 수정이 가해졌음을 나타낸다. 그러나 +많은 핵심 쿠버네티스 기능은 이제 커스텀 리소스를 사용하여 구축되어, 쿠버네티스를 더욱 모듈화한다. + +동적 등록을 통해 실행 중인 클러스터에서 커스텀 리소스가 나타나거나 사라질 수 있으며 +클러스터 관리자는 클러스터 자체와 독립적으로 커스텀 리소스를 업데이트 할 수 있다. +커스텀 리소스가 설치되면 사용자는 *파드* 와 같은 빌트인 리소스와 마찬가지로 +[kubectl](/docs/user-guide/kubectl-overview/)을 사용하여 해당 오브젝트를 생성하고 +접근할 수 있다. + +## 커스텀 컨트롤러 + +자체적으로 커스텀 리소스를 사용하면 구조화된 데이터를 저장하고 검색할 수 있다. +커스텀 리소스를 *커스텀 컨트롤러* 와 결합하면, 커스텀 리소스가 진정한 +_선언적(declarative) API_ 를 제공하게 된다. + +[선언적 API](/ko/docs/concepts/overview/kubernetes-api/)는 리소스의 의도한 상태를 +_선언_ 하거나 지정할 수 있게 해주며 쿠버네티스 오브젝트의 현재 상태를 의도한 상태와 +동기화 상태로 유지하려고 한다. 컨트롤러는 구조화된 데이터를 사용자가 +원하는 상태의 레코드로 해석하고 지속적으로 +이 상태를 유지한다. + +클러스터 라이프사이클과 관계없이 실행 중인 클러스터에 커스텀 컨트롤러를 배포하고 +업데이트할 수 있다. 커스텀 컨트롤러는 모든 종류의 리소스와 함께 작동할 수 있지만 +커스텀 리소스와 결합할 때 특히 효과적이다. +[오퍼레이터 패턴](https://coreos.com/blog/introducing-operators.html)은 사용자 정의 +리소스와 커스텀 컨트롤러를 결합한다. 커스텀 컨트롤러를 사용하여 특정 애플리케이션에 대한 도메인 지식을 +쿠버네티스 API의 익스텐션으로 인코딩할 수 있다. + +## 쿠버네티스 클러스터에 커스텀 리소스를 추가해야 하나? + +새로운 API를 생성할 때 [쿠버네티스 클러스터 API와 생성한 API를 애그리게이트](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)할 것인지 아니면 생성한 API를 독립적으로 유지할 것인지 고려하자. 
+ +| API 애그리게이트를 고려할 경우 | 독립 API를 고려할 경우 | +| ---------------------------- | ---------------------------- | +| API가 [선언적](#선언적-api)이다. | API가 [선언적](#선언적-api) 모델에 맞지 않다. | +| `kubectl`을 사용하여 새로운 타입을 읽고 쓸 수 있기를 원한다.| `kubectl` 지원이 필요하지 않다. | +| 쿠버네티스 UI(예: 대시보드)에서 빌트인 타입과 함께 새로운 타입을 보길 원한다. | 쿠버네티스 UI 지원이 필요하지 않다. | +| 새로운 API를 개발 중이다. | 생성한 API를 제공하는 프로그램이 이미 있고 잘 작동하고 있다. | +| API 그룹 및 네임스페이스와 같은 REST 리소스 경로에 적용하는 쿠버네티스의 형식 제한을 기꺼이 수용한다. ([API 개요](/ko/docs/concepts/overview/kubernetes-api/)를 참고한다.) | 이미 정의된 REST API와 호환되도록 특정 REST 경로가 있어야 한다. | +| 자체 리소스는 자연스럽게 클러스터 또는 클러스터의 네임스페이스로 범위가 지정된다. | 클러스터 또는 네임스페이스 범위의 리소스는 적합하지 않다. 특정 리소스 경로를 제어해야 한다. | +| [쿠버네티스 API 지원 기능](#일반적인-기능)을 재사용하려고 한다. | 이러한 기능이 필요하지 않다. | + +### 선언적 API + +선언적 API에서는 다음의 특성이 있다. + + - API는 상대적으로 적은 수의 상대적으로 작은 오브젝트(리소스)로 구성된다. + - 오브젝트는 애플리케이션 또는 인프라의 구성을 정의한다. + - 오브젝트는 비교적 드물게 업데이트 된다. + - 사람이 종종 오브젝트를 읽고 쓸 필요가 있다. + - 오브젝트의 주요 작업은 CRUD-y(생성, 읽기, 업데이트 및 삭제)이다. + - 오브젝트 간 트랜잭션은 필요하지 않다. API는 정확한(exact) 상태가 아니라 의도한 상태를 나타낸다. + +명령형 API는 선언적이지 않다. +자신의 API가 선언적이지 않을 수 있다는 징후는 다음과 같다. + + - 클라이언트는 "이 작업을 수행한다"라고 말하고 완료되면 동기(synchronous) 응답을 받는다. + - 클라이언트는 "이 작업을 수행한다"라고 말한 다음 작업 ID를 다시 가져오고 별도의 오퍼레이션(operation) 오브젝트를 확인하여 요청의 완료 여부를 결정해야 한다. + - RPC(원격 프로시저 호출)에 대해 얘기한다. + - 대량의 데이터를 직접 저장한다(예: > 오브젝트별 몇 kB 또는 >1000개의 오브젝트). + - 높은 대역폭 접근(초당 10개의 지속적인 요청)이 필요하다. + - 최종 사용자 데이터(예: 이미지, PII 등) 또는 애플리케이션에서 처리한 기타 대규모 데이터를 저장한다. + - 오브젝트에 대한 자연스러운 조작은 CRUD-y가 아니다. + - API는 오브젝트로 쉽게 모델링되지 않는다. + - 작업 ID 또는 작업 오브젝트로 보류 중인 작업을 나타내도록 선택했다. + +## 컨피그맵을 사용해야 하나, 커스텀 리소스를 사용해야 하나? + +다음 중 하나에 해당하면 컨피그맵을 사용하자. + +* `mysql.cnf` 또는 `pom.xml`과 같이 잘 문서화된 기존 구성 파일 형식이 있다. +* 전체 구성 파일을 컨피그맵의 하나의 키에 넣고 싶다. +* 구성 파일의 주요 용도는 클러스터의 파드에서 실행 중인 프로그램이 파일을 사용하여 자체 구성하는 것이다. +* 파일 사용자는 쿠버네티스 API가 아닌 파드의 환경 변수 또는 파드의 파일을 통해 사용하는 것을 선호한다. +* 파일이 업데이트될 때 디플로이먼트 등을 통해 롤링 업데이트를 수행하려고 한다. + +{{< note >}} +민감한 데이터에는 [시크릿](/docs/concepts/configuration/secret/)을 사용하자. 이는 컨피그맵과 비슷하지만 더 안전한다. +{{< /note >}} + +다음 중 대부분이 적용되는 경우 커스텀 리소스(CRD 또는 애그리게이트 API(aggregated API))를 사용하자. + +* 쿠버네티스 클라이언트 라이브러리 및 CLI를 사용하여 새 리소스를 만들고 업데이트하려고 한다. +* kubectl의 최상위 지원을 원한다(예: `kubectl get my-object object-name`). +* 새 오브젝트에 대한 업데이트를 감시한 다음 다른 오브젝트를 CRUD하거나 그 반대로 하는 새로운 자동화를 구축하려고 한다. +* 오브젝트의 업데이트를 처리하는 자동화를 작성하려고 한다. +* `.spec`, `.status` 및 `.metadata`와 같은 쿠버네티스 API 규칙을 사용하려고 한다. +* 제어된 리소스의 콜렉션 또는 다른 리소스의 요약에 대한 오브젝트가 되기를 원한다. + +## 커스텀 리소스 추가 + +쿠버네티스는 클러스터에 커스텀 리소스를 추가하는 두 가지 방법을 제공한다. + +- CRD는 간단하며 프로그래밍 없이 만들 수 있다. +- [API 애그리게이션](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)에는 프로그래밍이 필요하지만, 데이터 저장 방법 및 API 버전 간 변환과 같은 API 동작을 보다 강력하게 제어할 수 있다. + +쿠버네티스는 다양한 사용자의 요구를 충족시키기 위해 이 두 가지 옵션을 제공하므로 사용의 용이성이나 유연성이 저하되지 않는다. + +애그리게이트 API는 기본 API 서버 뒤에 있는 하위 API 서버이며 프록시 역할을 한다. 이 배치를 [API 애그리게이션](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)(AA)이라고 한다. 사용자에게는 쿠버네티스 API가 확장된 것과 같다. + +CRD를 사용하면 다른 API 서버를 추가하지 않고도 새로운 타입의 리소스를 생성할 수 있다. CRD를 사용하기 위해 API 애그리게이션을 이해할 필요는 없다. + +설치 방법에 관계없이 새 리소스는 커스텀 리소스라고 하며 빌트인 쿠버네티스 리소스(파드 등)와 구별된다. + +## 커스텀리소스데피니션 + +[커스텀리소스데피니션](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) +API 리소스를 사용하면 커스텀 리소스를 정의할 수 있다. +CRD 오브젝트를 정의하면 지정한 이름과 스키마를 사용하여 새 커스텀 리소스가 만들어진다. +쿠버네티스 API는 커스텀 리소스의 스토리지를 제공하고 처리한다. +CRD 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names#dns-서브도메인-이름)이어야 한다. + +따라서 커스텀 리소스를 처리하기 위해 자신의 API 서버를 작성할 수 없지만 +구현의 일반적인 특성으로 인해 +[API 서버 애그리게이션](#api-서버-애그리게이션)보다 유연성이 떨어진다. 
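
감을 잡기 위한 최소한의 커스텀리소스데피니션 예시는 다음과 같다. 그룹, 종류(kind), 필드 이름은 모두 설명을 위해 임의로 가정한 값이다.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # 이름은 <복수형 이름>.<그룹> 형식의 유효한 DNS 서브도메인 이름이어야 한다.
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      # 커스텀 리소스의 스키마를 OpenAPI v3 형식으로 지정한다.
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              replicas:
                type: integer
```

이 CRD를 적용하고 나면 `kubectl get crontabs` 와 같이 빌트인 리소스와 동일한 방식으로 새 리소스를 다룰 수 있다.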
+ +새 커스텀 리소스를 등록하고 새 리소스 타입의 인스턴스에 대해 작업하고 +컨트롤러를 사용하여 이벤트를 처리하는 방법에 대한 예제는 +[커스텀 컨트롤러 예제](https://github.com/kubernetes/sample-controller)를 참고한다. + +## API 서버 애그리게이션 + +일반적으로 쿠버네티스 API의 각 리소스에는 REST 요청을 처리하고 오브젝트의 퍼시스턴트 스토리지를 관리하는 코드가 필요하다. 주요 쿠버네티스 API 서버는 *파드* 및 *서비스* 와 같은 빌트인 리소스를 처리하고, 일반적으로 [CRD](#커스텀리소스데피니션)를 통해 커스텀 리소스를 처리할 수 ​​있다. + +[애그리게이션 레이어](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)를 사용하면 자체 독립형 API 서버를 +작성하고 배포하여 커스텀 리소스에 대한 특수한 구현을 제공할 수 있다. +기본 API 서버는 처리하는 커스텀 리소스에 대한 요청을 사용자에게 위임하여 +모든 클라이언트가 사용할 수 있게 한다. + +## 커스텀 리소스를 추가할 방법 선택 + +CRD는 사용하기가 더 쉽다. 애그리게이트 API가 더 유연하다. 자신의 요구에 가장 잘 맞는 방법을 선택하자. + +일반적으로 CRD는 다음과 같은 경우에 적합하다. + +* 필드가 몇 개 되지 않는다 +* 회사 내에서 또는 소규모 오픈소스 프로젝트의 일부인(상용 제품이 아닌) 리소스를 사용하고 있다. + +### 사용 편의성 비교 + +CRD는 애그리게이트 API보다 생성하기가 쉽다. + +| CRD | 애그리게이트 API | +| --------------------------- | -------------- | +| 프로그래밍이 필요하지 않다. 사용자는 CRD 컨트롤러에 대한 모든 언어를 선택할 수 있다. | Go로 프로그래밍하고 바이너리와 이미지를 빌드해야 한다. | +| 실행할 추가 서비스가 없다. CR은 API 서버에서 처리한다. | 추가 서비스를 생성하면 실패할 수 있다. | +| CRD가 생성된 후에는 지속적인 지원이 없다. 모든 버그 픽스는 일반적인 쿠버네티스 마스터 업그레이드의 일부로 선택된다. | 업스트림에서 버그 픽스를 주기적으로 선택하고 애그리게이트 API 서버를 다시 빌드하고 업데이트해야 할 수 있다. | +| 여러 버전의 API를 처리할 필요가 없다. 예를 들어, 이 리소스에 대한 클라이언트를 제어할 때 API와 동기화하여 업그레이드할 수 있다. | 인터넷에 공유할 익스텐션을 개발할 때와 같이 여러 버전의 API를 처리해야 한다. | + +### 고급 기능 및 유연성 + +애그리게이트 API는 보다 고급 API 기능과 스토리지 레이어와 같은 다른 기능의 사용자 정의를 제공한다. + +| 기능 | 설명 | CRD | 애그리게이트 API | +| ------- | ----------- | ---- | -------------- | +| 유효성 검사 | 사용자가 오류를 방지하고 클라이언트와 독립적으로 API를 발전시킬 수 있도록 도와준다. 이러한 기능은 동시에 많은 클라이언트를 모두 업데이트할 수 없는 경우에 아주 유용하다. | 예. [OpenAPI v3.0 유효성 검사](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation)를 사용하여 CRD에서 대부분의 유효성 검사를 지정할 수 있다. [웹훅 유효성 검사](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook-alpha-in-1-8-beta-in-1-9)를 추가해서 다른 모든 유효성 검사를 지원한다. | 예, 임의의 유효성 검사를 지원한다. | +| 기본 설정 | 위를 참고하자. | 예, [OpenAPI v3.0 유효성 검사](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#defaulting)의 `default` 키워드(1.17에서 GA) 또는 [웹훅 변형(mutating)](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)(이전 오브젝트의 etcd에서 읽을 때는 실행되지 않음)을 통해 지원한다. | 예 | +| 다중 버전 관리 | 두 가지 API 버전을 통해 동일한 오브젝트를 제공할 수 있다. 필드 이름 바꾸기와 같은 API 변경을 쉽게 할 수 있다. 클라이언트 버전을 제어하는 ​​경우는 덜 중요하다. | [예](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning) | 예 | +| 사용자 정의 스토리지 | 다른 성능 모드(예를 들어, 키-값 저장소 대신 시계열 데이터베이스)나 보안에 대한 격리(예를 들어, 암호화된 시크릿이나 다른 암호화) 기능을 가진 스토리지가 필요한 경우 | 아니오 | 예 | +| 사용자 정의 비즈니스 로직 | 오브젝트를 생성, 읽기, 업데이트 또는 삭제를 할 때 임의의 점검 또는 조치를 수행한다. | 예, [웹훅](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)을 사용한다. | 예 | +| 서브리소스 크기 조정 | HorizontalPodAutoscaler 및 PodDisruptionBudget과 같은 시스템이 새로운 리소스와 상호 작용할 수 있다. | [예](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#scale-subresource) | 예 | +| 서브리소스 상태 | 사용자가 스펙 섹션을 작성하고 컨트롤러가 상태 섹션을 작성하는 세분화된 접근 제어를 허용한다. 커스텀 리소스 데이터 변형 시 오브젝트 생성을 증가시킨다(리소스에서 별도의 스펙과 상태 섹션 필요). | [예](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#status-subresource) | 예 | +| 기타 서브리소스 | "logs" 또는 "exec"과 같은 CRUD 이외의 작업을 추가한다. | 아니오 | 예 | +| strategic-merge-patch | 새로운 엔드포인트는 `Content-Type: application/strategic-merge-patch+json` 형식의 PATCH를 지원한다. 로컬 및 서버 양쪽에서 수정할 수도 있는 오브젝트를 업데이트하는 데 유용하다. 자세한 내용은 ["kubectl 패치를 사용한 API 오브젝트 업데이트"](/docs/tasks/run-application/update-api-object-kubectl-patch/)를 참고한다. 
| 아니오 | 예 | +| 프로토콜 버퍼 | 새로운 리소스는 프로토콜 버퍼를 사용하려는 클라이언트를 지원한다. | 아니오 | 예 | +| OpenAPI 스키마 | 서버에서 동적으로 가져올 수 있는 타입에 대한 OpenAPI(스웨거(swagger)) 스키마가 있는가? 허용된 필드만 설정하여 맞춤법이 틀린 필드 이름으로부터 사용자를 보호하는가? 타입이 적용되는가(즉, `string` 필드에 `int`를 넣지 않는가?) | 예, [OpenAPI v3.0 유효성 검사](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation)를 기반으로 하는 스키마(1.16에서 GA) | 예 | + +### 일반적인 기능 + +CRD 또는 AA를 통해 커스텀 리소스를 생성하면 쿠버네티스 플랫폼 외부에서 구현하는 것과 비교하여 API에 대한 많은 기능이 제공된다. + +| 기능 | 설명 | +| ------- | ------------ | +| CRUD | 새로운 엔드포인트는 HTTP 및 `kubectl`을 통해 CRUD 기본 작업을 지원한다. | +| 감시 | 새로운 엔드포인트는 HTTP를 통해 쿠버네티스 감시 작업을 지원한다. | +| 디스커버리 | kubectl 및 대시보드와 같은 클라이언트는 리소스에 대해 목록, 표시 및 필드 수정 작업을 자동으로 제공한다. | +| json-patch | 새로운 엔드포인트는 `Content-Type: application/json-patch+json` 형식의 PATCH를 지원한다. | +| merge-patch | 새로운 엔드포인트는 `Content-Type: application/merge-patch+json` 형식의 PATCH를 지원한다. | +| HTTPS | 새로운 엔드포인트는 HTTPS를 사용한다. | +| 빌트인 인증 | 익스텐션에 대한 접근은 인증을 위해 기본 API 서버(애그리게이션 레이어)를 사용한다. | +| 빌트인 권한 부여 | 익스텐션에 접근하면 기본 API 서버(예: RBAC)에서 사용하는 권한을 재사용할 수 있다. | +| Finalizer | 외부 정리가 발생할 때까지 익스텐션 리소스의 삭제를 차단한다. | +| 어드미션 웹훅 | 생성/업데이트/삭제 작업 중에 기본값을 설정하고 익스텐션 리소스의 유효성 검사를 한다. | +| UI/CLI 디스플레이 | Kubectl, 대시보드는 익스텐션 리소스를 표시할 수 있다. | +| 설정하지 않음(unset)과 비어있음(empty) | 클라이언트는 값이 없는 필드 중에서 설정되지 않은 필드를 구별할 수 있다. | +| 클라이언트 라이브러리 생성 | 쿠버네티스는 일반 클라이언트 라이브러리와 타입별 클라이언트 라이브러리를 생성하는 도구를 제공한다. | +| 레이블 및 어노테이션 | 공통 메타데이터는 핵심 및 커스텀 리소스를 수정하는 방법을 알고 있는 도구이다. | + +## 커스텀 리소스 설치 준비 + +클러스터에 커스텀 리소스를 추가하기 전에 알아야 할 몇 가지 사항이 있다. + +### 써드파티 코드 및 새로운 장애 포인트 + +CRD를 생성해도 새로운 장애 포인트(예를 들어, API 서버에서 장애를 유발하는 써드파티 코드가 실행됨)가 자동으로 추가되지는 않지만, 패키지(예: 차트(Charts)) 또는 기타 설치 번들에는 CRD 및 새로운 커스텀 리소스에 대한 비즈니스 로직을 구현하는 써드파티 코드의 디플로이먼트가 포함되는 경우가 종종 있다. + +애그리게이트 API 서버를 설치하려면 항상 새 디플로이먼트를 실행해야 한다. + +### 스토리지 + +커스텀 리소스는 컨피그맵과 동일한 방식으로 스토리지 공간을 사용한다. 너무 많은 커스텀 리소스를 생성하면 API 서버의 스토리지 공간이 과부하될 수 있다. + +애그리게이트 API 서버는 기본 API 서버와 동일한 스토리지를 사용할 수 있으며 이 경우 동일한 경고가 적용된다. + +### 인증, 권한 부여 및 감사 + +CRD는 항상 API 서버의 빌트인 리소스와 동일한 인증, 권한 부여 및 감사 로깅을 사용한다. + +권한 부여에 RBAC를 사용하는 경우 대부분의 RBAC 역할은 새로운 리소스에 대한 접근 권한을 부여하지 않는다(클러스터 관리자 역할 또는 와일드 카드 규칙으로 생성된 역할 제외). 새로운 리소스에 대한 접근 권한을 명시적으로 부여해야 한다. CRD 및 애그리게이트 API는 종종 추가하는 타입에 대한 새로운 역할 정의와 함께 제공된다. + +애그리게이트 API 서버는 기본 API 서버와 동일한 인증, 권한 부여 및 감사를 사용하거나 사용하지 않을 수 있다. + +## 커스텀 리소스에 접근 + +쿠버네티스 [클라이언트 라이브러리](/docs/reference/using-api/client-libraries/)를 사용하여 커스텀 리소스에 접근할 수 있다. 모든 클라이언트 라이브러리가 커스텀 리소스를 지원하는 것은 아니다. Go와 python 클라이언트 라이브러리가 지원한다. + +커스텀 리소스를 추가하면 다음을 사용하여 접근할 수 있다. + +- kubectl +- 쿠버네티스 동적 클라이언트 +- 작성한 REST 클라이언트 +- [쿠버네티스 클라이언트 생성 도구](https://github.com/kubernetes/code-generator)를 사용하여 생성된 클라이언트(하나를 생성하는 것은 고급 기능이지만, 일부 프로젝트는 CRD 또는 AA와 함께 클라이언트를 제공할 수 있다). + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [애그리게이션 레이어(aggregation layer)로 쿠버네티스 API 확장](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)하는 방법에 대해 배우기. + +* [커스텀리소스데피니션으로 쿠버네티스 API 확장](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/)하는 방법에 대해 배우기. 
+ +{{% /capture %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/_index.md b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/_index.md new file mode 100644 index 0000000000000..a895ea8bebea7 --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/_index.md @@ -0,0 +1,4 @@ +--- +title: 컴퓨트, 스토리지 및 네트워킹 익스텐션 +weight: 30 +--- diff --git a/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md new file mode 100644 index 0000000000000..0912bcdcde205 --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -0,0 +1,235 @@ +--- +title: 장치 플러그인 +description: GPU, NIC, FPGA, InfiniBand 및 공급 업체별 설정이 필요한 유사한 리소스를 위한 플러그인을 구현하는데 쿠버네티스 장치 플러그인 프레임워크를 사용한다. +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} +{{< feature-state for_k8s_version="v1.10" state="beta" >}} + +쿠버네티스는 시스템 하드웨어 리소스를 {{< glossary_tooltip term_id="kubelet" >}}에 알리는 데 사용할 수 있는 +[장치 플러그인 프레임워크](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md)를 +제공한다. + +공급 업체는 쿠버네티스 자체의 코드를 커스터마이징하는 대신, 수동 또는 +{{< glossary_tooltip text="데몬셋" term_id="daemonset" >}}으로 배포하는 장치 플러그인을 구현할 수 있다. +대상이 되는 장치에는 GPU, 고성능 NIC, FPGA, InfiniBand 어댑터 +및 공급 업체별 초기화 및 설정이 필요할 수 있는 기타 유사한 컴퓨팅 리소스가 +포함된다. + +{{% /capture %}} + +{{% capture body %}} + +## 장치 플러그인 등록 + +kubelet은 `Registration` gRPC 서비스를 노출시킨다. + +```gRPC +service Registration { + rpc Register(RegisterRequest) returns (Empty) {} +} +``` + +장치 플러그인은 이 gRPC 서비스를 통해 kubelet에 자체 등록할 수 있다. +등록하는 동안, 장치 플러그인은 다음을 보내야 한다. + + * 유닉스 소켓의 이름. + * 빌드된 장치 플러그인 API 버전. + * 알리려는 `ResourceName`. 여기서 `ResourceName` 은 + [확장된 리소스 네이밍 체계](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)를 + `vendor-domain/resourcetype` 의 형식으로 따라야 한다. + (예를 들어, NVIDIA GPU는 `nvidia.com/gpu` 로 알려진다.) + +성공적으로 등록하고 나면, 장치 플러그인은 kubelet이 관리하는 +장치 목록을 전송한 다음, kubelet은 kubelet 노드 상태 업데이트의 일부로 +해당 자원을 API 서버에 알리는 역할을 한다. +예를 들어, 장치 플러그인이 kubelet에 `hardware-vendor.example/foo` 를 등록하고 +노드에 두 개의 정상 장치를 보고하고 나면, 노드 상태가 업데이트되어 +노드에 2개의 “Foo” 장치가 설치되어 사용 가능함을 알릴 수 있다. + +그러고 나면, 사용자가 +[컨테이너](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) 명세에 있는 장치를 요청할 수 있다. +다만, 다른 종류의 리소스를 요청하는 것이므로 다음과 같은 제한이 있다. + +* 확장된 리소스는 정수(integer) 형태만 지원되며 오버커밋(overcommit) 될 수 없다. +* 컨테이너간에 장치를 공유할 수 없다. + +쿠버네티스 클러스터가 특정 노드에서 `hardware-vendor.example/foo` 리소스를 알리는 장치 플러그인을 실행한다고 +가정해 보자. 다음은 데모 워크로드를 실행하기 위해 이 리소스를 요청하는 파드의 예이다. + +```yaml +--- +apiVersion: v1 +kind: Pod +metadata: + name: demo-pod +spec: + containers: + - name: demo-container-1 + image: k8s.gcr.io/pause:2.0 + resources: + limits: + hardware-vendor.example/foo: 2 +# +# 이 파드는 2개의 hardware-vendor.example/foo 장치가 필요하며 +# 해당 요구를 충족할 수 있는 노드에만 +# 예약될 수 있다. +# +# 노드에 2개 이상의 사용 가능한 장치가 있는 경우 +# 나머지는 다른 파드에서 사용할 수 있다. +``` + +## 장치 플러그인 구현 + +장치 플러그인의 일반적인 워크플로우에는 다음 단계가 포함된다. + +* 초기화. 이 단계에서, 장치 플러그인은 공급 업체별 초기화 및 설정을 수행하여 + 장치가 준비 상태에 있는지 확인한다. + +* 플러그인은 다음의 인터페이스를 구현하는 호스트 경로 `/var/lib/kubelet/device-plugins/` + 아래에 유닉스 소켓과 함께 gRPC 서비스를 시작한다. + + ```gRPC + service DevicePlugin { + // ListAndWatch는 장치 목록 스트림을 반환한다. + // 장치 상태가 변경되거나 장치가 사라질 때마다, ListAndWatch는 + // 새 목록을 반환한다. 
+ rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} + + // 컨테이너를 생성하는 동안 Allocate가 호출되어 장치 + // 플러그인이 장치별 작업을 실행하고 Kubelet에 장치를 + // 컨테이너에서 사용할 수 있도록 하는 단계를 지시할 수 있다. + rpc Allocate(AllocateRequest) returns (AllocateResponse) {} + } + ``` + +* 플러그인은 호스트 경로 `/var/lib/kubelet/device-plugins/kubelet.sock` 에서 + 유닉스 소켓을 통해 kubelet에 직접 등록한다. + +* 성공적으로 등록하고 나면, 장치 플러그인은 서빙(serving) 모드에서 실행되며, 그 동안 플러그인은 장치 상태를 +모니터링하고 장치 상태 변경 시 kubelet에 다시 보고한다. +또한 gRPC 요청 `Allocate` 를 담당한다. `Allocate` 하는 동안, 장치 플러그인은 +GPU 정리 또는 QRNG 초기화와 같은 장치별 준비를 수행할 수 있다. +작업이 성공하면, 장치 플러그인은 할당된 장치에 접근하기 위한 컨테이너 런타임 구성이 포함된 +`AllocateResponse` 를 반환한다. kubelet은 이 정보를 +컨테이너 런타임에 전달한다. + +### kubelet 재시작 처리 + +장치 플러그인은 일반적으로 kubelet의 재시작을 감지하고 새로운 +kubelet 인스턴스에 자신을 다시 등록할 것으로 기대된다. 현재의 구현에서, 새 kubelet 인스턴스는 시작될 때 +`/var/lib/kubelet/device-plugins` 아래에 있는 모든 기존의 유닉스 소켓을 삭제한다. 장치 플러그인은 유닉스 소켓의 +삭제를 모니터링하고 이러한 이벤트가 발생하면 다시 자신을 등록할 수 있다. + +## 장치 플러그인 배포 + +장치 플러그인을 데몬셋, 노드 운영 체제의 패키지 +또는 수동으로 배포할 수 있다. + +표준 디렉터리 `/var/lib/kubelet/device-plugins` 에는 특권을 가진 접근이 필요하므로, +장치 플러그인은 특권을 가진 보안 컨텍스트에서 실행해야 한다. +장치 플러그인을 데몬셋으로 배포하는 경우, 플러그인의 +[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)에서 +`/var/lib/kubelet/device-plugins` 를 +{{< glossary_tooltip text="볼륨" term_id="volume" >}}으로 마운트해야 한다. + +데몬셋 접근 방식을 선택하면 쿠버네티스를 사용하여 장치 플러그인의 파드를 노드에 배치하고, +장애 후 데몬 파드를 다시 시작하고, 업그레이드를 자동화할 수 있다. + +## API 호환성 + +쿠버네티스 장치 플러그인 지원은 베타 버전이다. 호환되지 않는 방식으로 안정화 전에 API가 +변경될 수 있다. 프로젝트로서, 쿠버네티스는 장치 플러그인 개발자에게 다음 사항을 권장한다. + +* 향후 릴리스에서 변경 사항을 확인하자. +* 이전/이후 버전과의 호환성을 위해 여러 버전의 장치 플러그인 API를 지원한다. + +최신 장치 플러그인 API 버전의 쿠버네티스 릴리스로 업그레이드해야 하는 노드에서 DevicePlugins 기능을 활성화하고 +장치 플러그인을 실행하는 경우, 이 노드를 업그레이드하기 전에 +두 버전을 모두 지원하도록 장치 플러그인을 업그레이드한다. 이 방법을 사용하면 +업그레이드 중에 장치 할당이 지속적으로 작동한다. + +## 장치 플러그인 리소스 모니터링 + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +장치 플러그인에서 제공하는 리소스를 모니터링하려면, 모니터링 에이전트가 +노드에서 사용 중인 장치 셋을 검색하고 메트릭과 연관될 컨테이너를 설명하는 +메타데이터를 얻을 수 있어야 한다. 장치 모니터링 에이전트에 의해 노출된 +[프로메테우스(Prometheus)](https://prometheus.io/) 지표는 +[쿠버네티스 Instrumentation 가이드라인](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/instrumentation.md)을 따라 +`pod`, `namespace` 및 `container` 프로메테우스 레이블을 사용하여 컨테이너를 식별해야 한다. + +kubelet은 gRPC 서비스를 제공하여 사용 중인 장치를 검색하고, 이러한 장치에 대한 메타데이터를 +제공한다. + +```gRPC +// PodResourcesLister는 kubelet에서 제공하는 서비스로, 노드의 포드 및 컨테이너가 +// 사용한 노드 리소스에 대한 정보를 제공한다. +service PodResourcesLister { + rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {} +} +``` + +gRPC 서비스는 `/var/lib/kubelet/pod-resources/kubelet.sock` 의 유닉스 소켓을 통해 제공된다. +장치 플러그인 리소스에 대한 모니터링 에이전트는 데몬 또는 데몬셋으로 배포할 수 있다. +표준 디렉터리 `/var/lib/kubelet/pod-resources` 에는 특권을 가진 접근이 필요하므로, 모니터링 +에이전트는 특권을 가진 ​​보안 컨텍스트에서 실행해야 한다. 장치 모니터링 에이전트가 +데몬셋으로 실행 중인 경우, 플러그인의 [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)에서 +`/var/lib/kubelet/pod-resources` 를 +{{< glossary_tooltip text="볼륨" term_id="volume" >}}으로 마운트해야 한다. + +"PodResources 서비스"를 지원하려면 `KubeletPodResources` [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)를 활성화해야 한다. 쿠버네티스 1.15부터 기본적으로 활성화되어 있다. + +## 토폴로지 관리자와 장치 플러그인 통합 + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + +토폴로지 관리자는 Kubelet 컴포넌트로, 리소스를 토폴로지 정렬 방식으로 조정할 수 있다. 이를 위해, 장치 플러그인 API가 `TopologyInfo` 구조체를 포함하도록 확장되었다. 
+ + +```gRPC +message TopologyInfo { + repeated NUMANode nodes = 1; +} + +message NUMANode { + int64 ID = 1; +} +``` +토폴로지 관리자를 활용하려는 장치 플러그인은 장치 ID 및 장치의 정상 상태와 함께 장치 등록의 일부로 채워진 TopologyInfo 구조체를 다시 보낼 수 있다. 그런 다음 장치 관리자는 이 정보를 사용하여 토폴로지 관리자와 상의하고 리소스 할당 결정을 내린다. + +`TopologyInfo` 는 `nil`(기본값) 또는 NUMA 노드 목록인 `nodes` 필드를 지원한다. 이를 통해 NUMA 노드에 걸쳐있을 수 있는 장치 플러그인을 게시할 수 있다. + +장치 플러그인으로 장치에 대해 채워진 `TopologyInfo` 구조체의 예는 다음과 같다. + +``` +pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}} +``` + +## 장치 플러그인 예시 {#examples} + +다음은 장치 플러그인 구현의 예이다. + +* [AMD GPU 장치 플러그인](https://github.com/RadeonOpenCompute/k8s-device-plugin) +* 인텔 GPU, FPGA 및 QuickAssist 장치용 [인텔 장치 플러그인](https://github.com/intel/intel-device-plugins-for-kubernetes) +* 하드웨어 지원 가상화를 위한 [KubeVirt 장치 플러그인](https://github.com/kubevirt/kubernetes-device-plugins) +* [NVIDIA GPU 장치 플러그인](https://github.com/NVIDIA/k8s-device-plugin) + * GPU를 지원하는 Docker 컨테이너를 실행할 수 있는 [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) 2.0이 필요하다. +* [컨테이너에 최적화된 OS를 위한 NVIDIA GPU 장치 플러그인](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu) +* [RDMA 장치 플러그인](https://github.com/hustcat/k8s-rdma-device-plugin) +* [Solarflare 장치 플러그인](https://github.com/vikaschoudhary16/sfc-device-plugin) +* [SR-IOV 네트워크 장치 플러그인](https://github.com/intel/sriov-network-device-plugin) +* Xilinx FPGA 장치용 [Xilinx FPGA 장치 플러그인](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin/trunk) + +{{% /capture %}} +{{% capture whatsnext %}} + +* 장치 플러그인을 사용한 [GPU 리소스 스케줄링](/docs/tasks/manage-gpus/scheduling-gpus/)에 대해 알아보기 +* 노드에서의 [확장 리소스 알리기](/docs/tasks/administer-cluster/extended-resource-node/)에 대해 배우기 +* 쿠버네티스에서 [TLS 수신에 하드웨어 가속](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) 사용에 대해 읽기 +* [토폴로지 관리자](/docs/tasks/adminster-cluster/topology-manager/)에 대해 알아보기 + +{{% /capture %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md new file mode 100644 index 0000000000000..137195e7e7d11 --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -0,0 +1,167 @@ +--- +title: 네트워크 플러그인 +content_template: templates/concept +weight: 10 +--- + + +{{% capture overview %}} + +{{< feature-state state="alpha" >}} +{{< caution >}}알파 기능은 빨리 변경될 수 있다. {{< /caution >}} + +쿠버네티스의 네트워크 플러그인은 몇 가지 종류가 있다. + +* CNI 플러그인: 상호 운용성을 위해 설계된 appc/CNI 명세를 준수한다. +* Kubenet 플러그인: `bridge` 와 `host-local` CNI 플러그인을 사용하여 기본 `cbr0` 구현한다. + +{{% /capture %}} + +{{% capture body %}} + +## 설치 + +kubelet에는 단일 기본 네트워크 플러그인과 전체 클러스터에 공통된 기본 네트워크가 있다. 플러그인은 시작할 때 플러그인을 검색하고, 찾은 것을 기억하며, 파드 라이프사이클에서 적절한 시간에 선택한 플러그인을 실행한다(rkt는 자체 CNI 플러그인을 관리하므로 Docker에만 해당됨). 플러그인 사용 시 명심해야 할 두 가지 Kubelet 커맨드라인 파라미터가 있다. + +* `cni-bin-dir`: Kubelet은 시작할 때 플러그인에 대해 이 디렉터리를 검사한다. +* `network-plugin`: `cni-bin-dir` 에서 사용할 네트워크 플러그인. 플러그인 디렉터리에서 검색한 플러그인이 보고된 이름과 일치해야 한다. CNI 플러그인의 경우, 이는 단순히 "cni"이다. + +## 네트워크 플러그인 요구 사항 + +파드 네트워킹을 구성하고 정리하기 위해 [`NetworkPlugin` 인터페이스](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go)를 제공하는 것 외에도, 플러그인은 kube-proxy에 대한 특정 지원이 필요할 수 있다. 
iptables 프록시는 분명히 iptables에 의존하며, 플러그인은 컨테이너 트래픽이 iptables에 사용 가능하도록 해야 한다. 예를 들어, 플러그인이 컨테이너를 리눅스 브릿지에 연결하는 경우, 플러그인은 `net/bridge/bridge-nf-call-iptables` sysctl을 `1` 로 설정하여 iptables 프록시가 올바르게 작동하는지 확인해야 한다. 플러그인이 리눅스 브리지를 사용하지 않는 경우(그러나 Open vSwitch나 다른 메커니즘과 같은 기능을 사용함) 컨테이너 트래픽이 프록시에 대해 적절하게 라우팅되도록 해야 한다. + +kubelet 네트워크 플러그인이 지정되지 않은 경우, 기본적으로 `noop` 플러그인이 사용되며, `net/bridge/bridge-nf-call-iptables=1` 을 설정하여 간단한 구성(브릿지가 있는 Docker 등)이 iptables 프록시에서 올바르게 작동하도록 한다. + +### CNI + +CNI 플러그인은 Kubelet에 `--network-plugin=cni` 커맨드라인 옵션을 전달하여 선택된다. Kubelet은 `--cni-conf-dir`(기본값은 `/etc/cni/net.d`)에서 파일을 읽고 해당 파일의 CNI 구성을 사용하여 각 파드의 네트워크를 설정한다. CNI 구성 파일은 [CNI 명세](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration)와 일치해야 하며, 구성에서 참조하는 필수 CNI 플러그인은 `--cni-bin-dir`(기본값은 `/opt/cni/bin`)에 있어야 한다. + +디렉터리에 여러 CNI 구성 파일이 있는 경우, kubelet은 이름별 알파벳 순으로 구성 파일을 사용한다. + +구성 파일에 지정된 CNI 플러그인 외에도, 쿠버네티스는 최소 0.2.0 버전의 표준 CNI [`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) 플러그인이 필요하다. + +#### hostPort 지원 + +CNI 네트워킹 플러그인은 `hostPort` 를 지원한다. CNI 플러그인 팀이 제공하는 공식 [포트맵(portmap)](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap) +플러그인을 사용하거나 portMapping 기능이 있는 자체 플러그인을 사용할 수 있다. + +`hostPort` 지원을 사용하려면, `cni-conf-dir` 에 `portMappings capability` 를 지정해야 한다. +예를 들면 다음과 같다. + +```json +{ + "name": "k8s-pod-network", + "cniVersion": "0.3.0", + "plugins": [ + { + "type": "calico", + "log_level": "info", + "datastore_type": "kubernetes", + "nodename": "127.0.0.1", + "ipam": { + "type": "host-local", + "subnet": "usePodCidr" + }, + "policy": { + "type": "k8s" + }, + "kubernetes": { + "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" + } + }, + { + "type": "portmap", + "capabilities": {"portMappings": true} + } + ] +} +``` + +#### 트래픽 셰이핑 지원 + +CNI 네트워킹 플러그인은 파드 수신 및 송신 트래픽 셰이핑도 지원한다. CNI 플러그인 팀에서 제공하는 공식 [대역폭(bandwidth)](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth) +플러그인을 사용하거나 대역폭 제어 기능이 있는 자체 플러그인을 사용할 수 있다. + +트래픽 셰이핑 지원을 활성화하려면, CNI 구성 파일 (기본값 `/etc/cni/net.d`)에 `bandwidth` 플러그인을 +추가해야 한다. + +```json +{ + "name": "k8s-pod-network", + "cniVersion": "0.3.0", + "plugins": [ + { + "type": "calico", + "log_level": "info", + "datastore_type": "kubernetes", + "nodename": "127.0.0.1", + "ipam": { + "type": "host-local", + "subnet": "usePodCidr" + }, + "policy": { + "type": "k8s" + }, + "kubernetes": { + "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" + } + }, + { + "type": "bandwidth", + "capabilities": {"bandwidth": true} + } + ] +} +``` + +이제 파드에 `kubernetes.io/ingress-bandwidth` 와 `kubernetes.io/egress-bandwidth` 어노테이션을 추가할 수 있다. +예를 들면 다음과 같다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + annotations: + kubernetes.io/ingress-bandwidth: 1M + kubernetes.io/egress-bandwidth: 1M +... +``` + +### kubenet + +Kubenet은 리눅스에서만 사용할 수 있는 매우 기본적이고, 간단한 네트워크 플러그인이다. 그 자체로는, 크로스-노드 네트워킹 또는 네트워크 정책과 같은 고급 기능을 구현하지 않는다. 일반적으로 노드 간, 또는 단일 노드 환경에서 통신을 위한 라우팅 규칙을 설정하는 클라우드 제공자와 함께 사용된다. + +Kubenet은 `cbr0` 라는 리눅스 브리지를 만들고 각 쌍의 호스트 끝이 `cbr0` 에 연결된 각 파드에 대한 veth 쌍을 만든다. 쌍의 파드 끝에는 구성 또는 컨트롤러 관리자를 통해 노드에 할당된 범위 내에서 할당된 IP 주소가 지정된다. `cbr0` 에는 호스트에서 활성화된 일반 인터페이스의 가장 작은 MTU와 일치하는 MTU가 지정된다. + +플러그인에는 몇 가지 사항이 필요하다. + +* 표준 CNI `bridge`, `lo` 및 `host-local` 플러그인은 최소 0.2.0 버전이 필요하다. Kubenet은 먼저 `/opt/cni/bin` 에서 검색한다. 추가 검색 경로를 제공하려면 `cni-bin-dir` 을 지정한다. 처음 검색된 디렉터리가 적용된다. +* 플러그인을 활성화하려면 Kubelet을 `--network-plugin=kubenet` 인수와 함께 실행해야 한다. 
+* Kubelet은 `--non-masquerade-cidr=` 인수와 함께 실행하여 이 범위 밖 IP로의 트래픽이 IP 마스커레이드(masquerade)를 사용하도록 해야 한다. +* `--pod-cidr` kubelet 커맨드라인 옵션 또는 `--allocate-node-cidrs=true --cluster-cidr=` 컨트롤러 관리자 커맨드라인 옵션을 통해 노드에 IP 서브넷을 할당해야 한다. + +### MTU 사용자 정의 (kubenet 사용) + +최상의 네트워킹 성능을 얻으려면 MTU를 항상 올바르게 구성해야 한다. 네트워크 플러그인은 일반적으로 합리적인 MTU를 +유추하려고 시도하지만, 때로는 로직에 따라 최적의 MTU가 지정되지 않는다. 예를 들어, +Docker 브리지나 다른 인터페이스에 작은 MTU가 지정되어 있으면, kubenet은 현재 해당 MTU를 선택한다. 또는 +IPSEC 캡슐화를 사용하는 경우, MTU를 줄여야 하며, 이 계산은 대부분의 +네트워크 플러그인에서 범위를 벗어난다. + +필요한 경우, `network-plugin-mtu` kubelet 옵션을 사용하여 MTU를 명시 적으로 지정할 수 있다. 예를 들어, +AWS에서 `eth0` MTU는 일반적으로 9001이므로, `--network-plugin-mtu=9001` 을 지정할 수 있다. IPSEC를 사용하는 경우 +캡슐화 오버헤드를 허용하도록 `--network-plugin-mtu=8873` 과 같이 IPSEC을 줄일 수 있다. + +이 옵션은 네트워크 플러그인에 제공된다. 현재 **kubenet만 `network-plugin-mtu` 를 지원한다**. + +## 용법 요약 + +* `--network-plugin=cni` 는 `--cni-bin-dir`(기본값 `/opt/cni/bin`)에 있는 실제 CNI 플러그인 바이너리와 `--cni-conf-dir`(기본값 `/etc/cni/net.d`)에 있는 CNI 플러그인 구성과 함께 `cni` 네트워크 플러그인을 사용하도록 지정한다. +* `--network-plugin=kubenet` 은 `/opt/cni/bin` 또는 `cni-bin-dir` 에 있는 CNI `bridge` 및 `host-local` 플러그인과 함께 kubenet 네트워크 플러그인을 사용하도록 지정한다. +* 현재 kubenet 네트워크 플러그인에서만 사용하는 `--network-plugin-mtu=9001` 은 사용할 MTU를 지정한다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +{{% /capture %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md new file mode 100644 index 0000000000000..408ff70c222a3 --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md @@ -0,0 +1,205 @@ +--- +title: 쿠버네티스 클러스터 확장 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +쿠버네티스는 매우 유연하게 구성할 수 있고 확장 가능하다. 결과적으로 +쿠버네티스 프로젝트를 포크하거나 코드에 패치를 제출할 필요가 +거의 없다. + +이 가이드는 쿠버네티스 클러스터를 사용자 정의하기 위한 옵션을 설명한다. +쿠버네티스 클러스터를 업무 환경의 요구에 맞게 +조정하는 방법을 이해하려는 {{< glossary_tooltip text="클러스터 운영자" term_id="cluster-operator" >}}를 대상으로 한다. +잠재적인 {{< glossary_tooltip text="플랫폼 개발자" term_id="platform-developer" >}} 또는 쿠버네티스 프로젝트 {{< glossary_tooltip text="컨트리뷰터" term_id="contributor" >}}인 개발자에게도 +어떤 익스텐션 포인트와 패턴이 있는지, +그리고 그것들의 트레이드오프와 제약에 대한 소개 자료로 유용할 것이다. + +{{% /capture %}} + + +{{% capture body %}} + +## 개요 + +사용자 정의 방식은 크게 플래그, 로컬 구성 파일 또는 API 리소스 변경만 포함하는 *구성* 과 추가 프로그램이나 서비스 실행과 관련된 *익스텐션* 으로 나눌 수 있다. 이 문서는 주로 익스텐션에 관한 것이다. + +## 구성 + +*구성 파일* 및 *플래그* 는 온라인 문서의 레퍼런스 섹션에 각 바이너리 별로 문서화되어 있다. + +* [kubelet](/docs/admin/kubelet/) +* [kube-apiserver](/docs/admin/kube-apiserver/) +* [kube-controller-manager](/docs/admin/kube-controller-manager/) +* [kube-scheduler](/docs/admin/kube-scheduler/). + +호스팅된 쿠버네티스 서비스 또는 매니지드 설치 환경의 배포판에서 플래그 및 구성 파일을 항상 변경할 수 있는 것은 아니다. 변경 가능한 경우 일반적으로 클러스터 관리자만 변경할 수 있다. 또한 향후 쿠버네티스 버전에서 변경될 수 있으며, 이를 설정하려면 프로세스를 다시 시작해야 할 수도 있다. 이러한 이유로 다른 옵션이 없는 경우에만 사용해야 한다. + +[리소스쿼터](/ko/docs/concepts/policy/resource-quotas/), [PodSecurityPolicy](/ko/docs/concepts/policy/pod-security-policy/), [네트워크폴리시](/ko/docs/concepts/services-networking/network-policies/) 및 역할 기반 접근 제어([RBAC](/docs/reference/access-authn-authz/rbac/))와 같은 *빌트인 정책 API(built-in Policy API)* 는 기본적으로 제공되는 쿠버네티스 API이다. API는 일반적으로 호스팅된 쿠버네티스 서비스 및 매니지드 쿠버네티스 설치 환경과 함께 사용된다. 그것들은 선언적이며 파드와 같은 다른 쿠버네티스 리소스와 동일한 규칙을 사용하므로, 새로운 클러스터 구성을 반복할 수 있고 애플리케이션과 동일한 방식으로 관리할 수 ​​있다. 또한, 이들 API가 안정적인 경우, 다른 쿠버네티스 API와 같이 [정의된 지원 정책](/docs/reference/deprecation-policy/)을 사용할 수 있다. 이러한 이유로 인해 구성 파일과 플래그보다 선호된다. + +## 익스텐션(Extension) {#익스텐션} + +익스텐션은 쿠버네티스를 확장하고 쿠버네티스와 긴밀하게 통합되는 소프트웨어 컴포넌트이다. +이들 컴포넌트는 쿠버네티스가 새로운 유형과 새로운 종류의 하드웨어를 지원할 수 있게 해준다. 
+ +대부분의 클러스터 관리자는 쿠버네티스의 호스팅 또는 배포판 인스턴스를 사용한다. +결과적으로 대부분의 쿠버네티스 사용자는 익스텐션 기능을 설치할 필요가 있고 +새로운 익스텐션 기능을 작성할 필요가 있는 사람은 더 적다. + +## 익스텐션 패턴 + +쿠버네티스는 클라이언트 프로그램을 작성하여 자동화 되도록 설계되었다. +쿠버네티스 API를 읽고 쓰는 프로그램은 유용한 자동화를 제공할 수 있다. +*자동화* 는 클러스터 상에서 또는 클러스터 밖에서 실행할 수 있다. 이 문서의 지침에 따라 +고가용성과 강력한 자동화를 작성할 수 있다. +자동화는 일반적으로 호스트 클러스터 및 매니지드 설치 환경을 포함한 모든 +쿠버네티스 클러스터에서 작동한다. + +쿠버네티스와 잘 작동하는 클라이언트 프로그램을 작성하기 위한 특정 패턴은 *컨트롤러* 패턴이라고 한다. +컨트롤러는 일반적으로 오브젝트의 `.spec`을 읽고, 가능한 경우 수행한 다음 +오브젝트의 `.status`를 업데이트 한다. + +컨트롤러는 쿠버네티스의 클라이언트이다. 쿠버네티스가 클라이언트이고 +원격 서비스를 호출할 때 이를 *웹훅(Webhook)* 이라고 한다. 원격 서비스를 +*웹훅 백엔드* 라고 한다. 컨트롤러와 마찬가지로 웹훅은 장애 지점을 +추가한다. + +웹훅 모델에서 쿠버네티스는 원격 서비스에 네트워크 요청을 한다. +*바이너리 플러그인* 모델에서 쿠버네티스는 바이너리(프로그램)를 실행한다. +바이너리 플러그인은 kubelet(예: +[Flex Volume 플러그인](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md)과 +[네트워크 플러그인](/docs/concepts/cluster-administration/network-plugins/))과 +kubectl에서 +사용한다. + +아래는 익스텐션 포인트가 쿠버네티스 컨트롤 플레인과 상호 작용하는 방법을 +보여주는 다이어그램이다. + + + + + + +## 익스텐션 포인트 + +이 다이어그램은 쿠버네티스 시스템의 익스텐션 포인트를 보여준다. + + + + + +1. 사용자는 종종 `kubectl`을 사용하여 쿠버네티스 API와 상호 작용한다. [Kubectl 플러그인](/docs/tasks/extend-kubectl/kubectl-plugins/)은 kubectl 바이너리를 확장한다. 개별 사용자의 로컬 환경에만 영향을 미치므로 사이트 전체 정책을 적용할 수는 없다. +2. apiserver는 모든 요청을 처리한다. apiserver의 여러 유형의 익스텐션 포인트는 요청을 인증하거나, 콘텐츠를 기반으로 요청을 차단하거나, 콘텐츠를 편집하고, 삭제 처리를 허용한다. 이 내용은 [API 접근 익스텐션](/ko/docs/concepts/extend-kubernetes/extend-cluster/#api-접근-익스텐션) 섹션에 설명되어 있다. +3. apiserver는 다양한 종류의 *리소스* 를 제공한다. `pods`와 같은 *빌트인 리소스 종류* 는 쿠버네티스 프로젝트에 의해 정의되며 변경할 수 없다. 직접 정의한 리소스를 추가할 수도 있고, [커스텀 리소스](/ko/docs/concepts/extend-kubernetes/extend-cluster/#사용자-정의-유형) 섹션에 설명된대로 *커스텀 리소스* 라고 부르는 다른 프로젝트에서 정의한 리소스를 추가할 수도 있다. 커스텀 리소스는 종종 API 접근 익스텐션과 함께 사용된다. +4. 쿠버네티스 스케줄러는 파드를 배치할 노드를 결정한다. 스케줄링을 확장하는 몇 가지 방법이 있다. 이들은 [스케줄러 익스텐션](/ko/docs/concepts/extend-kubernetes/extend-cluster/#스케줄러-익스텐션) 섹션에 설명되어 있다. +5. 쿠버네티스의 많은 동작은 API-Server의 클라이언트인 컨트롤러(Controller)라는 프로그램으로 구현된다. 컨트롤러는 종종 커스텀 리소스와 함께 사용된다. +6. kubelet은 서버에서 실행되며 파드가 클러스터 네트워크에서 자체 IP를 가진 가상 서버처럼 보이도록 한다. [네트워크 플러그인](/ko/docs/concepts/extend-kubernetes/extend-cluster/#네트워크-플러그인)을 사용하면 다양한 파드 네트워킹 구현이 가능하다. +7. kubelet은 컨테이너의 볼륨을 마운트 및 마운트 해제한다. 새로운 유형의 스토리지는 [스토리지 플러그인](/ko/docs/concepts/extend-kubernetes/extend-cluster/#스토리지-플러그인)을 통해 지원될 수 있다. + +어디서부터 시작해야 할지 모르겠다면, 이 플로우 차트가 도움이 될 수 있다. 일부 솔루션에는 여러 유형의 익스텐션이 포함될 수 있다. + + + + + + +## API 익스텐션 +### 사용자 정의 유형 + +새 컨트롤러, 애플리케이션 구성 오브젝트 또는 기타 선언적 API를 정의하고 `kubectl`과 같은 쿠버네티스 도구를 사용하여 관리하려면 쿠버네티스에 커스텀 리소스를 추가하자. + +애플리케이션, 사용자 또는 모니터링 데이터의 데이터 저장소로 커스텀 리소스를 사용하지 않는다. + +커스텀 리소스에 대한 자세한 내용은 [커스텀 리소스 개념 가이드](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)를 참고하길 바란다. + + +### 새로운 API와 자동화의 결합 + +사용자 정의 리소스 API와 컨트롤 루프의 조합을 [오퍼레이터(operator) 패턴](/ko/docs/concepts/extend-kubernetes/operator/)이라고 한다. 오퍼레이터 패턴은 특정 애플리케이션, 일반적으로 스테이트풀(stateful) 애플리케이션을 관리하는 데 사용된다. 이러한 사용자 정의 API 및 컨트롤 루프를 사용하여 스토리지나 정책과 같은 다른 리소스를 제어할 수도 있다. + +### 빌트인 리소스 변경 + +사용자 정의 리소스를 추가하여 쿠버네티스 API를 확장하면 추가된 리소스는 항상 새로운 API 그룹에 속한다. 기존 API 그룹을 바꾸거나 변경할 수 없다. +API를 추가해도 기존 API(예: 파드)의 동작에 직접 영향을 미치지는 않지만 API 접근 익스텐션은 영향을 준다. + + +### API 접근 익스텐션 + +요청이 쿠버네티스 API 서버에 도달하면 먼저 인증이 되고, 그런 다음 승인된 후 다양한 유형의 어드미션 컨트롤이 적용된다. 이 흐름에 대한 자세한 내용은 [쿠버네티스 API에 대한 접근 제어](/docs/reference/access-authn-authz/controlling-access/)를 참고하길 바란다. + +이러한 각 단계는 익스텐션 포인트를 제공한다. + +쿠버네티스에는 이를 지원하는 몇 가지 빌트인 인증 방법이 있다. 또한 인증 프록시 뒤에 있을 수 있으며 인증 헤더에서 원격 서비스로 토큰을 전송하여 확인할 수 있다(웹훅). 이러한 방법은 모두 [인증 설명서](/docs/reference/access-authn-authz/authentication/)에 설명되어 있다. 
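
예를 들어, 웹훅 토큰 인증을 사용하는 경우 kube-apiserver의 `--authentication-token-webhook-config-file` 플래그에 kubeconfig 형식의 파일을 전달한다. 아래는 간단한 스케치이며, 서버 주소와 인증서 경로는 모두 임의로 가정한 예시 값이다. 정확한 형식은 위의 인증 설명서를 참고한다.

```yaml
apiVersion: v1
kind: Config
# clusters 는 원격 인증 서비스를 가리킨다.
clusters:
- name: remote-authn-service
  cluster:
    certificate-authority: /path/to/ca.pem          # 원격 서비스 검증용 CA (예시 경로)
    server: https://authn.example.com/authenticate  # 반드시 https 를 사용한다 (예시 주소)
# users 는 API 서버가 웹훅을 호출할 때 사용할 자격 증명이다.
users:
- name: api-server
  user:
    client-certificate: /path/to/cert.pem           # 예시 경로
    client-key: /path/to/key.pem                    # 예시 경로
current-context: webhook
contexts:
- context:
    cluster: remote-authn-service
    user: api-server
  name: webhook
```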
+ +### 인증 + +[인증](/docs/reference/access-authn-authz/authentication/)은 모든 요청의 헤더 또는 인증서를 요청하는 클라이언트의 사용자 이름에 매핑한다. + +쿠버네티스는 몇 가지 빌트인 인증 방법과 필요에 맞지 않는 경우 [인증 웹훅](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) 방법을 제공한다. + + +### 승인 + +[승인](/docs/reference/access-authn-authz/webhook/)은 특정 사용자가 API 리소스에서 읽고, 쓰고, 다른 작업을 수행할 수 있는지를 결정한다. 전체 리소스 레벨에서 작동하며 임의의 오브젝트 필드를 기준으로 구별하지 않는다. 빌트인 인증 옵션이 사용자의 요구를 충족시키지 못하면 [인증 웹훅](/docs/reference/access-authn-authz/webhook/)을 통해 사용자가 제공한 코드를 호출하여 인증 결정을 내릴 수 있다. + + +### 동적 어드미션 컨트롤 + +요청이 승인된 후, 쓰기 작업인 경우 [어드미션 컨트롤](/docs/reference/access-authn-authz/admission-controllers/) 단계도 수행된다. 빌트인 단계 외에도 몇 가지 익스텐션이 있다. + +* [이미지 정책 웹훅](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook)은 컨테이너에서 실행할 수 있는 이미지를 제한한다. +* 임의의 어드미션 컨트롤 결정을 내리기 위해 일반적인 [어드미션 웹훅](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)을 사용할 수 있다. 어드미션 웹훅은 생성 또는 업데이트를 거부할 수 있다. + +## 인프라스트럭처 익스텐션 + + +### 스토리지 플러그인 + +[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md)을 사용하면 +Kubelet이 바이너리 플러그인을 호출하여 볼륨을 마운트하도록 함으로써 +빌트인 지원 없이 볼륨 유형을 마운트 할 수 있다. + + +### 장치 플러그인 + +장치 플러그인은 노드가 [장치 플러그인](/docs/concepts/cluster-administration/device-plugins/)을 +통해 새로운 노드 리소스(CPU 및 메모리와 같은 빌트인 자원 외에)를 +발견할 수 있게 해준다. + + +### 네트워크 플러그인 + +노드-레벨의 [네트워크 플러그인](/docs/admin/network-plugins/)을 통해 다양한 네트워킹 패브릭을 지원할 수 있다. + +### 스케줄러 익스텐션 + +스케줄러는 파드를 감시하고 파드를 노드에 할당하는 특수한 유형의 +컨트롤러이다. 다른 쿠버네티스 컴포넌트를 계속 사용하면서 +기본 스케줄러를 완전히 교체하거나, +[여러 스케줄러](/docs/tasks/administer-cluster/configure-multiple-schedulers/)를 +동시에 실행할 수 있다. + +이것은 중요한 부분이며, 거의 모든 쿠버네티스 사용자는 스케줄러를 수정할 +필요가 없다는 것을 알게 된다. + +스케줄러는 또한 웹훅 백엔드(스케줄러 익스텐션)가 +파드에 대해 선택된 노드를 필터링하고 우선 순위를 지정할 수 있도록 하는 +[웹훅](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md)을 +지원한다. + +{{% /capture %}} + + +{{% capture whatsnext %}} + +* [커스텀 리소스](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)에 대해 더 알아보기 +* [동적 어드미션 컨트롤](/docs/reference/access-authn-authz/extensible-admission-controllers/)에 대해 알아보기 +* 인프라스트럭처 익스텐션에 대해 더 알아보기 + * [네트워크 플러그인](/docs/concepts/cluster-administration/network-plugins/) + * [장치 플러그인](/docs/concepts/cluster-administration/device-plugins/) +* [kubectl 플러그인](/docs/tasks/extend-kubectl/kubectl-plugins/)에 대해 알아보기 +* [오퍼레이터 패턴](/docs/concepts/extend-kubernetes/operator/)에 대해 알아보기 + +{{% /capture %}} diff --git a/content/ko/docs/concepts/extend-kubernetes/operator.md b/content/ko/docs/concepts/extend-kubernetes/operator.md new file mode 100644 index 0000000000000..7d7854e05c38d --- /dev/null +++ b/content/ko/docs/concepts/extend-kubernetes/operator.md @@ -0,0 +1,133 @@ +--- +title: 오퍼레이터(operator) 패턴 +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +오퍼레이터(Operator)는 +[사용자 정의 리소스](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)를 +사용하여 애플리케이션 및 해당 컴포넌트를 관리하는 쿠버네티스의 소프트웨어 익스텐션이다. 오퍼레이터는 +쿠버네티스 원칙, 특히 [컨트롤 루프](/ko/docs/concepts/#쿠버네티스-컨트롤-플레인)를 따른다. + +{{% /capture %}} + + +{{% capture body %}} + +## 동기 부여 + +오퍼레이터 패턴은 서비스 또는 서비스 셋을 관리하는 운영자의 +주요 목표를 포착하는 것을 목표로 한다. 특정 애플리케이션 및 +서비스를 돌보는 운영자는 시스템의 작동 방식, 배포 방법 및 문제가 있는 경우 +대처 방법에 대해 깊이 알고 있다. + +쿠버네티스에서 워크로드를 실행하는 사람들은 종종 반복 가능한 작업을 처리하기 위해 +자동화를 사용하는 것을 좋아한다. 오퍼레이터 패턴은 쿠버네티스 자체가 제공하는 것 이상의 +작업을 자동화하기 위해 코드를 작성하는 방법을 포착한다. + +## 쿠버네티스의 오퍼레이터 + +쿠버네티스는 자동화를 위해 설계되었다. 기본적으로 쿠버네티스의 중추를 통해 많은 +빌트인 자동화 기능을 사용할 수 있다. 
쿠버네티스를 사용하여 워크로드 배포 +및 실행을 자동화할 수 있고, *또한* 쿠버네티스가 수행하는 방식을 +자동화할 수 있다. + +쿠버네티스의 {{< glossary_tooltip text="컨트롤러" term_id="controller" >}} +개념을 통해 쿠버네티스 코드 자체를 수정하지 않고도 클러스터의 동작을 +확장할 수 있다. +오퍼레이터는 [사용자 정의 리소스](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)의 +컨트롤러 역할을 하는 쿠버네티스 API의 클라이언트이다. + +## 오퍼레이터 예시 {#example} + +오퍼레이터를 사용하여 자동화할 수 있는 몇 가지 사항은 다음과 같다. + +* 주문형 애플리케이션 배포 +* 해당 애플리케이션의 상태를 백업하고 복원 +* 데이터베이스 스키마 또는 추가 구성 설정과 같은 관련 변경 사항에 따른 + 애플리케이션 코드 업그레이드 처리 +* 쿠버네티스 API를 지원하지 않는 애플리케이션에 서비스를 + 게시하여 검색을 지원 +* 클러스터의 전체 또는 일부에서 장애를 시뮬레이션하여 가용성 테스트 +* 내부 멤버 선출 절차없이 분산 애플리케이션의 + 리더를 선택 + +오퍼레이터의 모습을 더 자세하게 볼 수 있는 방법은 무엇인가? 자세한 예는 +다음과 같다. + +1. 클러스터에 구성할 수 있는 SampleDB라는 사용자 정의 리소스. +2. 오퍼레이터의 컨트롤러 부분이 포함된 파드의 실행을 + 보장하는 디플로이먼트. +3. 오퍼레이터 코드의 컨테이너 이미지. +4. 컨트롤 플레인을 쿼리하여 어떤 SampleDB 리소스가 구성되어 있는지 + 알아내는 컨트롤러 코드. +5. 오퍼레이터의 핵심은 API 서버에 구성된 리소스와 현재 상태를 + 일치시키는 방법을 알려주는 코드이다. + * 새 SampleDB를 추가하면 오퍼레이터는 퍼시스턴트볼륨클레임을 + 설정하여 내구성있는 데이터베이스 스토리지, SampleDB를 실행하는 스테이트풀셋 및 + 초기 구성을 처리하는 잡을 제공한다. + * SampleDB를 삭제하면 오퍼레이터는 스냅샷을 생성한 다음 스테이트풀셋과 볼륨도 + 제거되었는지 확인한다. +6. 오퍼레이터는 정기적인 데이터베이스 백업도 관리한다. 오퍼레이터는 각 SampleDB + 리소스에 대해 데이터베이스에 연결하고 백업을 수행할 수 있는 파드를 생성하는 + 시기를 결정한다. 이 파드는 데이터베이스 연결 세부 정보 및 자격 증명이 있는 + 컨피그맵 및 / 또는 시크릿에 의존한다. +7. 오퍼레이터는 관리하는 리소스에 견고한 자동화를 제공하는 것을 목표로 하기 때문에 + 추가 지원 코드가 있다. 이 예제에서 코드는 데이터베이스가 이전 버전을 실행 중인지 + 확인하고, 업그레이드된 경우 이를 업그레이드하는 + 잡 오브젝트를 생성한다. + +## 오퍼레이터 배포 + +오퍼레이터를 배포하는 가장 일반적인 방법은 +커스텀 리소스 데피니션의 정의 및 연관된 컨트롤러를 클러스터에 추가하는 것이다. +컨테이너화된 애플리케이션을 실행하는 것처럼 +컨트롤러는 일반적으로 {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}} +외부에서 실행된다. +예를 들어 클러스터에서 컨트롤러를 디플로이먼트로 실행할 수 있다. + +## 오퍼레이터 사용 {#using-operators} + +오퍼레이터가 배포되면 오퍼레이터가 사용하는 리소스의 종류를 추가, 수정 또는 +삭제하여 사용한다. 위의 예에 따라 오퍼레이터 자체에 대한 +디플로이먼트를 설정한 후 다음을 수행한다. + +```shell +kubectl get SampleDB # 구성된 데이터베이스 찾기 + +kubectl edit SampleDB/example-database # 일부 설정을 수동으로 변경하기 +``` + +…이것으로 끝이다! 오퍼레이터는 변경 사항을 적용하고 기존 서비스를 +양호한 상태로 유지한다. + +## 자신만의 오퍼레이터 작성 {#writing-operator} + +에코시스템에 원하는 동작을 구현하는 오퍼레이터가 없다면 직접 코딩할 수 있다. +[다음 내용](#다음-내용)에서는 클라우드 네이티브 오퍼레이터를 작성하는 데 +사용할 수 있는 라이브러리 및 도구에 대한 몇 가지 링크를 +찾을 수 있다. + +또한 [쿠버네티스 API의 클라이언트](/ko/docs/reference/using-api/client-libraries/) +역할을 할 수 있는 모든 언어 / 런타임을 사용하여 오퍼레이터(즉, 컨트롤러)를 구현한다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [사용자 정의 리소스](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)에 대해 더 알아보기 +* [OperatorHub.io](https://operatorhub.io/)에서 유스케이스에 맞는 이미 만들어진 오퍼레이터 찾기 +* 기존 도구를 사용하여 자신만의 오퍼레이터를 작성해보자. 다음은 예시이다. 
+ * [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator) 사용하기 + * [kubebuilder](https://book.kubebuilder.io/) 사용하기 + * 웹훅(WebHook)과 함께 [Metacontroller](https://metacontroller.app/)를 + 사용하여 직접 구현하기 + * [오퍼레이터 프레임워크](https://github.com/operator-framework/getting-started) 사용하기 +* 다른 사람들이 사용할 수 있도록 자신의 오퍼레이터를 [게시](https://operatorhub.io/)하기 +* 오퍼레이터 패턴을 소개한 [CoreOS 원본 기사](https://coreos.com/blog/introducing-operators.html) 읽기 +* 오퍼레이터 구축을 위한 모범 사례에 대한 구글 클라우드(Google Cloud)의 [기사](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) 읽기 + +{{% /capture %}} + diff --git a/content/ko/docs/concepts/overview/components.md b/content/ko/docs/concepts/overview/components.md index d45adac408895..6fc04ab7bfbd2 100644 --- a/content/ko/docs/concepts/overview/components.md +++ b/content/ko/docs/concepts/overview/components.md @@ -121,6 +121,6 @@ cloud-controller-manager는 클라우드 벤더 코드와 쿠버네티스 코드 {{% capture whatsnext %}} * [노드](/ko/docs/concepts/architecture/nodes/)에 대해 더 배우기 * [컨트롤러](/ko/docs/concepts/architecture/controller/)에 대해 더 배우기 -* [kube-scheduler](/docs/concepts/scheduling/kube-scheduler/)에 대해 더 배우기 +* [kube-scheduler](/ko/docs/concepts/scheduling/kube-scheduler/)에 대해 더 배우기 * etcd의 공식 [문서](https://etcd.io/docs/) 읽기 {{% /capture %}} diff --git a/content/ko/docs/concepts/overview/what-is-kubernetes.md b/content/ko/docs/concepts/overview/what-is-kubernetes.md index 14aaf5b43a374..1bbffa96cc680 100644 --- a/content/ko/docs/concepts/overview/what-is-kubernetes.md +++ b/content/ko/docs/concepts/overview/what-is-kubernetes.md @@ -1,5 +1,5 @@ --- -title: 쿠버네티스란 무엇인가 +title: 쿠버네티스란 무엇인가? description: > 쿠버네티스는 컨테이너화된 워크로드와 서비스를 관리하기 위한 이식할 수 있고, 확장 가능한 오픈소스 플랫폼으로, 선언적 구성과 자동화를 모두 지원한다. 쿠버네티스는 크고 빠르게 성장하는 생태계를 가지고 있다. 쿠버네티스 서비스, 지원 그리고 도구들은 광범위하게 제공된다. 
content_template: templates/concept diff --git a/content/ko/docs/concepts/overview/working-with-objects/annotations.md b/content/ko/docs/concepts/overview/working-with-objects/annotations.md index dfac3521d177a..4b238bf313fbd 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/annotations.md +++ b/content/ko/docs/concepts/overview/working-with-objects/annotations.md @@ -82,7 +82,7 @@ metadata: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/docs/concepts/overview/working-with-objects/labels.md b/content/ko/docs/concepts/overview/working-with-objects/labels.md index a76972a8744ae..7c9983093b3a2 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/labels.md +++ b/content/ko/docs/concepts/overview/working-with-objects/labels.md @@ -67,7 +67,7 @@ metadata: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/docs/concepts/overview/working-with-objects/names.md b/content/ko/docs/concepts/overview/working-with-objects/names.md index 069c49d9089cf..3841e76c1edb2 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/names.md +++ b/content/ko/docs/concepts/overview/working-with-objects/names.md @@ -62,7 +62,7 @@ metadata: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 ``` diff --git a/content/ko/docs/concepts/overview/working-with-objects/namespaces.md b/content/ko/docs/concepts/overview/working-with-objects/namespaces.md index 8308846611be8..4707233f2688f 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/ko/docs/concepts/overview/working-with-objects/namespaces.md @@ -25,7 +25,7 @@ weight: 30 네임스페이스를 통틀어서 유일할 필요는 없다. 네임스페이스는 서로 중첩될 수 없으며, 각 쿠버네티스 리소스는 하나의 네임스페이스에만 있을 수 있다. -네임스페이스는 클러스터 자원을 ([리소스 쿼터](/docs/concepts/policy/resource-quotas/)를 통해) 여러 사용자 사이에서 나누는 방법이다. +네임스페이스는 클러스터 자원을 ([리소스 쿼터](/ko/docs/concepts/policy/resource-quotas/)를 통해) 여러 사용자 사이에서 나누는 방법이다. 이후 버전의 쿠버네티스에서는 같은 네임스페이스의 오브젝트는 기본적으로 동일한 접근 제어 정책을 갖게 된다. @@ -51,6 +51,7 @@ NAME STATUS AGE default Active 1d kube-system Active 1d kube-public Active 1d +kube-node-lease Active 1d ``` 쿠버네티스는 처음에 세 개의 초기 네임스페이스를 갖는다. diff --git a/content/ko/docs/concepts/overview/working-with-objects/object-management.md b/content/ko/docs/concepts/overview/working-with-objects/object-management.md index e164852d2e00d..bdb5ac476de42 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/object-management.md +++ b/content/ko/docs/concepts/overview/working-with-objects/object-management.md @@ -38,7 +38,7 @@ weight: 15 ### 예시 -디프롤이먼트 오브젝트를 생성하기 위해 nginx 컨테이너의 인스턴스를 구동 시킨다. +디플로이먼트 오브젝트를 생성하기 위해 nginx 컨테이너의 인스턴스를 구동시킨다. 
```sh kubectl run nginx --image nginx diff --git a/content/zh/docs/concepts/scheduling/_index.md b/content/ko/docs/concepts/policy/_index.md similarity index 54% rename from content/zh/docs/concepts/scheduling/_index.md rename to content/ko/docs/concepts/policy/_index.md index 2f5138bfd8f6f..ae03c565c1a3f 100644 --- a/content/zh/docs/concepts/scheduling/_index.md +++ b/content/ko/docs/concepts/policy/_index.md @@ -1,4 +1,4 @@ --- -title: "调度" +title: "정책" weight: 90 --- diff --git a/content/ko/docs/concepts/policy/limit-range.md b/content/ko/docs/concepts/policy/limit-range.md new file mode 100644 index 0000000000000..e2588b33af53b --- /dev/null +++ b/content/ko/docs/concepts/policy/limit-range.md @@ -0,0 +1,67 @@ +--- +title: 리밋 레인지(Limit Range) +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +기본적으로 컨테이너는 쿠버네티스 클러스터에서 무제한 [컴퓨팅 리소스](/docs/user-guide/compute-resources)로 실행된다. +리소스 쿼터을 사용하면 클러스터 관리자는 네임스페이스별로 리소스 사용과 생성을 제한할 수 있다. +네임스페이스 내에서 파드나 컨테이너는 네임스페이스의 리소스 쿼터에 정의된 만큼의 CPU와 메모리를 사용할 수 있다. 하나의 파드 또는 컨테이너가 사용 가능한 모든 리소스를 독점할 수 있다는 우려가 있다. 리밋레인지는 네임스페이스에서 리소스 할당(파드 또는 컨테이너)을 제한하는 정책이다. + +{{% /capture %}} + + +{{% capture body %}} + +_리밋레인지_ 는 다음과 같은 제약 조건을 제공한다. + +- 네임스페이스에서 파드 또는 컨테이너별 최소 및 최대 컴퓨팅 리소스 사용량을 지정한다. +- 네임스페이스에서 스토리지클래스별 최소 및 최대 스토리지 요청을 지정한다. +- 네임스페이스에서 리소스에 대한 요청과 제한 사이의 비율을 지정한다. +- 네임스페이스에서 컴퓨팅 리소스에 대한 기본 요청/제한을 설정하고 런타임에 있는 컨테이너에 자동으로 설정한다. + +## 리밋레인지 활성화 + +많은 쿠버네티스 배포판에 리밋레인지 지원이 기본적으로 활성화되어 있다. apiserver `--enable-admission-plugins=` 플래그의 인수 중 하나로 `LimitRanger` 어드미션 컨트롤러가 있는 경우 활성화된다. + +해당 네임스페이스에 리밋레인지 오브젝트가 있는 경우 특정 네임스페이스에 리밋레인지가 지정된다. + +리밋레인지 오브젝트의 이름은 유효한 [DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)이어야한다. + +### 범위 제한의 개요 + +- 관리자는 하나의 네임스페이스에 하나의 `LimitRange`를 만든다. +- 사용자는 네임스페이스에서 파드, 컨테이너 및 퍼시스턴트볼륨클레임과 같은 리소스를 생성한다. +- `LimitRanger` 어드미션 컨트롤러는 컴퓨팅 리소스 요청 사항을 설정하지 않은 모든 파드와 컨테이너에 대한 기본값과 제한을 지정하고 네임스페이스의 리밋레인지에 정의된 리소스의 최소, 최대 및 비율을 초과하지 않도록 사용량을 추적한다. +- 리밋레인지 제약 조건을 위반하는 리소스(파드, 컨테이너, 퍼시스턴트볼륨클레임)를 생성하거나 업데이트하는 경우 HTTP 상태 코드 `403 FORBIDDEN` 및 위반된 제약 조건을 설명하는 메시지와 함께 API 서버에 대한 요청이 실패한다. +- `cpu`, `memory`와 같은 컴퓨팅 리소스의 네임스페이스에서 리밋레인지가 활성화된 경우 사용자는 해당 값에 대한 요청 또는 제한을 지정해야 한다. 그렇지 않으면 시스템에서 파드 생성이 거부될 수 있다. +- 리밋레인지 유효성 검사는 파드 실행 단계가 아닌 파드 어드미션 단계에서만 발생한다. + +범위 제한을 사용하여 생성할 수 있는 정책의 예는 다음과 같다. + +- 용량이 8GiB RAM과 16 코어인 2 노드 클러스터에서 네임스페이스의 파드를 제한하여 CPU의 최대 제한이 500m인 CPU 100m를 요청하고 메모리의 최대 제한이 600M인 메모리 200Mi를 요청하라. +- 스펙에 CPU 및 메모리 요청없이 시작된 컨테이너에 대해 기본 CPU 제한 및 요청을 150m로, 메모리 기본 요청을 300Mi로 정의하라. + +네임스페이스의 총 제한이 파드/컨테이너의 제한 합보다 작은 경우 리소스에 대한 경합이 있을 수 있다. +이 경우 컨테이너 또는 파드가 생성되지 않는다. + +경합이나 리밋레인지 변경은 이미 생성된 리소스에 영향을 미치지 않는다. + +## 예제 + +- [네임스페이스당 최소 및 최대 CPU 제약 조건을 설정하는 방법](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)을 본다. +- [네임스페이스당 최소 및 최대 메모리 제약 조건을 설정하는 방법](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)을 본다. +- [네임스페이스당 기본 CPU 요청과 제한을 설정하는 방법](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)을 본다. +- [네임스페이스당 기본 메모리 요청과 제한을 설정하는 방법](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)을 본다. +- [네임스페이스당 최소 및 최대 스토리지 사용량을 설정하는 방법](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage)을 확인한다. +- [네임스페이스당 할당량을 설정하는 자세한 예시](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)를 본다. 
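+
+참고로, 앞서 정책의 예로 든 컨테이너 기본값 정의(기본 CPU 제한 및 요청 150m, 메모리 기본 요청 300Mi)를
+리밋레인지 매니페스트로 표현하면 대략 다음과 같다. 오브젝트 이름 `cpu-mem-defaults` 와 수치는 설명을 위해
+임의로 가정한 값이므로 실제 환경에 맞게 조정해야 한다.
+
+```yaml
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: cpu-mem-defaults
+spec:
+  limits:
+  - type: Container
+    # 제한(limit)을 지정하지 않은 컨테이너에 적용되는 기본 제한
+    default:
+      cpu: 150m
+    # 요청(request)을 지정하지 않은 컨테이너에 적용되는 기본 요청
+    defaultRequest:
+      cpu: 150m
+      memory: 300Mi
+```
+
+`kubectl apply -f limits.yaml --namespace=my-namespace` 와 같이 특정 네임스페이스에 적용할 수 있다(파일명과 네임스페이스 이름은 예시이다).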
+ +{{% /capture %}} + +{{% capture whatsnext %}} + +보다 자세한 내용은 [LimitRanger 설계 문서](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md)를 참고하길 바란다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/policy/pod-security-policy.md b/content/ko/docs/concepts/policy/pod-security-policy.md new file mode 100644 index 0000000000000..4e264da6f1ef0 --- /dev/null +++ b/content/ko/docs/concepts/policy/pod-security-policy.md @@ -0,0 +1,635 @@ +--- +title: 파드 시큐리티 폴리시 +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +{{< feature-state state="beta" >}} + +파드 시큐리티 폴리시를 사용하면 파드 생성 및 업데이트에 대한 세분화된 권한을 +부여할 수 있다. + +{{% /capture %}} + + +{{% capture body %}} + +## 파드 시큐리티 폴리시란? + +_Pod Security Policy_ 는 파드 명세의 보안 관련 측면을 제어하는 ​​클러스터-레벨의 +리소스이다. [파드시큐리티폴리시](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) 오브젝트는 +관련 필드에 대한 기본값뿐만 아니라 시스템에 적용하기 위해 파드가 실행해야만 하는 +조건 셋을 정의한다. 관리자는 +다음을 제어할 수 있다. + +| 제어 측면 | 필드 이름 | +| ----------------------------------------------------| ------------------------------------------- | +| 특권을 가진(privileged) 컨테이너의 실행 | [`privileged`](#특권을-가진) | +| 호스트 네임스페이스의 사용 | [`hostPID`, `hostIPC`](#호스트-네임스페이스) | +| 호스트 네트워킹과 포트의 사용 | [`hostNetwork`, `hostPorts`](#호스트-네임스페이스) | +| 볼륨 유형의 사용 | [`volumes`](#볼륨-및-파일시스템) | +| 호스트 파일시스템의 사용 | [`allowedHostPaths`](#볼륨-및-파일시스템) | +| FlexVolume 드라이버의 화이트리스트 | [`allowedFlexVolumes`](#flexvolume-드라이버) | +| 파드 볼륨을 소유한 FSGroup 할당 | [`fsGroup`](#볼륨-및-파일시스템) | +| 읽기 전용 루트 파일시스템 사용 필요 | [`readOnlyRootFilesystem`](#볼륨-및-파일시스템) | +| 컨테이너의 사용자 및 그룹 ID | [`runAsUser`, `runAsGroup`, `supplementalGroups`](#사용자-및-그룹) | +| 루트 특권으로의 에스컬레이션 제한 | [`allowPrivilegeEscalation`, `defaultAllowPrivilegeEscalation`](#권한-에스컬레이션) | +| 리눅스 기능 | [`defaultAddCapabilities`, `requiredDropCapabilities`, `allowedCapabilities`](#기능) | +| 컨테이너의 SELinux 컨텍스트 | [`seLinux`](#selinux) | +| 컨테이너에 허용된 Proc 마운트 유형 | [`allowedProcMountTypes`](#allowedprocmounttypes) | +| 컨테이너가 사용하는 AppArmor 프로파일 | [어노테이션](#apparmor) | +| 컨테이너가 사용하는 seccomp 프로파일 | [어노테이션](#seccomp) | +| 컨테이너가 사용하는 sysctl 프로파일 | [`forbiddenSysctls`,`allowedUnsafeSysctls`](#sysctl) | + + +## 파드 시큐리티 폴리시 활성화 + +파드 시큐리티 폴리시 제어는 선택 사항(하지만 권장함)인 +[어드미션 +컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy)로 +구현된다. [어드미션 컨트롤러 활성화](/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in)하면 +파드시큐리티폴리시가 적용되지만, +정책을 승인하지 않고 활성화하면 클러스터에 +**파드가 생성되지 않는다.** + +파드 시큐리티 폴리시 API(`policy/v1beta1/podsecuritypolicy`)는 +어드미션 컨트롤러와 독립적으로 활성화되므로 기존 클러스터의 경우 +어드미션 컨트롤러를 활성화하기 전에 정책을 추가하고 권한을 +부여하는 것이 좋다. + +## 정책 승인 + +파드시큐리티폴리시 리소스가 생성되면 아무 것도 수행하지 않는다. 이를 사용하려면 +요청 사용자 또는 대상 파드의 +[서비스 어카운트](/docs/tasks/configure-pod-container/configure-service-account/)는 +정책에서 `use` 동사를 허용하여 정책을 사용할 권한이 있어야 한다. + +대부분의 쿠버네티스 파드는 사용자가 직접 만들지 않는다. 대신 일반적으로 +컨트롤러 관리자를 통해 +[디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/), +[레플리카셋](/ko/docs/concepts/workloads/controllers/replicaset/), 또는 기타 +템플릿 컨트롤러의 일부로 간접적으로 생성된다. 컨트롤러에 정책에 대한 접근 권한을 부여하면 +해당 컨트롤러에 의해 생성된 *모든* 파드에 대한 접근 권한이 부여되므로 정책을 승인하는 +기본 방법은 파드의 서비스 어카운트에 대한 접근 권한을 +부여하는 것이다([예](#다른-파드를-실행) 참고). + +### RBAC을 통한 방법 + +[RBAC](/docs/reference/access-authn-authz/rbac/)은 표준 쿠버네티스 권한 부여 모드이며, +정책 사용 권한을 부여하는 데 쉽게 사용할 수 있다. + +먼저, `Role` 또는 `ClusterRole`은 원하는 정책을 `use` 하려면 접근 권한을 부여해야 한다. +접근 권한을 부여하는 규칙은 다음과 같다. 
+ +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: +rules: +- apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - +``` + +그런 다음 `(Cluster)Role`이 승인된 사용자에게 바인딩된다. + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: +roleRef: + kind: ClusterRole + name: + apiGroup: rbac.authorization.k8s.io +subjects: +# Authorize specific service accounts: +- kind: ServiceAccount + name: + namespace: +# Authorize specific users (not recommended): +- kind: User + apiGroup: rbac.authorization.k8s.io + name: +``` + +`RoleBinding`(`ClusterRoleBinding` 아님)을 사용하는 경우, 바인딩과 동일한 네임스페이스에서 +실행되는 파드에 대해서만 사용 권한을 부여한다. 네임스페이스에서 실행되는 모든 파드에 접근 권한을 +부여하기 위해 시스템 그룹과 쌍을 이룰 수 있다. +```yaml +# Authorize all service accounts in a namespace: +- kind: Group + apiGroup: rbac.authorization.k8s.io + name: system:serviceaccounts +# Or equivalently, all authenticated users in a namespace: +- kind: Group + apiGroup: rbac.authorization.k8s.io + name: system:authenticated +``` + +RBAC 바인딩에 대한 자세한 예는, +[역할 바인딩 예제](/docs/reference/access-authn-authz/rbac#role-binding-examples)를 참고하길 바란다. +파드시큐리티폴리시 인증에 대한 전체 예제는 +[아래](#예제)를 참고하길 바란다. + + +### 문제 해결 + +- [컨트롤러 관리자](/docs/admin/kube-controller-manager/)는 +[보안 API 포트](/docs/reference/access-authn-authz/controlling-access/)에 대해 실행해야 하며, +슈퍼유저 권한이 없어야 한다. 그렇지 않으면 요청이 인증 및 권한 부여 모듈을 우회하고, +모든 파드시큐리티폴리시 오브젝트가 허용되며 +사용자는 특권있는 컨테이너를 만들 수 있다. 컨트롤러 관리자 권한 구성에 대한 자세한 +내용은 [컨트롤러 역할](/docs/reference/access-authn-authz/rbac/#controller-roles)을 +참고하길 바란다. + +## 정책 순서 + +파드 생성 및 업데이트를 제한할 뿐만 아니라 파드 시큐리티 폴리시를 사용하여 +제어하는 ​​많은 필드에 기본값을 제공할 수도 있다. 여러 정책을 +사용할 수 있는 경우 파드 시큐리티 폴리시 컨트롤러는 +다음 기준에 따라 정책을 선택한다. + +1. 기본 설정을 변경하거나 파드를 변경하지 않고 파드를 있는 그대로 허용하는 파드시큐리티폴리시가 + 선호된다. 이러한 비-변이(non-mutating) 파드시큐리티폴리시의 + 순서는 중요하지 않다. +2. 파드를 기본값으로 설정하거나 변경해야 하는 경우, 파드를 허용할 첫 번째 파드시큐리티폴리시 + (이름순)가 선택된다. + +{{< note >}} +업데이트 작업 중(파드 스펙에 대한 변경이 허용되지 않는 동안) 비-변이 파드시큐리티폴리시만 +파드의 유효성을 검사하는 데 사용된다. +{{< /note >}} + +## 예제 + +_이 예에서는 파드시큐리티폴리시 어드미션 컨트롤러가 활성화된 클러스터가 실행 중이고 +클러스터 관리자 권한이 있다고 가정한다._ + +### 설정 + +이 예제와 같이 네임스페이스와 서비스 어카운트를 설정한다. +이 서비스 어카운트를 사용하여 관리자가 아닌 사용자를 조정한다. + +```shell +kubectl create namespace psp-example +kubectl create serviceaccount -n psp-example fake-user +kubectl create rolebinding -n psp-example fake-editor --clusterrole=edit --serviceaccount=psp-example:fake-user +``` + +어떤 사용자로 활동하고 있는지 명확하게 하고 입력 내용을 저장하려면 2개의 별칭(alias)을 +만든다. + +```shell +alias kubectl-admin='kubectl -n psp-example' +alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n psp-example' +``` + +### 정책과 파드 생성 + +파일에서 예제 파드시큐리티폴리시 오브젝트를 정의한다. 이는 특권있는 파드를 +만들지 못하게 하는 정책이다. +파드시큐리티폴리시 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names#dns-서브도메인-이름)이어야 한다. + +{{< codenew file="policy/example-psp.yaml" >}} + +그리고 kubectl로 생성한다. + +```shell +kubectl-admin create -f example-psp.yaml +``` + +이제 권한이 없는 사용자로서 간단한 파드를 생성해보자. + +```shell +kubectl-user create -f- <}} +이 방법은 권장하지 않는다! 선호하는 방법은 [다음 절](#다른-파드를-실행)을 +참고하길 바란다. 
+{{< /note >}} + +```shell +kubectl-admin create role psp:unprivileged \ + --verb=use \ + --resource=podsecuritypolicy \ + --resource-name=example +role "psp:unprivileged" created + +kubectl-admin create rolebinding fake-user:psp:unprivileged \ + --role=psp:unprivileged \ + --serviceaccount=psp-example:fake-user +rolebinding "fake-user:psp:unprivileged" created + +kubectl-user auth can-i use podsecuritypolicy/example +yes +``` + +이제 파드 생성을 다시 시도하자. + +```shell +kubectl-user create -f- <}} + +다음은 권한이 없는 사용자로서의 실행을 필요로 하고, 루트로의 에스컬레이션(escalation) 가능성을 차단하고, +여러 보안 메커니즘을 사용을 필요로 하는 제한적 +정책의 예제이다. + +{{< codenew file="policy/restricted-psp.yaml" >}} + +## 정책 레퍼런스 + +### 특권을 가진 + +**Privileged** - 파드의 컨테이너가 특권 모드를 사용할 수 있는지 여부를 결정한다. +기본적으로 컨테이너는 호스트의 모든 장치에 접근할 수 없지만 +"특권을 가진" 컨테이너는 호스트의 모든 장치에 접근할 수 있다. 이것은 +컨테이너가 호스트에서 실행되는 프로세스와 거의 동일한 접근을 허용한다. +이것은 네트워크 스택 조작 및 장치 접근과 같은 +리눅스 기능을 사용하려는 컨테이너에 유용하다. + +### 호스트 네임스페이스 + +**HostPID** - 파드 컨테이너가 호스트 프로세스 ID 네임스페이스를 공유할 수 있는지 여부를 +제어한다. ptrace와 함께 사용하면 컨테이너 외부로 권한을 에스컬레이션하는 데 사용할 수 +있다(ptrace는 기본적으로 금지되어 있음). + +**HostIPC** - 파드 컨테이너가 호스트 IPC 네임스페이스를 공유할 수 있는지 여부를 +제어한다. + +**HostNetwork** - 파드가 노드 네트워크 네임스페이스를 사용할 수 있는지 여부를 제어한다. +이렇게 하면 파드에 루프백 장치에 접근 권한을 주고, 서비스는 로컬호스트(localhost)를 리스닝할 수 있으며, +동일한 노드에 있는 다른 파드의 네트워크 활동을 스누핑(snoop)하는 데 +사용할 수 있다. + +**HostPorts** - 호스트 네트워크 네임스페이스에 허용되는 포트 범위의 화이트리스트(whitelist)를 +제공한다. `min`과 `max`를 포함하여 `HostPortRange`의 목록으로 정의된다. +기본값은 허용하는 호스트 포트 없음(no allowed host ports)이다. + +### 볼륨 및 파일시스템 + +**Volumes** - 허용되는 볼륨 유형의 화이트리스트를 제공한다. 허용 가능한 값은 +볼륨을 생성할 때 정의된 볼륨 소스에 따른다. 볼륨 유형의 전체 목록은 +[볼륨 유형들](/ko/docs/concepts/storage/volumes/#볼륨-유형들)에서 참고한다. +또한 `*`를 사용하여 모든 볼륨 유형을 +허용할 수 있다. + +새 PSP에 허용되는 볼륨의 **최소 권장 셋** 은 다음과 같다. + +- 컨피그맵 +- 다운워드API +- emptyDir +- 퍼시스턴트볼륨클레임 +- 시크릿 +- 프로젝티드(projected) + +{{< warning >}} +파드시큐리티폴리시는 `PersistentVolumeClaim`이 참조할 수 있는 `PersistentVolume` +오브젝트의 유형을 제한하지 않으며 hostPath 유형 +`PersistentVolumes`은 읽기-전용 접근 모드를 지원하지 않는다. 신뢰할 수 있는 사용자만 +`PersistentVolume` 오브젝트를 생성할 수 있는 권한을 부여 받아야 한다. +{{< /warning >}} + +**FSGroup** - 일부 볼륨에 적용되는 보충 그룹(supplemental group)을 제어한다. + +- *MustRunAs* - 하나 이상의 `range`를 지정해야 한다. 첫 번째 범위의 최솟값을 +기본값으로 사용한다. 모든 범위에 대해 검증한다. +- *MayRunAs* - 하나 이상의 `range`를 지정해야 한다. 기본값을 제공하지 않고 +`FSGroups`을 설정하지 않은 상태로 둘 수 있다. `FSGroups`이 설정된 경우 모든 범위에 대해 +유효성을 검사한다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `fsGroup` ID의 지정도 허용한다. + +**AllowedHostPaths** - hostPath 볼륨에서 사용할 수 있는 호스트 경로의 화이트리스트를 +지정한다. 빈 목록은 사용되는 호스트 경로에 제한이 없음을 의미한다. +이는 단일 `pathPrefix` 필드가 있는 오브젝트 목록으로 정의되며, hostPath 볼륨은 +허용된 접두사로 시작하는 경로를 마운트할 수 있으며 `readOnly` 필드는 +읽기-전용으로 마운트 되어야 함을 나타낸다. +예를 들면 다음과 같습니다. + +```yaml +allowedHostPaths: + # 이 정책은 "/foo", "/foo/", "/foo/bar" 등을 허용하지만, + # "/fool", "/etc/foo" 등은 허용하지 않는다. + # "/foo/../" 는 절대 유효하지 않다. + - pathPrefix: "/foo" + readOnly: true # 읽기 전용 마운트만 허용 +``` + +{{< warning >}}호스트 파일시스템에 제한없는 접근을 부여하며, 컨테이너가 특권을 에스컬레이션 +(다른 컨테이너들에 있는 데이터를 읽고, 시스템 서비스의 자격 증명을 어뷰징(abusing)하는 등)할 +수 있도록 만드는 다양한 방법이 있다. 예를 들면, Kubelet과 같다. + +쓰기 가능한 hostPath 디렉토리 볼륨을 사용하면, 컨테이너가 `pathPrefix` 외부의 +호스트 파일시스템에 대한 통행을 허용하는 방식으로 컨테이너의 파일시스템 쓰기(write)를 허용한다. +쿠버네티스 1.11 이상 버전에서 사용 가능한 `readOnly: true`는 지정된 `pathPrefix`에 대한 +접근을 효과적으로 제한하기 위해 **모든** `allowedHostPaths`에서 사용해야 한다. +{{< /warning >}} + +**ReadOnlyRootFilesystem** - 컨테이너는 읽기-전용 루트 파일시스템(즉, 쓰기 가능한 레이어 없음)으로 +실행해야 한다. + +### FlexVolume 드라이버 + +flexvolume에서 사용할 수 있는 FlexVolume 드라이버의 화이트리스트를 지정한다. +빈 목록 또는 nil은 드라이버에 제한이 없음을 의미한다. +[`volumes`](#볼륨-및-파일시스템) 필드에 `flexVolume` 볼륨 유형이 포함되어 +있는지 확인한다. 그렇지 않으면 FlexVolume 드라이버가 허용되지 않는다. 
+ +예를 들면 다음과 같다. + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: allow-flex-volumes +spec: + # ... 다른 스펙 필드 + volumes: + - flexVolume + allowedFlexVolumes: + - driver: example/lvm + - driver: example/cifs +``` + +### 사용자 및 그룹 + +**RunAsUser** - 컨테이너를 실행할 사용자 ID를 제어힌다. + +- *MustRunAs* - 하나 이상의 `range`를 지정해야 한다. 첫 번째 범위의 최솟값을 +기본값으로 사용한다. 모든 범위에 대해 검증한다. +- *MustRunAsNonRoot* - 파드가 0이 아닌 `runAsUser`로 제출되거나 +이미지에 `USER` 지시문이 정의되어 있어야 한다(숫자 UID 사용). `runAsNonRoot` 또는 +`runAsUser` 설정을 지정하지 않은 파드는 `runAsNonRoot=true`를 설정하도록 +변경되므로 컨테이너에 0이 아닌 숫자가 정의된 `USER` 지시문이 +필요하다. 기본값은 제공되지 않는다. +이 전략에서는 `allowPrivilegeEscalation=false`를 설정하는 것이 좋다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `runAsUser`의 지정도 허용한다. + +**RunAsGroup** - 컨테이너가 실행될 기본 그룹 ID를 제어한다. + +- *MustRunAs* - 하나 이상의 `range`를 지정해야 한다. 첫 번째 범위의 최솟값을 +기본값으로 사용한다. 모든 범위에 대해 검증한다. +- *MayRunAs* - `RunAsGroup`을 지정할 필요가 없다. 그러나 `RunAsGroup`을 지정하면 +정의된 범위에 속해야 한다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `runAsGroup`의 지정도 허용한다. + + +**SupplementalGroups** - 컨테이너가 추가할 그룹 ID를 제어한다. + +- *MustRunAs* - 하나 이상의 `range`를 지정해야 한다. 첫 번째 범위의 최솟값을 +기본값으로 사용한다. 모든 범위에 대해 검증한다. +- *MayRunAs* - 하나 이상의 `range`를 지정해야 한다. `supplementalGroups`에 +기본값을 제공하지 않고 설정하지 않은 상태로 둘 수 있다. +`supplementalGroups`가 설정된 경우 모든 범위에 대해 유효성을 검증한다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `supplementalGroups`의 지정도 +허용한다. + +### 권한 에스컬레이션 + +이 옵션은 `allowPrivilegeEscalation` 컨테이너 옵션을 제어한다. 이 bool은 +컨테이너 프로세스에서 +[`no_new_privs`](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt) +플래그가 설정되는지 여부를 직접 제어한다. 이 플래그는 `setuid` 바이너리가 +유효 사용자 ID를 변경하지 못하게 하고 파일에 추가 기능을 활성화하지 못하게 +한다(예: `ping` 도구 사용을 못하게 함). `MustRunAsNonRoot`를 효과적으로 +강제하려면 이 동작이 필요하다. + +**AllowPrivilegeEscalation** - 사용자가 컨테이너의 보안 컨텍스트를 +`allowPrivilegeEscalation=true`로 설정할 수 있는지 여부를 게이트한다. +이 기본값은 setuid 바이너리를 중단하지 않도록 허용한다. 이를 `false`로 설정하면 +컨테이너의 하위 프로세스가 상위 프로세스보다 더 많은 권한을 얻을 수 없다. + +**DefaultAllowPrivilegeEscalation** - `allowPrivilegeEscalation` 옵션의 +기본값을 설정한다. 이것이 없는 기본 동작은 setuid 바이너리를 중단하지 않도록 +권한 에스컬레이션을 허용하는 것이다. 해당 동작이 필요하지 않은 경우 이 필드를 사용하여 +기본적으로 허용하지 않도록 설정할 수 있지만 파드는 여전히 `allowPrivilegeEscalation`을 +명시적으로 요청할 수 있다. + +### 기능 + +리눅스 기능은 전통적으로 슈퍼유저와 관련된 권한을 보다 세밀하게 분류한다. +이러한 기능 중 일부는 권한 에스컬레이션 또는 컨테이너 분류에 사용될 수 있으며 +파드시큐리티폴리시에 의해 제한될 수 있다. 리눅스 기능에 대한 자세한 내용은 +[기능(7)](http://man7.org/linux/man-pages/man7/capabilities.7.html)을 +참고하길 바란다. + +다음 필드는 대문자로 표기된 기능 이름 목록을 +`CAP_` 접두사 없이 가져온다. + +**AllowedCapabilities** - 컨테이너에 추가될 수 있는 기능의 화이트리스트를 +제공한다. 기본적인 기능 셋은 암시적으로 허용된다. 비어있는 셋은 +기본 셋을 넘어서는 추가 기능이 추가되지 않는 것을 +의미한다. `*`는 모든 기능을 허용하는 데 사용할 수 있다. + +**RequiredDropCapabilities** - 컨테이너에서 삭제해야 하는 기능이다. +이러한 기능은 기본 셋에서 제거되며 추가해서는 안된다. +`RequiredDropCapabilities`에 나열된 기능은 `AllowedCapabilities` 또는 +`DefaultAddCapabilities`에 포함되지 않아야 한다. + +**DefaultAddCapabilities** - 런타임 기본값 외에 기본적으로 컨테이너에 추가되는 기능이다. +도커 런타임을 사용할 때 기본 기능 목록은 +[도커 문서](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities)를 +참고하길 바란다. + +### SELinux + +- *MustRunAs* - `seLinuxOptions`을 구성해야 한다. +`seLinuxOptions`을 기본값으로 사용한다. `seLinuxOptions`에 대해 유효성을 검사한다. +- *RunAsAny* - 기본값은 제공되지 않는다. 어떠한 `seLinuxOptions`의 지정도 +허용한다. + +### AllowedProcMountTypes + +`allowedProcMountTypes`는 허용된 ProcMountTypes의 화이트리스트이다. +비어 있거나 nil은 `DefaultProcMountType`만 사용할 수 있음을 나타낸다. + +`DefaultProcMount`는 /proc의 읽기 전용 및 마스킹(masking)된 경로에 컨테이너 런타임 +기본값을 사용한다. 대부분의 컨테이너 런타임은 특수 장치나 정보가 실수로 보안에 +노출되지 않도록 /proc의 특정 경로를 마스킹한다. 이것은 문자열 +`Default`로 표시된다. 
+ +유일하게 다른 ProcMountType은 `UnmaskedProcMount`로, 컨테이너 런타임의 +기본 마스킹 동작을 무시하고 새로 작성된 /proc 컨테이너가 수정없이 +그대로 유지되도록 한다. 이 문자열은 +`Unmasked`로 표시된다. + +### AppArmor + +파드시큐리티폴리시의 어노테이션을 통해 제어된다. [AppArmor +문서](/docs/tutorials/clusters/apparmor/#podsecuritypolicy-annotations)를 참고하길 바란다. + +### Seccomp + +파드에서 seccomp 프로파일의 사용은 파드시큐리티폴리시의 어노테이션을 통해 +제어할 수 있다. Seccomp는 쿠버네티스의 알파 기능이다. + +**seccomp.security.alpha.kubernetes.io/defaultProfileName** - 컨테이너에 +적용할 기본 seccomp 프로파일을 지정하는 어노테이션이다. 가능한 값은 +다음과 같다. + +- `unconfined` - 대안이 제공되지 않으면 Seccomp가 컨테이너 프로세스에 적용되지 + 않는다(쿠버네티스의 기본값임). +- `runtime/default` - 기본 컨테이너 런타임 프로파일이 사용된다. +- `docker/default` - 도커 기본 seccomp 프로파일이 사용된다. 쿠버네티스 1.11 부터 사용 중단(deprecated) + 되었다. 대신 `runtime/default` 사용을 권장한다. +- `localhost/` - `/`에 있는 노드에서 파일을 프로파일로 + 지정한다. 여기서 ``는 Kubelet의 `--seccomp-profile-root` 플래그를 + 통해 정의된다. + +**seccomp.security.alpha.kubernetes.io/allowedProfileNames** - 파드 seccomp +어노테이션에 허용되는 값을 지정하는 어노테이션. 쉼표로 구분된 +허용된 값의 목록으로 지정된다. 가능한 값은 위에 나열된 값과 +모든 프로파일을 허용하는 `*` 이다. +이 주석이 없으면 기본값을 변경할 수 없다. + +### Sysctl + +기본적으로 모든 안전한 sysctls가 허용된다. + +- `forbiddenSysctls` - 특정 sysctls를 제외한다. 목록에서 안전한 것과 안전하지 않은 sysctls의 조합을 금지할 수 있다. 모든 sysctls 설정을 금지하려면 자체적으로 `*`를 사용한다. +- `allowedUnsafeSysctls` - `forbiddenSysctls`에 나열되지 않는 한 기본 목록에서 허용하지 않은 특정 sysctls를 허용한다. + +[Sysctl 문서]( +/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy)를 참고하길 바란다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +API 세부 정보는 [파드 시큐리티 폴리시 레퍼런스](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) 참조 + +{{% /capture %}} diff --git a/content/ko/docs/concepts/policy/resource-quotas.md b/content/ko/docs/concepts/policy/resource-quotas.md new file mode 100644 index 0000000000000..d86685d98607f --- /dev/null +++ b/content/ko/docs/concepts/policy/resource-quotas.md @@ -0,0 +1,600 @@ +--- +title: 리소스 쿼터 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + +여러 사용자나 팀이 정해진 수의 노드로 클러스터를 공유할 때 +한 팀이 공정하게 분배된 리소스보다 많은 리소스를 사용할 수 있다는 우려가 있다. + +리소스 쿼터는 관리자가 이 문제를 해결하기 위한 도구이다. + +{{% /capture %}} + + +{{% capture body %}} + +`ResourceQuota` 오브젝트로 정의된 리소스 쿼터는 네임스페이스별 총 리소스 사용을 제한하는 +제약 조건을 제공한다. 유형별로 네임스페이스에서 만들 수 있는 오브젝트 수와 +해당 프로젝트의 리소스가 사용할 수 있는 총 컴퓨트 리소스의 양을 +제한할 수 있다. + +리소스 쿼터는 다음과 같이 작동한다. + +- 다른 팀은 다른 네임스페이스에서 작동한다. 현재 이것은 자발적이지만 ACL을 통해 이 필수 사항을 + 적용하기 위한 지원이 계획되어 있다. +- 관리자는 각 네임스페이스에 대해 하나의 `ResourceQuota`를 생성한다. +- 사용자는 네임스페이스에서 리소스(파드, 서비스 등)를 생성하고 쿼터 시스템은 + 사용량을 추적하여 `ResourceQuota`에 정의된 하드(hard) 리소스 제한을 초과하지 않도록 한다. +- 리소스를 생성하거나 업데이트할 때 쿼터 제약 조건을 위반하면 위반된 제약 조건을 설명하는 + 메시지와 함께 HTTP 상태 코드 `403 FORBIDDEN`으로 요청이 실패한다. +- `cpu`, `memory`와 같은 컴퓨트 리소스에 대해 네임스페이스에서 쿼터가 활성화된 경우 + 사용자는 해당값에 대한 요청 또는 제한을 지정해야 한다. 그렇지 않으면 쿼터 시스템이 + 파드 생성을 거부할 수 있다. 힌트: 컴퓨트 리소스 요구 사항이 없는 파드를 기본값으로 설정하려면 `LimitRanger` 어드미션 컨트롤러를 사용하자. + 이 문제를 회피하는 방법에 대한 예제는 [연습](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)을 참고하길 바란다. + +`ResourceQuota` 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names#dns-서브도메인-이름)이어야 한다. + +네임스페이스와 쿼터를 사용하여 만들 수 있는 정책의 예는 다음과 같다. + +- 용량이 32GiB RAM, 16 코어인 클러스터에서 A 팀이 20GiB 및 10 코어를 사용하고 + B 팀은 10GiB 및 4 코어를 사용하게 하고 2GiB 및 2 코어를 향후 할당을 위해 보유하도록 한다. +- "testing" 네임스페이스를 1 코어 및 1GiB RAM을 사용하도록 제한한다. + "production" 네임스페이스에는 원하는 양을 사용하도록 한다. + +클러스터의 총 용량이 네임스페이스의 쿼터 합보다 작은 경우 리소스에 대한 경합이 있을 수 있다. +이것은 선착순으로 처리된다. + +경합이나 쿼터 변경은 이미 생성된 리소스에 영향을 미치지 않는다. + +## 리소스 쿼터 활성화 + +많은 쿠버네티스 배포판에 기본적으로 리소스 쿼터 지원이 활성화되어 있다. 
+API 서버 `--enable-admission-plugins=` 플래그의 인수 중 하나로 +`ResourceQuota`가 있는 경우 활성화된다. + +해당 네임스페이스에 `ResourceQuota`가 있는 경우 특정 네임스페이스에 리소스 쿼터가 적용된다. + +## 컴퓨트 리소스 쿼터 + +지정된 네임스페이스에서 요청할 수 있는 총 [컴퓨트 리소스](/docs/user-guide/compute-resources) 합을 제한할 수 있다. + +다음과 같은 리소스 유형이 지원된다. + +| 리소스 이름 | 설명 | +| --------------------- | ----------------------------------------------------------- | +| `limits.cpu` | 터미널이 아닌 상태의 모든 파드에서 CPU 제한의 합은 이 값을 초과할 수 없음 | +| `limits.memory` | 터미널이 아닌 상태의 모든 파드에서 메모리 제한의 합은 이 값을 초과할 수 없음 | +| `requests.cpu` | 터미널이 아닌 상태의 모든 파드에서 CPU 요청의 합은 이 값을 초과할 수 없음 | +| `requests.memory` | 터미널이 아닌 상태의 모든 파드에서 메모리 요청의 합은 이 값을 초과할 수 없음 | + +### 확장된 리소스에 대한 리소스 쿼터 + +위에서 언급한 리소스 외에도 릴리스 1.10에서는 +[확장된 리소스](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)에 대한 쿼터 지원이 추가되었다. + +확장된 리소스에는 오버커밋(overcommit)이 허용되지 않으므로 하나의 쿼터에서 +동일한 확장된 리소스에 대한 `requests`와 `limits`을 모두 지정하는 것은 의미가 없다. 따라서 확장된 +리소스의 경우 지금은 접두사 `requests.`이 있는 쿼터 항목만 허용된다. + +예를 들어, 리소스 이름이 `nvidia.com/gpu`이고 네임스페이스에서 요청된 총 GPU 수를 4개로 제한하려는 경우, +GPU 리소스를 다음과 같이 쿼터를 정의할 수 있다. + +* `requests.nvidia.com/gpu: 4` + +자세한 내용은 [쿼터 보기 및 설정](#쿼터-보기-및-설정)을 참고하길 바란다. + + +## 스토리지 리소스 쿼터 + +지정된 네임스페이스에서 요청할 수 있는 총 [스토리지 리소스](/ko/docs/concepts/storage/persistent-volumes/) 합을 제한할 수 있다. + +또한 연관된 ​​스토리지 클래스를 기반으로 스토리지 리소스 사용을 제한할 수 있다. + +| 리소스 이름 | 설명 | +| --------------------- | ----------------------------------------------------------- | +| `requests.storage` | 모든 퍼시스턴트 볼륨 클레임에서 스토리지 요청의 합은 이 값을 초과할 수 없음 | +| `persistentvolumeclaims` | 네임스페이스에 존재할 수 있는 총 [퍼시스턴트 볼륨 클레임](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트볼륨클레임) 수 | +| `.storageclass.storage.k8s.io/requests.storage` | storage-class-name과 관련된 모든 퍼시스턴트 볼륨 클레임에서 스토리지 요청의 합은 이 값을 초과할 수 없음 | +| `.storageclass.storage.k8s.io/persistentvolumeclaims` | storage-class-name과 관련된 모든 퍼시스턴트 볼륨 클레임에서 네임스페이스에 존재할 수 있는 총 [퍼시스턴트 볼륨 클레임](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트볼륨클레임) 수 | + +예를 들어, 운영자가 `bronze` 스토리지 클래스와 별도로 `gold` 스토리지 클래스를 사용하여 스토리지에 쿼터를 지정하려는 경우 운영자는 다음과 같이 +쿼터를 정의할 수 있다. + +* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi` +* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi` + +릴리스 1.8에서는 로컬 임시 스토리지에 대한 쿼터 지원이 알파 기능으로 추가되었다. + +| 리소스 이름 | 설명 | +| ------------------------------- |----------------------------------------------------------- | +| `requests.ephemeral-storage` | 네임스페이스의 모든 파드에서 로컬 임시 스토리지 요청의 합은 이 값을 초과할 수 없음 | +| `limits.ephemeral-storage` | 네임스페이스의 모든 파드에서 로컬 임시 스토리지 제한의 합은 이 값을 초과할 수 없음 | + +## 오브젝트 수 쿼터 + +1.9 릴리스는 다음 구문을 사용하여 모든 표준 네임스페이스 리소스 유형에 쿼터를 지정하는 지원을 추가했다. + +* `count/.` + +다음은 사용자가 오브젝트 수 쿼터 아래에 배치하려는 리소스 셋의 예이다. + +* `count/persistentvolumeclaims` +* `count/services` +* `count/secrets` +* `count/configmaps` +* `count/replicationcontrollers` +* `count/deployments.apps` +* `count/replicasets.apps` +* `count/statefulsets.apps` +* `count/jobs.batch` +* `count/cronjobs.batch` +* `count/deployments.extensions` + +1.15 릴리스는 동일한 구문을 사용하여 사용자 정의 리소스에 대한 지원을 추가했다. +예를 들어 `example.com` API 그룹에서 `widgets` 사용자 정의 리소스에 대한 쿼터를 생성하려면 `count/widgets.example.com`을 사용한다. + +`count/*` 리소스 쿼터를 사용할 때 서버 스토리지 영역에 있다면 오브젝트는 쿼터에 대해 과금된다. +이러한 유형의 쿼터는 스토리지 리소스 고갈을 방지하는 데 유용하다. 예를 들어, +크기가 큰 서버에서 시크릿 수에 쿼터를 지정할 수 있다. 클러스터에 시크릿이 너무 많으면 실제로 서버와 +컨트롤러가 시작되지 않을 수 있다! 네임스페이스에 너무 많은 작업을 생성하는 +잘못 구성된 크론 잡으로 인해 서비스 거부를 유발하는 것으로부터 보호하기 위해 작업의 쿼터를 지정하도록 선택할 수 있다. + +1.9 릴리스 이전에는 제한된 리소스 셋에서 일반 오브젝트 수 쿼터를 적용할 수 있었다. +또한, 특정 리소스에 대한 쿼터를 유형별로 추가로 제한할 수 있다. + +다음 유형이 지원된다. 
+ +| 리소스 이름 | 설명 | +| ------------------------------- | ------------------------------------------------- | +| `configmaps` | 네임스페이스에 존재할 수 있는 총 구성 맵 수 | +| `persistentvolumeclaims` | 네임스페이스에 존재할 수 있는 총 [퍼시스턴트 볼륨 클레임](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트볼륨클레임) 수 | +| `pods` | 네임스페이스에 존재할 수 있는 터미널이 아닌 상태의 파드의 총 수. `.status.phase in (Failed, Succeeded)`가 true인 경우 파드는 터미널 상태임 | +| `replicationcontrollers` | 네임스페이스에 존재할 수 있는 총 레플리케이션 컨트롤러 수 | +| `resourcequotas` | 네임스페이스에 존재할 수 있는 총 [리소스 쿼터](/docs/reference/access-authn-authz/admission-controllers/#resourcequota) 수 | +| `services` | 네임스페이스에 존재할 수 있는 총 서비스 수 | +| `services.loadbalancers` | 네임스페이스에 존재할 수 있는 로드 밸런서 유형의 총 서비스 수 | +| `services.nodeports` | 네임스페이스에 존재할 수 있는 노드 포트 유형의 총 서비스 수 | +| `secrets` | 네임스페이스에 존재할 수 있는 총 시크릿 수 | + +예를 들어, `pods` 쿼터는 터미널이 아닌 단일 네임스페이스에서 생성된 `pods` 수를 계산하고 최대값을 적용한다. +사용자가 작은 파드를 많이 생성하여 클러스터의 파드 IP 공급이 고갈되는 경우를 피하기 위해 +네임스페이스에 `pods` 쿼터를 설정할 수 있다. + +## 쿼터 범위 + +각 쿼터에는 연결된 범위 셋이 있을 수 있다. 쿼터는 열거된 범위의 교차 부분과 일치하는 경우에만 +리소스 사용량을 측정한다. + +범위가 쿼터에 추가되면 해당 범위와 관련된 리소스를 지원하는 리소스 수가 제한된다. +허용된 셋 이외의 쿼터에 지정된 리소스는 유효성 검사 오류가 발생한다. + +| 범위 | 설명 | +| ----- | ----------- | +| `Terminating` | `.spec.activeDeadlineSeconds >= 0`에 일치하는 파드 | +| `NotTerminating` | `.spec.activeDeadlineSeconds is nil`에 일치하는 파드 | +| `BestEffort` | 최상의 서비스 품질을 제공하는 파드 | +| `NotBestEffort` | 서비스 품질이 나쁜 파드 | + +`BestEffort` 범위는 다음의 리소스(파드)를 추적하도록 쿼터를 제한한다. + +`Terminating`, `NotTerminating` 및 `NotBestEffort` 범위는 쿼터를 제한하여 다음의 리소스를 추적한다. + +* `cpu` +* `limits.cpu` +* `limits.memory` +* `memory` +* `pods` +* `requests.cpu` +* `requests.memory` + +### PriorityClass별 리소스 쿼터 + +{{< feature-state for_k8s_version="1.12" state="beta" >}} + +특정 [우선 순위](/docs/concepts/configuration/pod-priority-preemption/#pod-priority)로 파드를 생성할 수 있다. +쿼터 스펙의 `scopeSelector` 필드를 사용하여 파드의 우선 순위에 따라 파드의 시스템 리소스 사용을 +제어할 수 있다. + +쿼터 스펙의 `scopeSelector`가 파드를 선택한 경우에만 쿼터가 일치하고 사용된다. + +이 예에서는 쿼터 오브젝트를 생성하여 특정 우선 순위의 파드와 일치시킨다. +예제는 다음과 같이 작동한다. + +- 클러스터의 파드는 "low(낮음)", "medium(중간)", "high(높음)"의 세 가지 우선 순위 클래스 중 하나를 가진다. +- 각 우선 순위마다 하나의 쿼터 오브젝트가 생성된다. + +다음 YAML을 `quota.yml` 파일에 저장한다. + +```yaml +apiVersion: v1 +kind: List +items: +- apiVersion: v1 + kind: ResourceQuota + metadata: + name: pods-high + spec: + hard: + cpu: "1000" + memory: 200Gi + pods: "10" + scopeSelector: + matchExpressions: + - operator : In + scopeName: PriorityClass + values: ["high"] +- apiVersion: v1 + kind: ResourceQuota + metadata: + name: pods-medium + spec: + hard: + cpu: "10" + memory: 20Gi + pods: "10" + scopeSelector: + matchExpressions: + - operator : In + scopeName: PriorityClass + values: ["medium"] +- apiVersion: v1 + kind: ResourceQuota + metadata: + name: pods-low + spec: + hard: + cpu: "5" + memory: 10Gi + pods: "10" + scopeSelector: + matchExpressions: + - operator : In + scopeName: PriorityClass + values: ["low"] +``` + +`kubectl create`를 사용하여 YAML을 적용한다. + +```shell +kubectl create -f ./quota.yml +``` + +```shell +resourcequota/pods-high created +resourcequota/pods-medium created +resourcequota/pods-low created +``` + +`kubectl describe quota`를 사용하여 `Used` 쿼터가 `0`인지 확인하자. 
+ +```shell +kubectl describe quota +``` + +```shell +Name: pods-high +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 1k +memory 0 200Gi +pods 0 10 + + +Name: pods-low +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 5 +memory 0 10Gi +pods 0 10 + + +Name: pods-medium +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 10 +memory 0 20Gi +pods 0 10 +``` + +우선 순위가 "high"인 파드를 생성한다. 다음 YAML을 +`high-priority-pod.yml` 파일에 저장한다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: high-priority +spec: + containers: + - name: high-priority + image: ubuntu + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello; sleep 10;done"] + resources: + requests: + memory: "10Gi" + cpu: "500m" + limits: + memory: "10Gi" + cpu: "500m" + priorityClassName: high +``` + +`kubectl create`로 적용하자. + +```shell +kubectl create -f ./high-priority-pod.yml +``` + +"high" 우선 순위 쿼터가 적용된 `pods-high`에 대한 "Used" 통계가 변경되었고 +다른 두 쿼터는 변경되지 않았는지 확인한다. + +```shell +kubectl describe quota +``` + +```shell +Name: pods-high +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 500m 1k +memory 10Gi 200Gi +pods 1 10 + + +Name: pods-low +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 5 +memory 0 10Gi +pods 0 10 + + +Name: pods-medium +Namespace: default +Resource Used Hard +-------- ---- ---- +cpu 0 10 +memory 0 20Gi +pods 0 10 +``` + +`scopeSelector`는 `operator` 필드에서 다음 값을 지원한다. + +* `In` +* `NotIn` +* `Exist` +* `DoesNotExist` + +## 요청과 제한의 비교 {#requests-vs-limits} + +컴퓨트 리소스를 할당할 때 각 컨테이너는 CPU 또는 메모리에 대한 요청과 제한값을 지정할 수 있다. +쿼터는 값에 대한 쿼터를 지정하도록 구성할 수 있다. + +쿼터에 `requests.cpu`나 `requests.memory`에 지정된 값이 있으면 들어오는 모든 +컨테이너가 해당 리소스에 대한 명시적인 요청을 지정해야 한다. 쿼터에 `limits.cpu`나 +`limits.memory`에 지정된 값이 있으면 들어오는 모든 컨테이너가 해당 리소스에 대한 명시적인 제한을 지정해야 한다. + +## 쿼터 보기 및 설정 + +Kubectl은 쿼터 생성, 업데이트 및 보기를 지원한다. + +```shell +kubectl create namespace myspace +``` + +```shell +cat < compute-resources.yaml +apiVersion: v1 +kind: ResourceQuota +metadata: + name: compute-resources +spec: + hard: + requests.cpu: "1" + requests.memory: 1Gi + limits.cpu: "2" + limits.memory: 2Gi + requests.nvidia.com/gpu: 4 +EOF +``` + +```shell +kubectl create -f ./compute-resources.yaml --namespace=myspace +``` + +```shell +cat < object-counts.yaml +apiVersion: v1 +kind: ResourceQuota +metadata: + name: object-counts +spec: + hard: + configmaps: "10" + persistentvolumeclaims: "4" + pods: "4" + replicationcontrollers: "20" + secrets: "10" + services: "10" + services.loadbalancers: "2" +EOF +``` + +```shell +kubectl create -f ./object-counts.yaml --namespace=myspace +``` + +```shell +kubectl get quota --namespace=myspace +``` + +```shell +NAME AGE +compute-resources 30s +object-counts 32s +``` + +```shell +kubectl describe quota compute-resources --namespace=myspace +``` + +```shell +Name: compute-resources +Namespace: myspace +Resource Used Hard +-------- ---- ---- +limits.cpu 0 2 +limits.memory 0 2Gi +requests.cpu 0 1 +requests.memory 0 1Gi +requests.nvidia.com/gpu 0 4 +``` + +```shell +kubectl describe quota object-counts --namespace=myspace +``` + +```shell +Name: object-counts +Namespace: myspace +Resource Used Hard +-------- ---- ---- +configmaps 0 10 +persistentvolumeclaims 0 4 +pods 0 4 +replicationcontrollers 0 20 +secrets 1 10 +services 0 10 +services.loadbalancers 0 2 +``` + +Kubectl은 `count/.` 구문을 사용하여 모든 표준 네임스페이스 리소스에 대한 +오브젝트 수 쿼터를 지원한다. 
+ +```shell +kubectl create namespace myspace +``` + +```shell +kubectl create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 --namespace=myspace +``` + +```shell +kubectl run nginx --image=nginx --replicas=2 --namespace=myspace +``` + +```shell +kubectl describe quota --namespace=myspace +``` + +```shell +Name: test +Namespace: myspace +Resource Used Hard +-------- ---- ---- +count/deployments.extensions 1 2 +count/pods 2 3 +count/replicasets.extensions 1 4 +count/secrets 1 4 +``` + +## 쿼터 및 클러스터 용량 + +`ResourceQuotas`는 클러스터 용량과 무관하다. 그것들은 절대 단위로 표현된다. +따라서 클러스터에 노드를 추가해도 각 네임스페이스에 더 많은 리소스를 +사용할 수 있는 기능이 자동으로 부여되지는 *않는다*. + +가끔 다음과 같은 보다 복잡한 정책이 필요할 수 있다. + + - 여러 팀으로 전체 클러스터 리소스를 비례적으로 나눈다. + - 각 테넌트가 필요에 따라 리소스 사용량을 늘릴 수 있지만, 실수로 리소스가 고갈되는 것을 + 막기 위한 충분한 제한이 있다. + - 하나의 네임스페이스에서 요구를 감지하고 노드를 추가하며 쿼터를 늘린다. + +이러한 정책은 쿼터 사용을 감시하고 다른 신호에 따라 각 네임스페이스의 쿼터 하드 제한을 +조정하는 "컨트롤러"를 작성하여 `ResourceQuotas`를 구성 요소로 +사용하여 구현할 수 있다. + +리소스 쿼터는 통합된 클러스터 리소스를 분할하지만 노드에 대한 제한은 없다. +여러 네임스페이스의 파드가 동일한 노드에서 실행될 수 있다. + +## 기본적으로 우선 순위 클래스 소비 제한 + +파드가 특정 우선 순위, 예를 들어 일치하는 쿼터 오브젝트가 존재하는 경우에만 "cluster-services"가 네임스페이스에 허용되어야 힌다. + +이 메커니즘을 통해 운영자는 특정 우선 순위가 높은 클래스의 사용을 제한된 수의 네임스페이스로 제한할 수 있으며 모든 네임스페이스가 기본적으로 이러한 우선 순위 클래스를 사용할 수 있는 것은 아니다. + +이를 적용하려면 kube-apiserver 플래그 `--admission-control-config-file`을 사용하여 다음 구성 파일의 경로를 전달해야 한다. + +{{< tabs name="example1" >}} +{{% tab name="apiserver.config.k8s.io/v1" %}} +```yaml +apiVersion: apiserver.config.k8s.io/v1 +kind: AdmissionConfiguration +plugins: +- name: "ResourceQuota" + configuration: + apiVersion: apiserver.config.k8s.io/v1 + kind: ResourceQuotaConfiguration + limitedResources: + - resource: pods + matchScopes: + - scopeName: PriorityClass + operator: In + values: ["cluster-services"] +``` +{{% /tab %}} +{{% tab name="apiserver.k8s.io/v1alpha1" %}} +```yaml +# Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1 +apiVersion: apiserver.k8s.io/v1alpha1 +kind: AdmissionConfiguration +plugins: +- name: "ResourceQuota" + configuration: + # Deprecated in v1.17 in favor of apiserver.config.k8s.io/v1, ResourceQuotaConfiguration + apiVersion: resourcequota.admission.k8s.io/v1beta1 + kind: Configuration + limitedResources: + - resource: pods + matchScopes: + - scopeName: PriorityClass + operator: In + values: ["cluster-services"] +``` +{{% /tab %}} +{{< /tabs >}} + +이제 "cluster-services" 파드는 `scopeSelector`와 일치하는 쿼터 오브젝트가 있는 네임스페이스에서만 허용된다. +예를 들면 다음과 같다. +```yaml + scopeSelector: + matchExpressions: + - scopeName: PriorityClass + operator: In + values: ["cluster-services"] +``` + +자세한 내용은 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)와 [우선 순위 클래스에 대한 쿼터 지원 디자인 문서](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)를 참고하길 바란다. + +## 예제 + +[리소스 쿼터를 사용하는 방법에 대한 자세한 예](/docs/tasks/administer-cluster/quota-api-object/)를 참고하길 바란다. + +{{% /capture %}} + +{{% capture whatsnext %}} + +자세한 내용은 [리소스쿼터 디자인 문서](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)를 참고하길 바란다. 
+ +{{% /capture %}} diff --git a/content/ko/docs/concepts/scheduling/kube-scheduler.md b/content/ko/docs/concepts/scheduling/kube-scheduler.md new file mode 100644 index 0000000000000..a7c33265f29db --- /dev/null +++ b/content/ko/docs/concepts/scheduling/kube-scheduler.md @@ -0,0 +1,97 @@ +--- +title: 쿠버네티스 스케줄러 +content_template: templates/concept +weight: 50 +--- + +{{% capture overview %}} + +쿠버네티스에서 _스케줄링_ 은 {{< glossary_tooltip term_id="kubelet" >}}이 +파드를 실행할 수 있도록 {{< glossary_tooltip text="파드" term_id="pod" >}}가 +{{< glossary_tooltip text="노드" term_id="node" >}}에 적합한지 확인하는 것을 말한다. + +{{% /capture %}} + +{{% capture body %}} + +## 스케줄링 개요 {#scheduling} + +스케줄러는 노드가 할당되지 않은 새로 생성된 파드를 감시한다. +스케줄러가 발견한 모든 파드에 대해 스케줄러는 해당 파드가 실행될 +최상의 노드를 찾는 책임을 진다. 스케줄러는 +아래 설명된 스케줄링 원칙을 고려하여 이 배치 결정을 +하게 된다. + +파드가 특정 노드에 배치되는 이유를 이해하려고 하거나 +사용자 정의된 스케줄러를 직접 구현하려는 경우 이 +페이지를 통해서 스케줄링에 대해 배울 수 있을 것이다. + +## kube-scheduler + +[kube-scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)는 +쿠버네티스의 기본 스케줄러이며 {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}의 +일부로 실행된다. +kube-scheduler는 원하거나 필요에 따라 자체 스케줄링 컴포넌트를 +만들고 대신 사용할 수 있도록 설계되었다. + +새로 생성된 모든 파드 또는 예약되지 않은 다른 파드에 대해 kube-scheduler는 +실행할 최적의 노드를 선택한다. 그러나 파드의 모든 컨테이너에는 +리소스에 대한 요구사항이 다르며 모든 파드에도 +요구사항이 다르다. 따라서 기존 노드들은 +특정 스케줄링 요구사항에 따라 필터링 되어야 한다. + +클러스터에서 파드에 대한 스케줄링 요구사항을 충족하는 노드를 +_실행 가능한(feasible)_ 노드라고 한다. 적합한 노드가 없으면 스케줄러가 +배치할 수 있을 때까지 파드가 스케줄 되지 않은 상태로 유지된다. + +스케줄러는 파드가 실행 가능한 노드를 찾은 다음 실행 가능한 노드의 +점수를 측정하는 기능 셋을 수행하고 실행 가능한 노드 중에서 가장 높은 점수를 +가진 노드를 선택하여 파드를 실행한다. 그런 다음 스케줄러는 +_바인딩_ 이라는 프로세스에서 이 결정에 대해 API 서버에 알린다. + +스케줄링 결정을 위해 고려해야 할 요소에는 +개별 및 집단 리소스 요구사항, 하드웨어 / 소프트웨어 / +정책 제한조건, 어피니티 및 안티-어피니티 명세, 데이터 +지역성(data locality), 워크로드 간 간섭 등이 포함된다. + +### kube-scheduler에서 노드 선택 {#kube-scheduler-implementation} + +kube-scheduler는 2단계 작업에서 파드에 대한 노드를 선택한다. + +1. 필터링 +1. 스코어링(scoring) + +_필터링_ 단계는 파드를 스케줄링 할 수 있는 노드 셋을 +찾는다. 예를 들어 PodFitsResources 필터는 +후보 노드가 파드의 특정 리소스 요청을 충족시키기에 충분한 가용 리소스가 +있는지 확인한다. 이 단계 다음에 노드 목록에는 적합한 노드들이 +포함된다. 하나 이상의 노드가 포함된 경우가 종종 있을 것이다. 목록이 비어 있으면 +해당 파드는 (아직) 스케줄링 될 수 없다. + +_스코어링_ 단계에서 스케줄러는 목록에 남아있는 노드의 순위를 지정하여 +가장 적합한 파드 배치를 선택한다. 스케줄러는 사용 중인 스코어링 규칙에 따라 +이 점수를 기준으로 필터링에서 통과된 각 노드에 대해 점수를 지정한다. + +마지막으로 kube-scheduler는 파드를 순위가 가장 높은 노드에 할당한다. +점수가 같은 노드가 두 개 이상인 경우 kube-scheduler는 +이들 중 하나를 임의로 선택한다. + +스케줄러의 필터링 및 스코어링 동작을 구성하는 데 지원되는 두 가지 +방법이 있다. + +1. [스케줄링 정책](/docs/reference/scheduling/policies)을 사용하면 + 필터링을 위한 _단정(Predicates)_ 및 스코어링을 위한 _우선순위(Priorities)_ 를 구성할 수 있다. +1. [스케줄링 프로파일](/docs/reference/scheduling/profiles)을 사용하면 + `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit` 등의 + 다른 스케줄링 단계를 구현하는 플러그인을 구성할 수 있다. 다른 프로파일을 실행하도록 + kube-scheduler를 구성할 수도 있다. 
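+
+아래는 두 번째 방법(스케줄링 프로파일)을 사용하는 구성의 간단한 스케치이다. 기본 프로파일과 함께
+스코어링 단계의 플러그인을 모두 비활성화한 프로파일을 추가로 정의한다고 가정했으며, 프로파일 이름
+`no-scoring-scheduler` 는 설명을 위해 임의로 정한 것이다. API 버전은 클러스터 버전에 따라 달라질 수 있다.
+이러한 구성 파일은 kube-scheduler의 `--config` 플래그로 전달한다.
+
+```yaml
+apiVersion: kubescheduler.config.k8s.io/v1alpha2
+kind: KubeSchedulerConfiguration
+profiles:
+  # 기본 플러그인 구성을 그대로 사용하는 프로파일
+  - schedulerName: default-scheduler
+  # 스코어링 단계의 플러그인을 모두 비활성화한 프로파일
+  - schedulerName: no-scoring-scheduler
+    plugins:
+      preScore:
+        disabled:
+          - name: '*'
+      score:
+        disabled:
+          - name: '*'
+```
+
+파드는 `.spec.schedulerName` 필드에 프로파일 이름을 지정하여 어떤 프로파일로 스케줄링될지 선택할 수 있다.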
+ +{{% /capture %}} +{{% capture whatsnext %}} +* [스케줄러 성능 튜닝](/ko/docs/concepts/scheduling/scheduler-perf-tuning/)에 대해 읽기 +* [파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)에 대해 읽기 +* kube-scheduler의 [레퍼런스 문서](/docs/reference/command-line-tools-reference/kube-scheduler/) 읽기 +* [멀티 스케줄러 구성하기](/docs/tasks/administer-cluster/configure-multiple-schedulers/)에 대해 배우기 +* [토폴로지 관리 정책](/docs/tasks/administer-cluster/topology-manager/)에 대해 배우기 +* [파드 오버헤드](/docs/concepts/configuration/pod-overhead/)에 대해 배우기 +{{% /capture %}} diff --git a/content/ko/docs/concepts/scheduling/scheduler-perf-tuning.md b/content/ko/docs/concepts/scheduling/scheduler-perf-tuning.md index 6ff8b288f7fd1..3387bdce43ce1 100644 --- a/content/ko/docs/concepts/scheduling/scheduler-perf-tuning.md +++ b/content/ko/docs/concepts/scheduling/scheduler-perf-tuning.md @@ -8,7 +8,7 @@ weight: 70 {{< feature-state for_k8s_version="1.14" state="beta" >}} -[kube-scheduler](/docs/concepts/scheduling/kube-scheduler/#kube-scheduler) +[kube-scheduler](/ko/docs/concepts/scheduling/kube-scheduler/#kube-scheduler) 는 쿠버네티스의 기본 스케줄러이다. 그것은 클러스터의 노드에 파드를 배치하는 역할을 한다. @@ -26,25 +26,69 @@ API 서버에 해당 결정을 통지한다. {{% capture body %}} -## 점수를 측정할 노드의 비율 - -쿠버네티스 1.12 이전 버전에서, Kube-scheduler는 클러스터의 모든 노드에 -대한 적합성(feasibility)을 확인한 후에 적합한 노드들에 대해서 점수를 측정했다. -쿠버네티스 1.12는 새로운 특징을 추가했으며, 이 특징은 스케줄러가 특정 -숫자의 적합한 노드를 찾은 이후에 추가적인 적합 노드를 찾는 것을 중단하게 한다. -이것은 대규모 클러스터에서 스케줄러 성능을 향상시킨다. 해당 숫자는 클러스터 -크기에 대한 비율로 지정된다. 그 비율은 `percentageOfNodesToScore` 구성 -옵션으로 제어될 수 있다. 값의 범위는 1과 100 사이여야 한다. 더 높은 값은 -100%로 간주된다. 0은 구성 옵션을 제공하지 않는 것과 동일하다. -점수를 측정할 노드의 비율이 구성 옵션에 명시되지 않은 경우를 대비하여, 쿠버네티스 1.14는 -클러스터의 크기에 기반하여 해당 비율 값을 찾는 로직을 가지고 있다. 이 로직은 -100-노드 클러스터에 대해 50%를 값으로 출력하는 선형 공식을 사용한다. 해당 공식은 5000-노드 -클러스터에 대해서는 10%를 값으로 출력한다. 자동으로 지정되는 값의 하한값은 5%이다. 다시 -말해, 사용자가 구성 옵션에 5 보다 낮은 값은 지정하지 않은 한, 스케줄러는 -클러스터의 크기와 무관하게 적어도 5%의 클러스터에 대해서는 항상 점수를 -측정한다. - -아래는 `percentageOfNodesToScore`를 50%로 설정하는 구성 예제이다. +큰 규모의 클러스터에서는 스케줄러의 동작을 튜닝하여 응답 시간 +(새 파드가 빠르게 배치됨)과 정확도(스케줄러가 배치 결정을 잘 못하는 경우가 드물게 됨) +사이에서의 스케줄링 결과를 균형 잡을 수 있다. + +kube-scheduler 의 `percentageOfNodesToScore` 설정을 통해 +이 튜닝을 구성 한다. 이 KubeSchedulerConfiguration 설정에 따라 클러스터의 +노드를 스케줄링할 수 있는 임계값이 결정된다. + +### 임계값 설정하기 + +`percentageOfNodesToScore` 옵션은 0과 100 사이의 값을 +허용한다. 값 0은 kube-scheduler가 컴파일 된 기본값을 +사용한다는 것을 나타내는 특별한 숫자이다. +`percentageOfNodesToScore` 를 100 보다 높게 설정해도 kube-scheduler는 +마치 100을 설정한 것처럼 작동한다. + +값을 변경하려면, kube-scheduler 구성 파일(이 파일은 `/etc/kubernetes/config/kube-scheduler.yaml` +일 수 있다)을 편집한 다음 스케줄러를 재시작 한다. + +이를 변경한 후에 다음을 실행해서 +```bash +kubectl get componentstatuses +``` +kube-scheduler 컴포넌트가 정상인지 확인할 수 있다. 출력은 다음과 유사하다. +``` +NAME STATUS MESSAGE ERROR +controller-manager Healthy ok +scheduler Healthy ok +... +``` + +## 노드 스코어링(scoring) 임계값 {#percentage-of-nodes-to-score} + +스케줄링 성능을 향상시키기 위해 kube-scheduler는 실행 가능한 +노드가 충분히 발견되면 이를 찾는 것을 중단할 수 있다. 큰 규모의 클러스터에서는 +모든 노드를 고려하는 고지식한 접근 방식에 비해 시간이 절약된다. + +클러스터에 있는 모든 노드의 정수 백분율로 충분한 노두의 수에 +대한 임계값을 지정한다. kube-scheduler는 이 값을 노드의 +정수 값(숫자)로 변환 한다. 스케줄링 중에 kube-scheduler가 구성된 +비율을 초과 할만큼 충분히 실행 가능한 노드를 식별한 경우, kube-scheduler는 +더 실행 가능한 노드를 찾는 검색을 중지하고 +[스코어링 단계](/ko/docs/concepts/scheduling/kube-scheduler/#kube-scheduler-implementation)를 진행한다. + +[스케줄러가 노드 탐색을 반복(iterate)하는 방법](#스케줄러가-노드-탐색을-반복-iterate-하는-방법) +은 이 프로세스를 자세히 설명한다. + +### 기본 임계값 + +임계값을 지정하지 않으면 쿠버네티스는 100 노드 클러스터인 +경우 50%, 5000 노드 클러스터인 경우 10%를 산출하는 +선형 공식을 사용하여 수치를 계산한다. 자동 값의 하한선은 5% 이다. 
+ +즉, `percentageOfNodesToScore` 를 명시적으로 5보다 작게 설정하지 +않은 경우 클러스터가 아무리 크더라도 kube-scheduler는 +항상 클러스터의 최소 5%를 스코어링을 한다. + +스케줄러가 클러스터의 모든 노드에 스코어링을 하려면 +`percentageOfNodesToScore` 를 100으로 설정 한다. + +## 예시 + +아래는 `percentageOfNodesToScore`를 50%로 설정하는 구성 예시이다. ```yaml apiVersion: kubescheduler.config.k8s.io/v1alpha1 @@ -57,38 +101,37 @@ algorithmSource: percentageOfNodesToScore: 50 ``` +### percentageOfNodesToScore 튜닝 + +`percentageOfNodesToScore`는 1과 100 사이의 값이어야하며 +기본 값은 클러스터 크기에 따라 계산된다. 또한 50 노드로 하드 코딩된 +최소 값도 있다. + {{< note >}} 클러스터에서 적합한 노드가 50 미만인 경우, 스케줄러는 여전히 모든 노드를 확인한다. 그 이유는 스케줄러가 탐색을 조기 중단하기에는 적합한 -노드의 수가 충분하지 않기 때문이다. {{< /note >}} +노드의 수가 충분하지 않기 때문이다. -**이 특징을 비활성화하려면**, `percentageOfNodesToScore`를 100으로 지정한다. +규모가 작은 클러스터에서는 `percentageOfNodesToScore` 에 낮은 값을 설정하면, +비슷한 이유로 변경 사항이 거의 또는 전혀 영향을 미치지 않게 된다. -### percentageOfNodesToScore 튜닝 +클러스터에 수백 개 이하의 노드가 있는 경우 이 구성 옵션을 +기본값으로 둔다. 이는 변경사항을 적용하더라도 스케줄러의 +성능이 크게 향상되지 않는다. +{{< /note >}} -`percentageOfNodesToScore`는 1과 100 사이의 값이어야하며 -기본 값은 클러스터 크기에 따라 계산된다. 또한 50 노드로 하드 코딩된 -최소 값도 있다. 이는 수백 개 정도의 노드가 있는 -클러스터에서는 해당 옵션을 더 낮은 값으로 변경하더라도 스케줄러가 -찾으려는 적합한 노드의 개수에는 크게 영향을 주지 않는다는 뜻이다. -이것은 규모가 작은 클러스터에서는 이 옵션의 조정이 성능을 눈에 띄게 향상시키지 않는 -것을 감안하여 의도적으로 설계되었다. 1000 노드 이상의 큰 규모의 클러스터에서는 이 값을 -낮은 수로 설정하면 눈에 띄는 성능 향상을 보일 수도 있다. - -이 값을 세팅할 때 중요하게 고려해야 할 사항은, 클러스터에서 +이 값을 세팅할 때 중요하고 자세한 사항은, 클러스터에서 적은 수의 노드에 대해서만 적합성을 확인하면, 주어진 파드에 대해서 일부 노드의 점수는 측정이되지 않는다는 것이다. 결과적으로, 주어진 파드를 실행하는데 가장 높은 점수를 가질 가능성이 있는 노드가 점수 측정 단계로 조차 넘어가지 않을 수 있다. 이것은 파드의 이상적인 배치보다 낮은 결과를 초래할 것이다. -그 이유로, 이 값은 너무 낮은 비율로 설정되면 안 된다. 대략의 경험적 법칙은 10 이하의 -값으로는 설정하지 않는 것이다. 더 낮은 값은 사용자의 애플리케이션에서 스케줄러의 -처리량이 치명적이고 노드의 점수가 중요하지 않을 경우에만 사용해야 한다. 다시 말해서, 파드의 -실행에 적합하기만 하다면 어느 노드가 선택되어도 사용자에게 상관없는 경우를 말한다. -만약 사용자의 클러스터가 단지 백여 개 또는 더 적은 노드를 가지고 있는 경우, 이 구성 옵션의 -기본 값보다 낮은 값으로의 변경을 추천하지 않는다. 그것이 스케줄러의 성능을 크게 -향상시키지는 않을 것이다. +`percentageOfNodesToScore` 를 매우 낮게 설정해서 kube-scheduler가 +파드 배치 결정을 잘못 내리지 않도록 해야 한다. 스케줄러의 처리량에 +대해 애플리케이션이 중요하고 노드 점수가 중요하지 않은 경우가 아니라면 +백분율을 10% 미만으로 설정하지 말아야 한다. 즉, 가능한 한 +모든 노드에서 파드를 실행하는 것이 좋다. -### 스케줄러가 노드 탐색을 반복(iterate)하는 방법 +## 스케줄러가 노드 탐색을 반복(iterate)하는 방법 이 섹션은 이 특징의 상세한 내부 방식을 이해하고 싶은 사람들을 위해 작성되었다. diff --git a/content/ko/docs/concepts/services-networking/endpoint-slices.md b/content/ko/docs/concepts/services-networking/endpoint-slices.md index caf6bb1cd464d..4fc51a9fa07c9 100644 --- a/content/ko/docs/concepts/services-networking/endpoint-slices.md +++ b/content/ko/docs/concepts/services-networking/endpoint-slices.md @@ -6,7 +6,7 @@ feature: 쿠버네티스 클러스터에서 확장 가능한 네트워크 엔드포인트 추적. content_template: templates/concept -weight: 10 +weight: 15 --- @@ -22,6 +22,21 @@ _엔드포인트슬라이스_ 는 쿠버네티스 클러스터 내의 네트워 {{% capture body %}} +## 사용동기 + +엔드포인트 API는 쿠버네티스에서 네트워크 엔드포인트를 추적하는 +간단하고 직접적인 방법을 제공한다. 불행하게도 쿠버네티스 클러스터와 +서비스가 점점 더 커짐에 따라, 이 API의 한계가 더욱 눈에 띄게 되었다. +특히나, 많은 수의 네트워크 엔드포인트로 확장하는 것에 +어려움이 있었다. + +이후로 서비스에 대한 모든 네트워크 엔드포인트가 단일 엔드포인트 +리소스에 저장되기 때문에 엔드포인트 리소스가 상당히 커질 수 있다. 이것은 쿠버네티스 +구성요소 (특히 마스터 컨트롤 플레인)의 성능에 영향을 미쳤고 +엔드포인트가 변경될 때 상당한 양의 네트워크 트래픽과 처리를 초래했다. +엔드포인트슬라이스는 이러한 문제를 완화하고 토폴로지 라우팅과 +같은 추가 기능을 위한 확장 가능한 플랫폼을 제공한다. + ## 엔드포인트슬라이스 리소스 {#endpointslice-resource} 쿠버네티스에서 EndpointSlice는 일련의 네트워크 엔드 포인트에 대한 @@ -163,21 +178,6 @@ text="kube-controller-manager" term_id="kube-controller-manager" >}} 플래그 교체되는 엔드포인트에 대해서 엔드포인트슬라이스를 자연스럽게 재포장한다. -## 사용동기 - -엔드포인트 API는 쿠버네티스에서 네트워크 엔드포인트를 추적하는 -간단하고 직접적인 방법을 제공한다. 불행하게도 쿠버네티스 클러스터와 -서비스가 점점 더 커짐에 따라, 이 API의 한계가 더욱 눈에 띄게 되었다. -특히나, 많은 수의 네트워크 엔드포인트로 확장하는 것에 -어려움이 있었다. 
- -이후로 서비스에 대한 모든 네트워크 엔드포인트가 단일 엔드포인트 -리소스에 저장되기 때문에 엔드포인트 리소스가 상당히 커질 수 있다. 이것은 쿠버네티스 -구성요소 (특히 마스터 컨트롤 플레인)의 성능에 영향을 미쳤고 -엔드포인트가 변경될 때 상당한 양의 네트워크 트래픽과 처리를 초래했다. -엔드포인트슬라이스는 이러한 문제를 완화하고 토폴로지 라우팅과 -같은 추가 기능을 위한 확장 가능한 플랫폼을 제공한다. - {{% /capture %}} {{% capture whatsnext %}} diff --git a/content/ko/docs/concepts/services-networking/ingress.md b/content/ko/docs/concepts/services-networking/ingress.md index 03d8700cd2fa3..ee11186d50456 100644 --- a/content/ko/docs/concepts/services-networking/ingress.md +++ b/content/ko/docs/concepts/services-networking/ingress.md @@ -71,6 +71,7 @@ spec: - http: paths: - path: /testpath + pathType: Prefix backend: serviceName: test servicePort: 80 @@ -115,6 +116,84 @@ spec: 만약 인그레스 오브젝트의 HTTP 요청과 일치하는 호스트 또는 경로가 없으면, 트래픽은 기본 백엔드로 라우팅 된다. +### 경로(Path) 유형 + +인그레스의 각 경로에는 해당하는 경로 유형이 있다. 지원되는 세 가지의 경로 +유형이 있다. + +* _`ImplementationSpecific`_ (기본): 이 경로 유형의 일치 여부는 IngressClass에 따라 + 달라진다. 이를 구현할 때 별도 `pathType` 으로 처리하거나, `Prefix` 또는 `Exact` + 경로 유형과 같이 동일하게 처리할 수 있다. + +* _`Exact`_: URL 경로의 대소문자를 엄격하게 일치시킨다. + +* _`Prefix`_: URL 경로의 접두사를 `/` 를 기준으로 분리한 값과 일치시킨다. + 일치는 대소문자를 구분하고, + 요소별로 경로 요소에 대해 수행한다. + 모든 _p_ 가 요청 경로의 요소별 접두사가 _p_ 인 경우 + 요청은 _p_ 경로에 일치한다. + {{< note >}} + 경로의 마지막 요소가 요청 경로에 있는 마지막 요소의 + 하위 문자열인 경우에는 일치하지 않는다(예시: + `/foo/bar` 와 `/foo/bar/baz` 와 일치하지만, `/foo/barbaz` 는 일치하지 않는다). + {{< /note >}} + +#### 다중 일치 +경우에 따라 인그레스의 여러 경로가 요청과 일치할 수 있다. +이 경우 가장 긴 일치하는 경로가 우선하게 된다. 두 개의 경로가 +여전히 동일하게 일치하는 경우 접두사(prefix) 경로 유형보다 +정확한(exact) 경로 유형을 가진 경로가 사용 된다. + +## 인그레스 클래스 + +인그레스는 서로 다른 컨트롤러에 의해 구현될 수 있으며, 종종 다른 구성으로 +구현될 수 있다. 각 인그레스에서는 클래스를 구현해야하는 컨트롤러 +이름을 포함하여 추가 구성이 포함된 IngressClass +리소스에 대한 참조 클래스를 지정해야 한다. + +```yaml +apiVersion: networking.k8s.io/v1beta1 +kind: IngressClass +metadata: + name: external-lb +spec: + controller: example.com/ingress-controller + parameters: + apiGroup: k8s.example.com/v1alpha + kind: IngressParameters + name: external-lb +``` + +IngressClass 리소스에는 선택적인 파라미터 필드가 있다. 이 클래스에 대한 +추가 구성을 참조하는데 사용할 수 있다. + +### 사용중단(Deprecated) 어노테이션 + +쿠버네티스 1.18에 IngressClass 리소스 및 `ingressClassName` 필드가 추가되기 +전에 인그레스 클래스는 인그레스에서 +`kubernetes.io/ingress.class` 어노테이션으로 지정되었다. 이 어노테이션은 +공식적으로 정의된 것은 아니지만, 인그레스 컨트롤러에서 널리 지원되었었다. + +인그레스의 최신 `ingressClassName` 필드는 해당 어노테이션을 +대체하지만, 직접적으로 해당하는 것은 아니다. 어노테이션은 일반적으로 +인그레스를 구현해야 하는 인그레스 컨트롤러의 이름을 참조하는 데 사용되었지만, +이 필드는 인그레스 컨트롤러의 이름을 포함하는 추가 인그레스 구성이 +포함된 인그레스 클래스 리소스에 대한 참조이다. + +### 기본 인그레스 클래스 + +특정 IngressClass를 클러스터의 기본 값으로 표시할 수 있다. IngressClass +리소스에서 `ingressclass.kubernetes.io/is-default-class` 를 `true` 로 +설정하면 `ingressClassName` 필드가 지정되지 않은 +새 인그레스에게 기본 IngressClass가 할당된다. + +{{< caution >}} +클러스터의 기본값으로 표시된 IngressClass가 두 개 이상 있는 경우 +어드미션 컨트롤러에서 `ingressClassName` 이 지정되지 않은 +새 인그레스 오브젝트를 생성할 수 없다. 클러스터에서 최대 1개의 IngressClass가 +기본값으로 표시하도록 해서 이 문제를 해결할 수 있다. +{{< /caution >}} + ## 인그레스 유형들 ### 단일 서비스 인그레스 diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md index 3072c277a0aeb..85b6742b5f335 100644 --- a/content/ko/docs/concepts/services-networking/service.md +++ b/content/ko/docs/concepts/services-networking/service.md @@ -202,6 +202,17 @@ API 리소스이다. 개념적으로 엔드포인트와 매우 유사하지만, 엔드포인트슬라이스는 [엔드포인트슬라이스](/ko/docs/concepts/services-networking/endpoint-slices/)에서 자세하게 설명된 추가적인 속성 및 기능을 제공한다. +### 애플리케이션 프로토콜 + +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + +AppProtocol 필드는 각 서비스 포트에 사용될 애플리케이션 프로토콜을 +지정하는 방법을 제공한다. + +알파 기능으로 이 필드는 기본적으로 활성화되어 있지 않다. 
이 필드를 사용하려면, +[기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)에서 +`ServiceAppProtocol` 을 활성화해야 한다. + ## 가상 IP와 서비스 프록시 쿠버네티스 클러스터의 모든 노드는 `kube-proxy`를 실행한다. `kube-proxy`는 @@ -522,6 +533,25 @@ NodePort를 사용하면 자유롭게 자체 로드 밸런싱 솔루션을 설 이 서비스는 `:spec.ports[*].nodePort`와 `.spec.clusterIP:spec.ports[*].port`로 표기된다. (kube-proxy에서 `--nodeport-addresses` 플래그가 설정되면, 는 NodeIP를 필터링한다.) +예를 들면 +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + type: NodePort + selector: + app: MyApp + ports: + # 기본적으로 그리고 편의상 `targetPort` 는 `port` 필드와 동일한 값으로 설정된다. + - port: 80 + targetPort: 80 + # 선택적 필드 + # 기본적으로 그리고 편의상 쿠버네티스 컨트롤 플레인은 포트 범위에서 할당한다(기본값: 30000-32767) + nodePort: 30007 +``` + ### 로드밸런서 유형 {#loadbalancer} 외부 로드 밸런서를 지원하는 클라우드 공급자 상에서, `type` diff --git a/content/ko/docs/concepts/storage/dynamic-provisioning.md b/content/ko/docs/concepts/storage/dynamic-provisioning.md new file mode 100644 index 0000000000000..11564490ec44a --- /dev/null +++ b/content/ko/docs/concepts/storage/dynamic-provisioning.md @@ -0,0 +1,131 @@ +--- +title: 동적 볼륨 프로비저닝 +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +동적 볼륨 프로비저닝을 통해 온-디맨드 방식으로 스토리지 볼륨을 생성할 수 있다. +동적 프로비저닝이 없으면 클러스터 관리자는 클라우드 또는 스토리지 +공급자에게 수동으로 요청해서 새 스토리지 볼륨을 생성한 다음, 쿠버네티스에 +표시하기 위해 [`PersistentVolume` 오브젝트](/ko/docs/concepts/storage/persistent-volumes/)를 +생성해야 한다. 동적 프로비저닝 기능을 사용하면 클러스터 관리자가 +스토리지를 사전 프로비저닝 할 필요가 없다. 대신 사용자가 +스토리지를 요청하면 자동으로 프로비저닝 한다. + +{{% /capture %}} + + +{{% capture body %}} + +## 배경 + +동적 볼륨 프로비저닝의 구현은 `storage.k8s.io` API 그룹의 `StorageClass` +API 오브젝트를 기반으로 한다. 클러스터 관리자는 볼륨을 프로비전하는 +*볼륨 플러그인* (프로비저너라고도 알려짐)과 프로비저닝시에 프로비저너에게 +전달할 파라미터 집합을 지정하는 `StorageClass` +오브젝트를 필요한 만큼 정의할 수 있다. +클러스터 관리자는 클러스터 내에서 사용자 정의 파라미터 집합을 +사용해서 여러 가지 유형의 스토리지 (같거나 다른 스토리지 시스템들)를 +정의하고 노출시킬 수 있다. 또한 이 디자인을 통해 최종 사용자는 +스토리지 프로비전 방식의 복잡성과 뉘앙스에 대해 걱정할 필요가 없다. 하지만, +여전히 여러 스토리지 옵션들을 선택할 수 있다. + +스토리지 클래스에 대한 자세한 정보는 +[여기](/docs/concepts/storage/storage-classes/)에서 찾을 수 있다. + +## 동적 프로비저닝 활성화하기 + +동적 프로비저닝을 활성화하려면 클러스터 관리자가 사용자를 위해 하나 이상의 StorageClass +오브젝트를 사전 생성해야 한다. +StorageClass 오브젝트는 동적 프로비저닝이 호출될 때 사용할 프로비저너와 +해당 프로비저너에게 전달할 파라미터를 정의한다. +StorageClass 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)이어야 한다. + +다음 매니페스트는 표준 디스크와 같은 퍼시스턴트 디스크를 프로비전하는 +스토리지 클래스 "slow"를 만든다. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/gce-pd +parameters: + type: pd-standard +``` + +다음 매니페스트는 SSD와 같은 퍼시스턴트 디스크를 프로비전하는 +스토리지 클래스 "fast"를 만든다. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: fast +provisioner: kubernetes.io/gce-pd +parameters: + type: pd-ssd +``` + +## 동적 프로비저닝 사용하기 + +사용자는 `PersistentVolumeClaim` 에 스토리지 클래스를 포함시켜 동적으로 프로비전된 +스토리지를 요청한다. 쿠버네티스 v1.6 이전에는 `volume.beta.kubernetes.io/storage-class` +어노테이션을 통해 수행되었다. 그러나 이 어노테이션은 +v1.6부터 더 이상 사용하지 않는다. 사용자는 이제 `PersistentVolumeClaim` 오브젝트의 +`storageClassName` 필드를 사용할 수 있기에 대신하여 사용해야 한다. 이 필드의 값은 +관리자가 구성한 `StorageClass` 의 이름과 +일치해야 한다. ([아래](#동적-프로비저닝-활성화하기)를 참고) + +예를 들어 “fast” 스토리지 클래스를 선택하려면 다음과 +같은 `PersistentVolumeClaim` 을 생성한다. + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: claim1 +spec: + accessModes: + - ReadWriteOnce + storageClassName: fast + resources: + requests: + storage: 30Gi +``` + +이 클레임의 결과로 SSD와 같은 퍼시스턴트 디스크가 자동으로 +프로비전 된다. 클레임이 삭제되면 볼륨이 삭제된다. + +## 기본 동작 + +스토리지 클래스가 지정되지 않은 경우 모든 클레임이 동적으로 +프로비전이 되도록 클러스터에서 동적 프로비저닝을 활성화 할 수 있다. 
클러스터 관리자는 +이 방법으로 활성화 할 수 있다. + +- 하나의 `StorageClass` 오브젝트를 *default* 로 표시한다. +- API 서버에서 [`DefaultStorageClass` 어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)를 + 사용하도록 설정한다. + +관리자는 `storageclass.kubernetes.io/is-default-class` 어노테이션을 +추가해서 특정 `StorageClass` 를 기본으로 표시할 수 있다. +기본 `StorageClass` 가 클러스터에 존재하고 사용자가 +`storageClassName` 를 지정하지 않은 `PersistentVolumeClaim` 을 +작성하면, `DefaultStorageClass` 어드미션 컨트롤러가 디폴트 +스토리지 클래스를 가리키는 `storageClassName` 필드를 자동으로 추가한다. + +클러스터에는 최대 하나의 *default* 스토리지 클래스가 있을 수 있다. 그렇지 않은 경우 +`storageClassName` 을 명시적으로 지정하지 않은 `PersistentVolumeClaim` 을 +생성할 수 없다. + +## 토폴로지 인식 + +[다중 영역](/ko/docs/setup/best-practices/multiple-zones/) 클러스터에서 파드는 한 지역 내 +여러 영역에 걸쳐 분산될 수 있다. 파드가 예약된 영역에서 단일 영역 스토리지 백엔드를 +프로비전 해야 한다. [볼륨 바인딩 모드](/docs/concepts/storage/storage-classes/#volume-binding-mode)를 +설정해서 수행할 수 있다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/storage/persistent-volumes.md b/content/ko/docs/concepts/storage/persistent-volumes.md new file mode 100644 index 0000000000000..a94eb8490f57e --- /dev/null +++ b/content/ko/docs/concepts/storage/persistent-volumes.md @@ -0,0 +1,757 @@ +--- +title: 퍼시스턴트 볼륨 +feature: + title: 스토리지 오케스트레이션 + description: > + 로컬 스토리지, GCPAWS와 같은 퍼블릭 클라우드 공급자 또는 NFS, iSCSI, Gluster, Ceph, Cinder나 Flocker와 같은 네트워크 스토리지 시스템에서 원하는 스토리지 시스템을 자동으로 마운트한다. + +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +이 페이지는 쿠버네티스의 _퍼시스턴트 볼륨_ 의 현재 상태를 설명한다. [볼륨](/ko/docs/concepts/storage/volumes/)에 대해 익숙해지는 것을 추천한다. + +{{% /capture %}} + + +{{% capture body %}} + +## 소개 + +스토리지 관리는 컴퓨트 인스턴스 관리와는 별개의 문제다. 퍼시스턴트볼륨 서브시스템은 사용자 및 관리자에게 스토리지 사용 방법에서부터 스토리지가 제공되는 방법에 대한 세부 사항을 추상화하는 API를 제공한다. 이를 위해 퍼시스턴트볼륨 및 퍼시스턴트볼륨클레임이라는 두 가지 새로운 API 리소스를 소개한다. + +_퍼시스턴트볼륨_ (PV)은 관리자가 프로비저닝하거나 [스토리지 클래스](/docs/concepts/storage/storage-classes/)를 사용하여 동적으로 프로비저닝한 클러스터의 스토리지이다. 노드가 클러스터 리소스인 것처럼 PV는 클러스터 리소스이다. PV는 Volumes와 같은 볼륨 플러그인이지만, PV를 사용하는 개별 파드와는 별개의 라이프사이클을 가진다. 이 API 오브젝트는 NFS, iSCSI 또는 클라우드 공급자별 스토리지 시스템 등 스토리지 구현에 대한 세부 정보를 담아낸다. + +_퍼시스턴트볼륨클레임_ (PVC)은 사용자의 스토리지에 대한 요청이다. 파드와 비슷하다. 파드는 노드 리소스를 사용하고 PVC는 PV 리소스를 사용한다. 파드는 특정 수준의 리소스(CPU 및 메모리)를 요청할 수 있다. 클레임은 특정 크기 및 접근 모드를 요청할 수 있다(예: 한 번 읽기/쓰기 또는 여러 번 읽기 전용으로 마운트 할 수 있음). + +퍼시스턴트볼륨클레임을 사용하면 사용자가 추상화된 스토리지 리소스를 사용할 수 있지만, 다른 문제들 때문에 성능과 같은 다양한 속성을 가진 퍼시스턴트볼륨이 필요한 경우가 일반적이다. 클러스터 관리자는 사용자에게 해당 볼륨의 구현 방법에 대한 세부 정보를 제공하지 않고 단순히 크기와 접근 모드와는 다른 방식으로 다양한 퍼시스턴트볼륨을 제공할 수 있어야 한다. 이러한 요구에는 _스토리지클래스_ 리소스가 있다. + +[실습 예제와 함께 상세한 내용](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)을 참고하길 바란다. + +## 볼륨과 클레임 라이프사이클 + +PV는 클러스터 리소스이다. PVC는 해당 리소스에 대한 요청이며 리소스에 대한 클레임 검사 역할을 한다. PV와 PVC 간의 상호 작용은 다음 라이프사이클을 따른다. + +### 프로비저닝 + +PV를 프로비저닝 할 수 있는 두 가지 방법이 있다: 정적(static) 프로비저닝과 동적(dynamic) 프로비저닝 + +#### 정적 프로비저닝 + +클러스터 관리자는 여러 PV를 만든다. 클러스터 사용자가 사용할 수 있는 실제 스토리지의 세부 사항을 제공한다. 이 PV들은 쿠버네티스 API에 존재하며 사용할 수 있다. + +#### 동적 프로비저닝 + +관리자가 생성한 정적 PV가 사용자의 퍼시스턴트볼륨클레임과 일치하지 않으면 +클러스터는 PVC를 위해 특별히 볼륨을 동적으로 프로비저닝 하려고 시도할 수 있다. +이 프로비저닝은 스토리지클래스를 기반으로 한다. PVC는 +[스토리지 클래스](/docs/concepts/storage/storage-classes/)를 +요청해야 하며 관리자는 동적 프로비저닝이 발생하도록 해당 클래스를 생성하고 구성해야 한다. +`""` 클래스를 요청하는 클레임은 동적 프로비저닝을 효과적으로 +비활성화한다. + +스토리지 클래스를 기반으로 동적 스토리지 프로비저닝을 사용하려면 클러스터 관리자가 API 서버에서 +`DefaultStorageClass` [어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)를 사용하도록 설정해야 한다. +예를 들어 API 서버 컴포넌트의 `--enable-admission-plugins` 플래그에 대한 쉼표로 구분되어 +정렬된 값들의 목록 중에 `DefaultStorageClass`가 포함되어 있는지 확인하여 설정할 수 있다. 
+API 서버 커맨드라인 플래그에 대한 자세한 정보는 +[kube-apiserver](/docs/admin/kube-apiserver/) 문서를 확인하면 된다. + +### 바인딩 + +사용자는 원하는 특정 용량의 스토리지와 특정 접근 모드로 퍼시스턴트볼륨클레임을 생성하거나 동적 프로비저닝의 경우 이미 생성한 상태다. 마스터의 컨트롤 루프는 새로운 PVC를 감시하고 일치하는 PV(가능한 경우)를 찾아 서로 바인딩한다. PV가 새 PVC에 대해 동적으로 프로비저닝된 경우 루프는 항상 해당 PV를 PVC에 바인딩한다. 그렇지 않으면 사용자는 항상 최소한 그들이 요청한 것을 얻지만 볼륨은 요청된 것을 초과할 수 있다. 일단 바인딩되면 퍼시스턴트볼륨클레임은 어떻게 바인딩되었는지 상관없이 배타적으로 바인딩된다. PVC 대 PV 바인딩은 일대일 매핑으로, 퍼시스턴트볼륨과 퍼시스턴트볼륨클레임 사이의 양방향 바인딩인 ClaimRef를 사용한다. + +일치하는 볼륨이 없는 경우 클레임은 무한정 바인딩되지 않은 상태로 남아 있다. 일치하는 볼륨이 제공되면 클레임이 바인딩된다. 예를 들어 많은 수의 50Gi PV로 프로비저닝된 클러스터는 100Gi를 요청하는 PVC와 일치하지 않는다. 100Gi PV가 클러스터에 추가되면 PVC를 바인딩할 수 있다. + +### 사용 중 + +파드는 클레임을 볼륨으로 사용한다. 클러스터는 클레임을 검사하여 바인딩된 볼륨을 찾고 해당 볼륨을 파드에 마운트한다. 여러 접근 모드를 지원하는 볼륨의 경우 사용자는 자신의 클레임을 파드에서 볼륨으로 사용할 때 원하는 접근 모드를 지정한다. + +일단 사용자에게 클레임이 있고 그 클레임이 바인딩되면, 바인딩된 PV는 사용자가 필요로 하는 한 사용자에게 속한다. 사용자는 파드의 `volumes` 블록에 `persistentVolumeClaim`을 포함하여 파드를 스케줄링하고 클레임한 PV에 접근한다. 이에 대한 자세한 내용은 [볼륨으로 클레임하기](#볼륨으로-클레임하기)를 참고하길 바란다. + +### 사용 중인 스토리지 오브젝트 ​​보호 +사용 중인 스토리지 오브젝트 ​​보호 기능의 목적은 PVC에 바인딩된 파드와 퍼시스턴트볼륨(PV)이 사용 중인 퍼시스턴트볼륨클레임(PVC)을 시스템에서 삭제되지 않도록 하는 것이다. 삭제되면 이로 인해 데이터의 손실이 발생할 수 있기 때문이다. + +{{< note >}} +PVC를 사용하는 파드 오브젝트가 존재하면 파드가 PVC를 사용하고 있는 상태이다. +{{< /note >}} + +사용자가 파드에서 활발하게 사용 중인 PVC를 삭제하면 PVC는 즉시 삭제되지 않는다. PVC가 더 이상 파드에서 적극적으로 사용되지 않을 때까지 PVC 삭제가 연기된다. 또한 관리자가 PVC에 바인딩된 PV를 삭제하면 PV는 즉시 삭제되지 않는다. PV가 더 이상 PVC에 바인딩되지 않을 때까지 PV 삭제가 연기된다. + +PVC의 상태가 `Terminating`이고 `Finalizers` 목록에 `kubernetes.io/pvc-protection`이 포함되어 있으면 PVC가 보호된 것으로 볼 수 있다. + +```shell +kubectl describe pvc hostpath +Name: hostpath +Namespace: default +StorageClass: example-hostpath +Status: Terminating +Volume: +Labels: +Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath + volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath +Finalizers: [kubernetes.io/pvc-protection] +... +``` + +마찬가지로 PV 상태가 `Terminating`이고 `Finalizers` 목록에 `kubernetes.io/pv-protection`이 포함되어 있으면 PV가 보호된 것으로 볼 수 있다. + +```shell +kubectl describe pv task-pv-volume +Name: task-pv-volume +Labels: type=local +Annotations: +Finalizers: [kubernetes.io/pv-protection] +StorageClass: standard +Status: Terminating +Claim: +Reclaim Policy: Delete +Access Modes: RWO +Capacity: 1Gi +Message: +Source: + Type: HostPath (bare host directory volume) + Path: /tmp/data + HostPathType: +Events: +``` + +### 반환(Reclaiming) + +사용자가 볼륨을 다 사용하고나면 리소스를 반환할 수 있는 API를 사용하여 PVC 오브젝트를 삭제할 수 있다. 퍼시스턴트볼륨의 반환 정책은 볼륨에서 클레임을 해제한 후 볼륨에 수행할 작업을 클러스터에 알려준다. 현재 볼륨에 대한 반환 정책은 Retain, Recycle, 그리고 Delete가 있다. + +#### Retain(보존) + +`Retain` 반환 정책은 리소스를 수동으로 반환할 수 있게 한다. 퍼시스턴트볼륨클레임이 삭제되면 퍼시스턴트볼륨은 여전히 존재하며 볼륨은 "릴리스 된" 것으로 간주된다. 그러나 이전 요청자의 데이터가 여전히 볼륨에 남아 있기 때문에 다른 요청에 대해서는 아직 사용할 수 없다. 관리자는 다음 단계에 따라 볼륨을 수동으로 반환할 수 있다. + +1. 퍼시스턴트볼륨을 삭제한다. PV가 삭제된 후에도 외부 인프라(예: AWS EBS, GCE PD, Azure Disk 또는 Cinder 볼륨)의 관련 스토리지 자산이 존재한다. +1. 관련 스토리지 자산의 데이터를 수동으로 삭제한다. +1. 연결된 스토리지 자산을 수동으로 삭제하거나 동일한 스토리지 자산을 재사용하려는 경우 스토리지 자산 정의로 새 퍼시스턴트볼륨을 생성한다. + +#### Delete(삭제) + +`Delete` 반환 정책을 지원하는 볼륨 플러그인의 경우, 삭제는 쿠버네티스에서 퍼시스턴트볼륨 오브젝트와 외부 인프라(예: AWS EBS, GCE PD, Azure Disk 또는 Cinder 볼륨)의 관련 스토리지 자산을 모두 삭제한다. 동적으로 프로비저닝된 볼륨은 [스토리지클래스의 반환 정책](#반환-정책)을 상속하며 기본값은 `Delete`이다. 관리자는 사용자의 기대에 따라 스토리지클래스를 구성해야 한다. 그렇지 않으면 PV를 생성한 후 PV를 수정하거나 패치해야 한다. [퍼시스턴트볼륨의 반환 정책 변경](/docs/tasks/administer-cluster/change-pv-reclaim-policy/)을 참고하길 바란다. + +#### Recycle(재활용) + +{{< warning >}} +`Recycle` 반환 정책은 더 이상 사용하지 않는다. 대신 권장되는 방식은 동적 프로비저닝을 사용하는 것이다. 
+{{< /warning >}} + +기본 볼륨 플러그인에서 지원하는 경우 `Recycle` 반환 정책은 볼륨에서 기본 스크럽(`rm -rf /thevolume/*`)을 수행하고 새 클레임에 다시 사용할 수 있도록 한다. + +그러나 관리자는 [여기](/docs/admin/kube-controller-manager/)에 설명된대로 쿠버네티스 컨트롤러 관리자 커맨드라인 인자(command line arguments)를 사용하여 사용자 정의 재활용 파드 템플릿을 구성할 수 있다. 사용자 정의 재활용 파드 템플릿에는 아래 예와 같이 `volumes` 명세가 포함되어야 한다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pv-recycler + namespace: default +spec: + restartPolicy: Never + volumes: + - name: vol + hostPath: + path: /any/path/it/will/be/replaced + containers: + - name: pv-recycler + image: "k8s.gcr.io/busybox" + command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"] + volumeMounts: + - name: vol + mountPath: /scrub +``` + +그러나 `volumes` 부분의 사용자 정의 재활용 파드 템플릿에 지정된 특정 경로는 재활용되는 볼륨의 특정 경로로 바뀐다. + +### 퍼시스턴트 볼륨 클레임 확장 + +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + +이제 퍼시스턴트볼륨클레임(PVC) 확장 지원이 기본적으로 활성화되어 있다. 다음 유형의 +볼륨을 확장할 수 있다. + +* gcePersistentDisk +* awsElasticBlockStore +* Cinder +* glusterfs +* rbd +* Azure File +* Azure Disk +* Portworx +* FlexVolumes +* CSI + +스토리지 클래스의 `allowVolumeExpansion` 필드가 true로 설정된 경우에만 PVC를 확장할 수 있다. + +``` yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: gluster-vol-default +provisioner: kubernetes.io/glusterfs +parameters: + resturl: "http://192.168.10.100:8080" + restuser: "" + secretNamespace: "" + secretName: "" +allowVolumeExpansion: true +``` + +PVC에 대해 더 큰 볼륨을 요청하려면 PVC 오브젝트를 수정하여 더 큰 용량을 +지정한다. 이는 기본 퍼시스턴트볼륨을 지원하는 볼륨의 확장을 트리거한다. 클레임을 만족시키기 위해 +새로운 퍼시스턴트볼륨이 생성되지 않고 기존 볼륨의 크기가 조정된다. + +#### CSI 볼륨 확장 + +{{< feature-state for_k8s_version="v1.16" state="beta" >}} + +CSI 볼륨 확장 지원은 기본적으로 활성화되어 있지만 볼륨 확장을 지원하려면 특정 CSI 드라이버도 필요하다. 자세한 내용은 특정 CSI 드라이버 문서를 참고한다. + + +#### 파일시스템을 포함하는 볼륨 크기 조정 + +파일시스템이 XFS, Ext3 또는 Ext4 인 경우에만 파일시스템을 포함하는 볼륨의 크기를 조정할 수 있다. + +볼륨에 파일시스템이 포함된 경우 새 파드가 `ReadWrite` 모드에서 퍼시스턴트볼륨클레임을 사용하는 +경우에만 파일시스템의 크기가 조정된다. 파일시스템 확장은 파드가 시작되거나 +파드가 실행 중이고 기본 파일시스템이 온라인 확장을 지원할 때 수행된다. + +FlexVolumes는 `RequiresFSResize` 기능으로 드라이버가 `true`로 설정된 경우 크기 조정을 허용한다. +FlexVolume은 파드 재시작 시 크기를 조정할 수 있다. + +#### 사용 중인 퍼시스턴트볼륨클레임 크기 조정 + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +{{< note >}} +사용 중인 PVC 확장은 쿠버네티스 1.15 이후 버전에서는 베타로, 1.11 이후 버전에서는 알파로 제공된다. `ExpandInUsePersistentVolumes` 기능을 사용하도록 설정해야 한다. 베타 기능의 경우 여러 클러스터에서 자동으로 적용된다. 자세한 내용은 [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/) 문서를 참고한다. +{{< /note >}} + +이 경우 기존 PVC를 사용하는 파드 또는 디플로이먼트를 삭제하고 다시 만들 필요가 없다. +파일시스템이 확장되자마자 사용 중인 PVC가 파드에서 자동으로 사용 가능하다. +이 기능은 파드나 디플로이먼트에서 사용하지 않는 PVC에는 영향을 미치지 않는다. 확장을 완료하기 전에 +PVC를 사용하는 파드를 만들어야 한다. + + +다른 볼륨 유형과 비슷하게 FlexVolume 볼륨도 파드에서 사용 중인 경우 확장할 수 있다. + +{{< note >}} +FlexVolume의 크기 조정은 기본 드라이버가 크기 조정을 지원하는 경우에만 가능하다. +{{< /note >}} + +{{< note >}} +EBS 볼륨 확장은 시간이 많이 걸리는 작업이다. 또한 6시간마다 한 번의 수정을 할 수 있는 볼륨별 쿼터(quota)가 있다. +{{< /note >}} + + +## 퍼시스턴트 볼륨의 유형 + +퍼시스턴트볼륨 유형은 플러그인으로 구현된다. 쿠버네티스는 현재 다음의 플러그인을 지원한다. + +* GCEPersistentDisk +* AWSElasticBlockStore +* AzureFile +* AzureDisk +* CSI +* FC (파이버 채널) +* FlexVolume +* Flocker +* NFS +* iSCSI +* RBD (Ceph Block Device) +* CephFS +* Cinder (OpenStack 블록 스토리지) +* Glusterfs +* VsphereVolume +* Quobyte Volumes +* HostPath (단일 노드 테스트 전용 – 로컬 스토리지는 어떤 방식으로도 지원되지 않으며 다중-노드 클러스터에서 작동하지 않음) +* Portworx Volumes +* ScaleIO Volumes +* StorageOS + +## 퍼시스턴트 볼륨 + +각 PV에는 스펙과 상태(볼륨의 명세와 상태)가 포함된다. 
+퍼시스턴트볼륨 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pv0003 +spec: + capacity: + storage: 5Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Recycle + storageClassName: slow + mountOptions: + - hard + - nfsvers=4.1 + nfs: + path: /tmp + server: 172.17.0.2 +``` + +{{< note >}} +클러스터 내에서 퍼시스턴트볼륨을 사용하려면 볼륨 유형과 관련된 헬퍼(Helper) 프로그램이 필요할 수 있다. 이 예에서 퍼시스턴트볼륨은 NFS 유형이며 NFS 파일시스템 마운트를 지원하려면 헬퍼 프로그램인 /sbin/mount.nfs가 필요하다. +{{< /note >}} + +### 용량 + +일반적으로 PV는 특정 저장 용량을 가진다. 이것은 PV의 `capacity` 속성을 사용하여 설정된다. `capacity`가 사용하는 단위를 이해하려면 쿠버네티스 [리소스 모델](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md)을 참고한다. + +현재 스토리지 용량 크기는 설정하거나 요청할 수 있는 유일한 리소스이다. 향후 속성에 IOPS, 처리량 등이 포함될 수 있다. + +### 볼륨 모드 + +{{< feature-state for_k8s_version="v1.18" state="stable" >}} + +쿠버네티스는 퍼시스턴트볼륨의 두 가지 `volumeModes`인 `Filesystem`과 `Block`을 지원한다. + +`volumeMode`는 선택적 API 파라미터이다. +`Filesystem`은 `volumeMode` 파라미터가 생략될 때 사용되는 기본 모드이다. + +`volumeMode: Filesystem`이 있는 볼륨은 파드의 디렉터리에 *마운트* 된다. 볼륨이 장치에 +의해 지원되고 그 장치가 비어 있으면 쿠버네티스는 장치를 +처음 마운트하기 전에 장치에 파일시스템을 만든다. + +볼륨을 원시 블록 장치로 사용하려면 `volumeMode`의 값을 `Block`으로 설정할 수 있다. +이러한 볼륨은 파일시스템이 없는 블록 장치로 파드에 제공된다. +이 모드는 파드와 볼륨 사이에 파일시스템 계층 없이도 볼륨에 액세스하는 +가장 빠른 방법을 파드에 제공하는 데 유용하다. 반면에 파드에서 실행되는 애플리케이션은 +원시 블록 장치를 처리하는 방법을 알아야 한다. +파드에서 `volumeMode: Block`으로 볼륨을 사용하는 방법에 대한 예는 +[원시 블록 볼륨 지원](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)를 참조하십시오. + +### 접근 모드 + +리소스 제공자가 지원하는 방식으로 호스트에 퍼시스턴트볼륨을 마운트할 수 있다. 아래 표에서 볼 수 있듯이 제공자들은 서로 다른 기능을 가지며 각 PV의 접근 모드는 해당 볼륨에서 지원하는 특정 모드로 설정된다. 예를 들어 NFS는 다중 읽기/쓰기 클라이언트를 지원할 수 있지만 특정 NFS PV는 서버에서 읽기 전용으로 export할 수 있다. 각 PV는 특정 PV의 기능을 설명하는 자체 접근 모드 셋을 갖는다. + +접근 모드는 다음과 같다. + +* ReadWriteOnce -- 하나의 노드에서 볼륨을 읽기-쓰기로 마운트할 수 있다 +* ReadOnlyMany -- 여러 노드에서 볼륨을 읽기 전용으로 마운트할 수 있다 +* ReadWriteMany -- 여러 노드에서 볼륨을 읽기-쓰기로 마운트할 수 있다 + +CLI에서 접근 모드는 다음과 같이 약어로 표시된다. + +* RWO - ReadWriteOnce +* ROX - ReadOnlyMany +* RWX - ReadWriteMany + +> __중요!__ 볼륨이 여러 접근 모드를 지원하더라도 한 번에 하나의 접근 모드를 사용하여 마운트할 수 있다. 예를 들어 GCEPersistentDisk는 하나의 노드가 ReadWriteOnce로 마운트하거나 여러 노드가 ReadOnlyMany로 마운트할 수 있지만 동시에는 불가능하다. + + +| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany| +| :--- | :---: | :---: | :---: | +| AWSElasticBlockStore | ✓ | - | - | +| AzureFile | ✓ | ✓ | ✓ | +| AzureDisk | ✓ | - | - | +| CephFS | ✓ | ✓ | ✓ | +| Cinder | ✓ | - | - | +| CSI | 드라이버에 따라 다름 | 드라이버에 따라 다름 | 드라이버에 따라 다름 | +| FC | ✓ | ✓ | - | +| FlexVolume | ✓ | ✓ | 드라이버에 따라 다름 | +| Flocker | ✓ | - | - | +| GCEPersistentDisk | ✓ | ✓ | - | +| Glusterfs | ✓ | ✓ | ✓ | +| HostPath | ✓ | - | - | +| iSCSI | ✓ | ✓ | - | +| Quobyte | ✓ | ✓ | ✓ | +| NFS | ✓ | ✓ | ✓ | +| RBD | ✓ | ✓ | - | +| VsphereVolume | ✓ | - | - (파드가 병치될(collocated) 때 작동) | +| PortworxVolume | ✓ | - | ✓ | +| ScaleIO | ✓ | ✓ | - | +| StorageOS | ✓ | - | - | + +### 클래스 + +PV는 `storageClassName` 속성을 +[스토리지클래스](/docs/concepts/storage/storage-classes/)의 +이름으로 설정하여 지정하는 클래스를 가질 수 있다. +특정 클래스의 PV는 해당 클래스를 요청하는 PVC에만 바인딩될 수 있다. +`storageClassName`이 없는 PV에는 클래스가 없으며 특정 클래스를 요청하지 않는 PVC에만 +바인딩할 수 있다. + +이전에는 `volume.beta.kubernetes.io/storage-class` 어노테이션이 +`storageClassName` 속성 대신 사용되었다. 이 어노테이션은 아직까지는 사용할 수 있지만, +향후 쿠버네티스 릴리스에서 완전히 사용 중단(deprecated)이 될 예정이다. + +### 반환 정책 + +현재 반환 정책은 다음과 같다. 
+ +* Retain(보존) -- 수동 반환 +* Recycle(재활용) -- 기본 스크럽 (`rm -rf /thevolume/*`) +* Delete(삭제) -- AWS EBS, GCE PD, Azure Disk 또는 OpenStack Cinder 볼륨과 같은 관련 스토리지 자산이 삭제됨 + +현재 NFS 및 HostPath만 재활용을 지원한다. AWS EBS, GCE PD, Azure Disk 및 Cinder 볼륨은 삭제를 지원한다. + +### 마운트 옵션 + +쿠버네티스 관리자는 퍼시스턴트 볼륨이 노드에 마운트될 때 추가 마운트 옵션을 지정할 수 있다. + +{{< note >}} +모든 퍼시스턴트 볼륨 유형이 마운트 옵션을 지원하는 것은 아니다. +{{< /note >}} + +다음 볼륨 유형은 마운트 옵션을 지원한다. + +* AWSElasticBlockStore +* AzureDisk +* AzureFile +* CephFS +* Cinder (OpenStack 블록 스토리지) +* GCEPersistentDisk +* Glusterfs +* NFS +* Quobyte Volumes +* RBD (Ceph Block Device) +* StorageOS +* VsphereVolume +* iSCSI + +마운트 옵션의 유효성이 검사되지 않으므로 마운트 옵션이 유효하지 않으면 마운트가 실패한다. + +이전에는 `mountOptions` 속성 대신 `volume.beta.kubernetes.io/mount-options` 어노테이션이 +사용되었다. 이 어노테이션은 아직까지는 사용할 수 있지만, +향후 쿠버네티스 릴리스에서 완전히 사용 중단(deprecated)이 될 예정이다. + +### 노드 어피니티(affinity) + +{{< note >}} +대부분의 볼륨 유형의 경우 이 필드를 설정할 필요가 없다. [AWS EBS](/ko/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/ko/docs/concepts/storage/volumes/#gcepersistentdisk) 및 [Azure Disk](/ko/docs/concepts/storage/volumes/#azuredisk) 볼륨 블록 유형에 자동으로 채워진다. [로컬](/ko/docs/concepts/storage/volumes/#local) 볼륨에 대해서는 이를 명시적으로 설정해야 한다. +{{< /note >}} + +PV는 [노드 어피니티](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core)를 지정하여 이 볼륨에 접근할 수 있는 노드를 제한하는 제약 조건을 정의할 수 있다. PV를 사용하는 파드는 노드 어피니티에 의해 선택된 노드로만 스케줄링된다. + +### 단계(Phase) + +볼륨은 다음 단계 중 하나이다. + +* Available(사용 가능) -– 아직 클레임에 바인딩되지 않은 사용할 수 있는 리소스 +* Bound(바인딩) –- 볼륨이 클레임에 바인딩됨 +* Released(릴리스) –- 클레임이 삭제되었지만 클러스터에서 아직 리소스를 반환하지 않음 +* Failed(실패) –- 볼륨이 자동 반환에 실패함 + +CLI는 PV에 바인딩된 PVC의 이름을 표시한다. + +## 퍼시스턴트볼륨클레임 + +각 PVC에는 스펙과 상태(클레임의 명세와 상태)가 포함된다. +퍼시스턴트볼륨클레임 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 +한다. + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: myclaim +spec: + accessModes: + - ReadWriteOnce + volumeMode: Filesystem + resources: + requests: + storage: 8Gi + storageClassName: slow + selector: + matchLabels: + release: "stable" + matchExpressions: + - {key: environment, operator: In, values: [dev]} +``` + +### 접근 모드 + +클레임은 특정 접근 모드로 저장소를 요청할 때 볼륨과 동일한 규칙을 사용한다. + +### 볼륨 모드 + +클레임은 볼륨과 동일한 규칙을 사용하여 파일시스템 또는 블록 장치로 볼륨을 사용함을 나타낸다. + +### 리소스 + +파드처럼 클레임은 특정 수량의 리소스를 요청할 수 있다. 이 경우는 스토리지에 대한 요청이다. 동일한 [리소스 모델](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md)이 볼륨과 클레임 모두에 적용된다. + +### 셀렉터 + +클레임은 볼륨 셋을 추가로 필터링하기 위해 [레이블 셀렉터](/ko/docs/concepts/overview/working-with-objects/labels/#레이블-셀렉터)를 지정할 수 있다. 레이블이 셀렉터와 일치하는 볼륨만 클레임에 바인딩할 수 있다. 셀렉터는 두 개의 필드로 구성될 수 있다. + +* `matchLabels` - 볼륨에 이 값의 레이블이 있어야함 +* `matchExpressions` - 키, 값의 목록, 그리고 키와 값에 관련된 연산자를 지정하여 만든 요구 사항 목록. 유효한 연산자에는 In, NotIn, Exists 및 DoesNotExist가 있다. + +`matchLabels` 및 `matchExpressions`의 모든 요구 사항이 AND 조건이다. 일치하려면 모두 충족해야 한다. + +### 클래스 + +클레임은 `storageClassName` 속성을 사용하여 +[스토리지클래스](/docs/concepts/storage/storage-classes/)의 이름을 지정하여 +특정 클래스를 요청할 수 있다. +요청된 클래스의 PV(PVC와 동일한 `storageClassName`을 갖는 PV)만 PVC에 +바인딩될 수 있다. + +PVC는 반드시 클래스를 요청할 필요는 없다. `storageClassName`이 `""`로 설정된 +PVC는 항상 클래스가 없는 PV를 요청하는 것으로 해석되므로 +클래스가 없는 PV(어노테이션이 없거나 `""`와 같은 하나의 셋)에만 바인딩될 수 +있다. `storageClassName`이 없는 PVC는 +[`DefaultStorageClass` 어드미션 플러그인](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)이 +켜져 있는지 여부에 따라 동일하지 않으며 +클러스터에 따라 다르게 처리된다. + +* 어드미션 플러그인이 켜져 있으면 관리자가 기본 스토리지클래스를 지정할 수 있다. + `storageClassName`이 없는 모든 PVC는 해당 기본값의 PV에만 바인딩할 수 있다. 
기본 + 스토리지클래스 지정은 스토리지클래스 오브젝트에서 어노테이션 + `storageclass.kubernetes.io/is-default-class`를 `true`로 + 설정하여 수행된다. 관리자가 기본값을 지정하지 않으면 어드미션 플러그인이 꺼져 있는 것처럼 + 클러스터가 PVC 생성에 응답한다. 둘 이상의 기본값이 지정된 경우 어드미션 + 플러그인은 모든 PVC 생성을 + 금지한다. +* 어드미션 플러그인이 꺼져 있으면 기본 스토리지클래스에 대한 기본값 자체가 없다. + `storageClassName`이 없는 모든 PVC는 클래스가 없는 PV에만 바인딩할 수 있다. 이 경우 + `storageClassName`이 없는 PVC는 `storageClassName`이 `""`로 설정된 PVC와 + 같은 방식으로 처리된다. + +설치 방법에 따라 설치 중에 애드온 관리자가 기본 스토리지클래스를 쿠버네티스 클러스터에 +배포할 수 있다. + +PVC가 스토리지클래스를 요청하는 것 외에도 `selector`를 지정하면 요구 사항들이 +AND 조건으로 동작한다. 요청된 클래스와 요청된 레이블이 있는 PV만 PVC에 +바인딩될 수 있다. + +{{< note >}} +현재 비어 있지 않은 `selector`가 있는 PVC에는 PV를 동적으로 프로비저닝할 수 없다. +{{< /note >}} + +이전에는 `volume.beta.kubernetes.io/storage-class` 어노테이션이 `storageClassName` +속성 대신 사용되었다. 이 어노테이션은 아직까지는 사용할 수 있지만, +향후 쿠버네티스 릴리스에서는 지원되지 않는다. + +## 볼륨으로 클레임하기 + +클레임을 볼륨으로 사용해서 파드가 스토리지에 접근한다. 클레임은 클레임을 사용하는 파드와 동일한 네임스페이스에 있어야 한다. 클러스터는 파드의 네임스페이스에서 클레임을 찾고 이를 사용하여 클레임과 관련된 퍼시스턴트볼륨을 얻는다. 그런 다음 볼륨이 호스트와 파드에 마운트된다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: myfrontend + image: nginx + volumeMounts: + - mountPath: "/var/www/html" + name: mypd + volumes: + - name: mypd + persistentVolumeClaim: + claimName: myclaim +``` + +### 네임스페이스에 대한 참고 사항 + +퍼시스턴트볼륨 바인딩은 배타적이며, 퍼시스턴트볼륨클레임은 네임스페이스 오브젝트이므로 "다중" 모드(`ROX`, `RWX`)를 사용한 클레임은 하나의 네임스페이스 내에서만 가능하다. + +## 원시 블록 볼륨 지원 + +{{< feature-state for_k8s_version="v1.18" state="stable" >}} + +다음 볼륨 플러그인에 해당되는 경우 동적 프로비저닝을 포함하여 원시 블록 볼륨을 +지원한다. + +* AWSElasticBlockStore +* AzureDisk +* CSI +* FC (파이버 채널) +* GCEPersistentDisk +* iSCSI +* Local volume +* OpenStack Cinder +* RBD (Ceph Block Device) +* VsphereVolume + +### 원시 블록 볼륨을 사용하는 퍼시스턴트볼륨 {#persistent-volume-using-a-raw-block-volume} + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: block-pv +spec: + capacity: + storage: 10Gi + accessModes: + - ReadWriteOnce + volumeMode: Block + persistentVolumeReclaimPolicy: Retain + fc: + targetWWNs: ["50060e801049cfd1"] + lun: 0 + readOnly: false +``` +### 원시 블록 볼륨을 요청하는 퍼시스턴트볼륨클레임 {#persistent-volume-claim-requesting-a-raw-block-volume} + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: block-pvc +spec: + accessModes: + - ReadWriteOnce + volumeMode: Block + resources: + requests: + storage: 10Gi +``` + +### 컨테이너에 원시 블록 장치 경로를 추가하는 파드 명세 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pod-with-block-volume +spec: + containers: + - name: fc-container + image: fedora:26 + command: ["/bin/sh", "-c"] + args: [ "tail -f /dev/null" ] + volumeDevices: + - name: data + devicePath: /dev/xvda + volumes: + - name: data + persistentVolumeClaim: + claimName: block-pvc +``` + +{{< note >}} +파드에 대한 원시 블록 장치를 추가할 때 마운트 경로 대신 컨테이너에 장치 경로를 지정한다. +{{< /note >}} + +### 블록 볼륨 바인딩 + +사용자가 퍼시스턴트볼륨클레임 스펙에서 `volumeMode` 필드를 사용하여 이를 나타내는 원시 블록 볼륨을 요청하는 경우 바인딩 규칙은 스펙의 일부분으로 이 모드를 고려하지 않은 이전 릴리스에 비해 약간 다르다. +사용자와 관리자가 원시 블록 장치를 요청하기 위해 지정할 수 있는 가능한 조합의 표가 아래 나열되어 있다. 이 테이블은 볼륨이 바인딩되는지 여부를 나타낸다. +정적 프로비저닝된 볼륨에 대한 볼륨 바인딩 매트릭스이다. + +| PV volumeMode | PVC volumeMode | Result | +| --------------|:---------------:| ----------------:| +| 지정되지 않음 | 지정되지 않음 | BIND | +| 지정되지 않음 | Block | NO BIND | +| 지정되지 않음 | Filesystem | BIND | +| Block | 지정되지 않음 | NO BIND | +| Block | Block | BIND | +| Block | Filesystem | NO BIND | +| Filesystem | Filesystem | BIND | +| Filesystem | Block | NO BIND | +| Filesystem | 지정되지 않음 | BIND | + +{{< note >}} +알파 릴리스에서는 정적으로 프로비저닝된 볼륨만 지원된다. 관리자는 원시 블록 장치로 작업할 때 이러한 값을 고려해야 한다. 
+{{< /note >}} + +## 볼륨 스냅샷 및 스냅샷 지원에서 볼륨 복원 + +{{< feature-state for_k8s_version="v1.17" state="beta" >}} + +CSI 볼륨 플러그인만 지원하도록 볼륨 스냅샷 기능이 추가되었다. 자세한 내용은 [볼륨 스냅샷](/docs/concepts/storage/volume-snapshots/)을 참고한다. + +볼륨 스냅샷 데이터 소스에서 볼륨 복원을 지원하려면 apiserver와 controller-manager에서 +`VolumeSnapshotDataSource` 기능 게이트를 활성화한다. + +### 볼륨 스냅샷에서 퍼시스턴트볼륨클레임 생성 {#create-persistent-volume-claim-from-volume-snapshot} + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: restore-pvc +spec: + storageClassName: csi-hostpath-sc + dataSource: + name: new-snapshot-test + kind: VolumeSnapshot + apiGroup: snapshot.storage.k8s.io + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + +## 볼륨 복제 + +[볼륨 복제](/ko/docs/concepts/storage/volume-pvc-datasource/)는 CSI 볼륨 플러그인만 사용할 수 있다. + +### 기존 pvc에서 퍼시스턴트볼륨클레임 생성 {#create-persistent-volume-claim-from-an-existing-pvc} + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: cloned-pvc +spec: + storageClassName: my-csi-plugin + dataSource: + name: existing-src-pvc-name + kind: PersistentVolumeClaim + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + +## 포터블 구성 작성 + +광범위한 클러스터에서 실행되고 퍼시스턴트 스토리지가 필요한 +구성 템플릿 또는 예제를 작성하는 경우 다음 패턴을 사용하는 것이 좋다. + +- 구성 번들(디플로이먼트, 컨피그맵 등)에 퍼시스턴트볼륨클레임 + 오브젝트를 포함시킨다. +- 구성을 인스턴스화 하는 사용자에게 퍼시스턴트볼륨을 생성할 권한이 없을 수 있으므로 + 퍼시스턴트볼륨 오브젝트를 구성에 포함하지 않는다. +- 템플릿을 인스턴스화 할 때 스토리지 클래스 이름을 제공하는 옵션을 + 사용자에게 제공한다. + - 사용자가 스토리지 클래스 이름을 제공하는 경우 해당 값을 + `permanentVolumeClaim.storageClassName` 필드에 입력한다. + 클러스터에서 관리자가 스토리지클래스를 활성화한 경우 + PVC가 올바른 스토리지 클래스와 일치하게 된다. + - 사용자가 스토리지 클래스 이름을 제공하지 않으면 + `permanentVolumeClaim.storageClassName` 필드를 nil로 남겨둔다. + 그러면 클러스터에 기본 스토리지클래스가 있는 사용자에 대해 PV가 자동으로 프로비저닝된다. + 많은 클러스터 환경에 기본 스토리지클래스가 설치되어 있거나 관리자가 + 고유한 기본 스토리지클래스를 생성할 수 있다. +- 도구(tooling)에서 일정 시간이 지나도 바인딩되지 않는 PVC를 관찰하여 사용자에게 + 노출시킨다. 이는 클러스터가 동적 스토리지를 지원하지 + 않거나(이 경우 사용자가 일치하는 PV를 생성해야 함), + 클러스터에 스토리지 시스템이 없음을 나타낸다(이 경우 + 사용자는 PVC가 필요한 구성을 배포할 수 없음). +{{% /capture %}} + {{% capture whatsnext %}} + +* [퍼시스턴트볼륨 생성](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume)에 대해 자세히 알아보기 +* [퍼시스턴트볼륨클레임 생성](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim)에 대해 자세히 알아보기 +* [퍼시스턴트 스토리지 설계 문서](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md) 읽기 + +### 참고 + +* [퍼시스턴트볼륨](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolume-v1-core) +* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core) +* [퍼시스턴트볼륨클레임](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core) +* [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core) +{{% /capture %}} diff --git a/content/ko/docs/concepts/storage/storage-classes.md b/content/ko/docs/concepts/storage/storage-classes.md new file mode 100644 index 0000000000000..66dd33d94143a --- /dev/null +++ b/content/ko/docs/concepts/storage/storage-classes.md @@ -0,0 +1,818 @@ +--- +title: 스토리지 클래스 +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +이 문서는 쿠버네티스의 스토리지클래스의 개념을 설명한다. +[볼륨](/ko/docs/concepts/storage/volumes/)과 +[퍼시스턴트 볼륨](/ko/docs/concepts/storage/persistent-volumes)에 익숙해지는 것을 권장한다. 
+ +{{% /capture %}} + +{{% capture body %}} + +## 소개 + +스토리지클래스는 관리자가 제공하는 스토리지의 "classes"를 설명할 수 있는 +방법을 제공한다. 다른 클래스는 서비스의 품질 수준 또는 +백업 정책, 클러스터 관리자가 정한 임의의 정책에 +매핑될 수 있다. 쿠버네티스 자체는 클래스가 무엇을 나타내는지에 +대해 상관하지 않는다. 다른 스토리지 시스템에서는 이 개념을 +"프로파일"이라고도 한다. + +## 스토리지클래스 리소스 + +각 스토리지클래스에는 해당 스토리지클래스에 속하는 퍼시스턴트볼륨을 동적으로 프로비저닝 +할 때 사용되는 `provisioner`, `parameters` 와 +`reclaimPolicy` 필드가 포함된다. + +스토리지클래스 오브젝트의 이름은 중요하며, 사용자가 특정 +클래스를 요청할 수 있는 방법이다. 관리자는 스토리지클래스 오브젝트를 +처음 생성할 때 클래스의 이름과 기타 파라미터를 설정하며, +일단 생성된 오브젝트는 업데이트할 수 없다. + +관리자는 특정 클래스에 바인딩을 요청하지 않는 PVC에 대해서만 기본 +스토리지클래스를 지정할 수 있다. 자세한 내용은 +[퍼시스턴트볼륨클레임 섹션](/ko/docs/concepts/storage/persistent-volumes/#클래스-1)을 +본다. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: standard +provisioner: kubernetes.io/aws-ebs +parameters: + type: gp2 +reclaimPolicy: Retain +allowVolumeExpansion: true +mountOptions: + - debug +volumeBindingMode: Immediate +``` + +### 프로비저너 + +각 스토리지클래스에는 PV 프로비저닝에 사용되는 볼륨 플러그인을 결정하는 +프로비저너가 있다. 이 필드는 반드시 지정해야 한다. + +| 볼륨 플러그인 | 내부 프로비저너 | 설정 예시 | +| :--- | :---: | :---: | +| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) | +| AzureFile | ✓ | [Azure 파일](#azure-파일) | +| AzureDisk | ✓ | [Azure 디스크](#azure-디스크) | +| CephFS | - | - | +| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder)| +| FC | - | - | +| FlexVolume | - | - | +| Flocker | ✓ | - | +| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) | +| Glusterfs | ✓ | [Glusterfs](#glusterfs) | +| iSCSI | - | - | +| Quobyte | ✓ | [Quobyte](#quobyte) | +| NFS | - | - | +| RBD | ✓ | [Ceph RBD](#ceph-rbd) | +| VsphereVolume | ✓ | [vSphere](#vsphere) | +| PortworxVolume | ✓ | [Portworx 볼륨](#portworx-볼륨) | +| ScaleIO | ✓ | [ScaleIO](#scaleio) | +| StorageOS | ✓ | [StorageOS](#storageos) | +| Local | - | [Local](#local) | + +여기 목록에서 "내부" 프로비저너를 지정할 수 있다(이 +이름은 "kubernetes.io" 가 접두사로 시작하고, 쿠버네티스와 +함께 제공된다). 또한, 쿠버네티스에서 정의한 +[사양](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md)을 +따르는 독립적인 프로그램인 외부 프로비저너를 실행하고 지정할 수 있다. +외부 프로비저너의 작성자는 코드의 수명, 프로비저너의 +배송 방법, 실행 방법, (Flex를 포함한)볼륨 플러그인 +등에 대한 완전한 재량권을 가진다. [kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner) +리포지터리에는 대량의 사양을 구현하는 외부 프로비저너를 작성하기 +위한 라이브러리가 있다. 일부 외부 프로비저너의 목록은 +[kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage) 리포지터리에 있다. + +예를 들어, NFS는 내부 프로비저너를 제공하지 않지만, 외부 +프로비저너를 사용할 수 있다. 타사 스토리지 업체가 자체 외부 +프로비저너를 제공하는 경우도 있다. + +### 리클레임 정책 + +스토리지클래스에 의해 동적으로 생성된 퍼시스턴트볼륨은 클래스의 +`reclaimPolicy` 필드에 지정된 리클레임 정책을 가지는데, +이는 `Delete` 또는 `Retain` 이 될 수 있다. 스토리지클래스 오브젝트가 +생성될 때 `reclaimPolicy` 가 지정되지 않으면 기본값은 `Delete` 이다. + +수동으로 생성되고 스토리지클래스를 통해 관리되는 퍼시스턴트볼륨에는 +생성 시 할당된 리클레임 정책이 있다. + +### 볼륨 확장 허용 + +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + +퍼시스턴트볼륨은 확장이 가능하도록 구성할 수 있다. 이 기능을 `true` 로 설정하면 +해당 PVC 오브젝트를 편집하여 볼륨 크기를 조정할 수 있다. + +다음 볼륨 유형은 기본 스토리지클래스에서 `allowVolumeExpansion` 필드가 +true로 설정된 경우 볼륨 확장을 지원한다. + +{{< table caption = "Table of Volume types and the version of Kubernetes they require" >}} + +볼륨 유형 | 요구되는 쿠버네티스 버전 +:---------- | :-------------------------- +gcePersistentDisk | 1.11 +awsElasticBlockStore | 1.11 +Cinder | 1.11 +glusterfs | 1.11 +rbd | 1.11 +Azure File | 1.11 +Azure Disk | 1.11 +Portworx | 1.11 +FlexVolume | 1.13 +CSI | 1.14 (alpha), 1.16 (beta) + +{{< /table >}} + + +{{< note >}} +볼륨 확장 기능을 사용해서 볼륨을 확장할 수 있지만, 볼륨을 축소할 수는 없다. 
+{{< /note >}} + +### 마운트 옵션 + +스토리지클래스에 의해 동적으로 생성된 퍼시스턴트볼륨은 +클래스의 `mountOptions` 필드에 지정된 마운트 옵션을 가진다. + +만약 볼륨 플러그인이 마운트 옵션을 지원하지 않는데, 마운트 +옵션을 지정하면 프로비저닝은 실패한다. 마운트 옵션은 클래스 또는 PV 에서 +검증되지 않으므로 PV 마운트가 유효하지 않으면 마운트가 실패하게 된다. + +### 볼륨 바인딩 모드 + +`volumeBindingMode` 필드는 [볼륨 바인딩과 동적 +프로비저닝](/ko/docs/concepts/storage/persistent-volumes/#프로비저닝)의 시작 시기를 제어한다. + +기본적으로, `Immediate` 모드는 퍼시스턴트볼륨클레임이 생성되면 볼륨 +바인딩과 동적 프로비저닝이 즉시 발생하는 것을 나타낸다. 토폴로지 제약이 +있고 클러스터의 모든 노드에서 전역적으로 접근할 수 없는 스토리지 +백엔드의 경우, 파드의 스케줄링 요구 사항에 대한 지식 없이 퍼시스턴트볼륨이 +바인딩되거나 프로비저닝된다. 이로 인해 스케줄되지 않은 파드가 발생할 수 있다. + +클러스터 관리자는 `WaitForFirstConsumer` 모드를 지정해서 이 문제를 해결할 수 있는데 +이 모드는 퍼시스턴트볼륨클레임을 사용하는 파드가 생성될 때까지 퍼시스턴트볼륨의 바인딩과 프로비저닝을 지연시킨다. +퍼시스턴트볼륨은 파드의 스케줄링 제약 조건에 의해 지정된 토폴로지에 +따라 선택되거나 프로비전된다. 여기에는 [리소스 +요구 사항](/docs/concepts/configuration/manage-compute-resources-container/), +[노드 셀렉터](/ko/docs/concepts/configuration/assign-pod-node/#노드-셀렉터-nodeselector), +[파드 어피니티(affinity)와 +안티-어피니티(anti-affinity)](/ko/docs/concepts/configuration/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity) +그리고 [테인트(taint)와 톨러레이션(toleration)](/docs/concepts/configuration/taint-and-toleration/)이 포함된다. + +다음 플러그인은 동적 프로비저닝과 `WaitForFirstConsumer` 를 지원한다. + +* [AWSElasticBlockStore](#aws-ebs) +* [GCEPersistentDisk](#gce-pd) +* [Azure디스크](#azure-디스크) + +다음 플러그인은 사전에 생성된 퍼시스턴트볼륨 바인딩으로 `WaitForFirstConsumer` 를 지원한다. + +* 위에서 언급한 모든 플러그인 +* [Local](#local) + +{{< feature-state state="stable" for_k8s_version="v1.17" >}} +[CSI 볼륨](/ko/docs/concepts/storage/volumes/#csi)은 동적 프로비저닝과 +사전에 생성된 PV에서도 지원되지만, 지원되는 토폴로지 키와 예시를 보려면 해당 +CSI 드라이버에 대한 문서를 본다. + +### 허용된 토폴로지 + +클러스터 운영자가 `WaitForFirstConsumer` 볼륨 바인딩 모드를 지정하면, 대부분의 상황에서 +더 이상 특정 토폴로지로 프로비저닝을 제한할 필요가 없다. 그러나 +여전히 필요한 경우에는 `allowedTopologies` 를 지정할 수 있다. + +이 예시는 프로비전된 볼륨의 토폴로지를 특정 영역으로 제한하는 방법을 +보여 주며 지원되는 플러그인의 `zone` 과 `zones` 파라미터를 대체하는 +데 사용해야 한다. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: standard +provisioner: kubernetes.io/gce-pd +parameters: + type: pd-standard +volumeBindingMode: WaitForFirstConsumer +allowedTopologies: +- matchLabelExpressions: + - key: failure-domain.beta.kubernetes.io/zone + values: + - us-central1-a + - us-central1-b +``` + +## 파라미터 + +스토리지 클래스에는 스토리지 클래스에 속하는 볼륨을 설명하는 파라미터가 +있다. `provisioner` 에 따라 다른 파라미터를 사용할 수 있다. 예를 들어, +파라미터 `type` 에 대한 값 `io1` 과 파라미터 `iopsPerGB` 는 +EBS에만 사용할 수 있다. 파라미터 생략 시 일부 기본값이 +사용된다. + +스토리지클래스에 대해 최대 512개의 파라미터를 정의할 수 있다. +키와 값을 포함하여 파라미터 오브젝터의 총 길이는 256 KiB를 +초과할 수 없다. + +### AWS EBS + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/aws-ebs +parameters: + type: io1 + iopsPerGB: "10" + fsType: ext4 +``` + +* `type`: `io1`, `gp2`, `sc1`, `st1`. 자세한 내용은 + [AWS 문서](https://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/ebs-volume-types.html)를 + 본다. 기본값: `gp2`. +* `zone` (사용 중단(deprecated)): AWS 영역. `zone` 과 `zones` 를 지정하지 않으면, 일반적으로 + 쿠버네티스 클러스터의 노드가 있는 모든 활성화된 영역에 걸쳐 볼륨이 + 라운드 로빈으로 조정된다. `zone` 과 `zones` 파라미터를 동시에 사용해서는 안된다. +* `zones` (사용 중단): 쉼표로 구분된 AWS 영역의 목록. `zone` 과 `zones` 를 + 지정하지 않으면, 일반적으로 쿠버네티스 클러스터의 노드가 있는 모든 활성화된 영역에 걸쳐 + 볼륨이 라운드 로빈으로 조정된다. `zone` 과 `zones` 파라미터를 + 동시에 사용해서는 안된다. +* `iopsPerGB`: `io1` 볼륨 전용이다. 1초당 GiB에 대한 I/O 작업 수이다. AWS + 볼륨 플러그인은 요청된 볼륨 크기에 곱셈하여 볼륨의 IOPS를 + 계산하고 이를 20,000 IOPS로 제한한다(AWS에서 지원하는 최대값으로, + [AWS 문서](https://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/ebs-volume-types.html)를 본다). + 여기에는 문자열, 즉 `10` 이 아닌, `"10"` 이 필요하다. +* `fsType`: fsType은 쿠버네티스에서 지원된다. 기본값: `"ext4"`. 
+* `encrypted`: EBS 볼륨의 암호화 여부를 나타낸다. + 유효한 값은 `"ture"` 또는 `"false"` 이다. 여기에는 문자열, + 즉 `true` 가 아닌, `"true"` 가 필요하다. +* `kmsKeyId`: 선택 사항. 볼륨을 암호화할 때 사용할 키의 전체 Amazon + 리소스 이름이다. 아무것도 제공되지 않지만, `encrypted` 가 true라면 + AWS에 의해 키가 생성된다. 유효한 ARN 값은 AWS 문서를 본다. + +{{< note >}} +`zone` 과 `zones` 파라미터는 사용 중단 되었으며, +[allowedTopologies](#allowed-topologies)로 대체되었다. +{{< /note >}} + +### GCE PD + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/gce-pd +parameters: + type: pd-standard + fstype: ext4 + replication-type: none +``` + +* `type`: `pd-standard` 또는 `pd-ssd`. 기본값: `pd-standard` +* `zone` (사용 중단): GCE 영역. `zone` 과 `zones` 를 모두 지정하지 않으면, 쿠버네티스 클러스터의 + 노드가 있는 모든 활성화된 영역에 걸쳐 볼륨이 라운드 로빈으로 + 조정된다. `zone` 과 `zones` 파라미터를 동시에 사용해서는 안된다. +* `zones` (사용 중단): 쉼표로 구분되는 GCE 영역의 목록. `zone` 과 `zones` 를 모두 + 지정하지 않으면, 쿠버네티스 클러스터의 노드가 있는 모든 활성화된 영역에 + 걸쳐 볼륨이 라운드 로빈으로 조정된다. `zone` 과 `zones` 파라미터를 + 동시에 사용해서는 안된다. +* `fstype`: `ext4` 또는 `xfs`. 기본값: `ext4`. 정의된 파일시스템 유형은 호스트 운영체제에서 지원되어야 한다. + +* `replication-type`: `none` 또는 `regional-pd`. 기본값: `none`. + +`replication-type` 을 `none` 으로 설정하면 (영역) PD가 프로비전된다. + +`replication-type` 이 `regional-pd` 로 설정되면, +[지역 퍼시스턴트 디스크](https://cloud.google.com/compute/docs/disks/#repds) +가 프로비전된다. 이 경우, 사용자는 `zone` 대신 `zones` 를 사용해서 원하는 +복제 영역을 지정해야 한다. 정확히 두 개의 영역이 지정된 경우, 해당 +영역에서 지역 PD가 프로비전된다. 둘 이상의 영역이 지정되면 +쿠버네티스는 지정된 영역 중에서 임의로 선택한다. `zones` 파라미터가 생략되면, +쿠버네티스는 클러스터가 관리하는 영역 중에서 +임의로 선택한다. + +{{< note >}} +`zone` 과 `zones` 파라미터는 사용 중단 되었으며, +[allowedTopologies](#allowed-topologies)로 대체되었다. +{{< /note >}} + +### Glusterfs + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/glusterfs +parameters: + resturl: "http://127.0.0.1:8081" + clusterid: "630372ccdc720a92c681fb928f27b53f" + restauthenabled: "true" + restuser: "admin" + secretNamespace: "default" + secretName: "heketi-secret" + gidMin: "40000" + gidMax: "50000" + volumetype: "replicate:3" +``` + +* `resturl`: 필요에 따라 gluster 볼륨을 프로비전하는 Gluster REST 서비스/Heketi + 서비스 url 이다. 일반적인 형식은 `IPaddress:Port` 이어야 하며 이는 GlusterFS + 동적 프로비저너의 필수 파라미터이다. Heketi 서비스가 openshift/kubernetes + 설정에서 라우팅이 가능한 서비스로 노출이 되는 경우 이것은 fqdn이 해석할 수 있는 + Heketi 서비스 url인 `http://heketi-storage-project.cloudapps.mystorage.com` 과 + 유사한 형식을 가질 수 있다. +* `restauthenabled` : REST 서버에 대한 인증을 가능하게 하는 Gluster REST 서비스 + 인증 부울이다. 이 값이 `"true"` 이면, `restuser` 와 `restuserkey` + 또는 `secretNamespace` + `secretName` 을 채워야 한다. 이 옵션은 + 사용 중단이며, `restuser`, `restuserkey`, `secretName` 또는 + `secretNamespace` 중 하나를 지정하면 인증이 활성화된다. +* `restuser` : Gluster REST 서비스/Heketi 사용자로 Gluster Trusted Pool에서 + 볼륨을 생성할 수 있다. +* `restuserkey` : REST 서버에 대한 인증에 사용될 Gluster REST 서비스/Heketi + 사용자의 암호이다. 이 파라미터는 `secretNamespace` + `secretName` 을 위해 + 사용 중단 되었다. +* `secretNamespace`, `secretName` : Gluster REST 서비스와 통신할 때 사용할 + 사용자 암호가 포함된 시크릿 인스턴스를 식별한다. 이 파라미터는 + 선택 사항으로 `secretNamespace` 와 `secretName` 을 모두 생략하면 + 빈 암호가 사용된다. 제공된 시크릿은 `"kubernetes.io/glusterfs"` 유형이어야 + 하며, 예를 들어 다음과 같이 생성한다. + + ``` + kubectl create secret generic heketi-secret \ + --type="kubernetes.io/glusterfs" --from-literal=key='opensesame' \ + --namespace=default + ``` + + 시크릿의 예시는 + [glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml)에서 찾을 수 있다. + +* `clusterid`: `630372ccdc720a92c681fb928f27b53f` 는 볼륨을 프로비저닝할 + 때 Heketi가 사용할 클러스터의 ID이다. 또한, 예시와 같이 클러스터 + ID 목록이 될 수 있다. 
예: + `"8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397"`. 이것은 + 선택적 파라미터이다. +* `gidMin`, `gidMax` : 스토리지클래스에 대한 GID 범위의 최소값과 + 최대값이다. 이 범위( gidMin-gidMax )의 고유한 값(GID)은 동적으로 + 프로비전된 볼륨에 사용된다. 이것은 선택적인 값이다. 지정하지 않으면, + 볼륨은 각각 gidMin과 gidMax의 기본값인 2000-2147483647 + 사이의 값으로 프로비전된다. +* `volumetype` : 볼륨 유형과 파라미터는 이 선택적 값으로 구성할 + 수 있다. 볼륨 유형을 언급하지 않는 경우, 볼륨 유형을 결정하는 것은 + 프로비저너의 책임이다. + + 예를 들어: + * 레플리카 볼륨: `volumetype: replicate:3` 여기서 '3'은 레플리카의 수이다. + * Disperse/EC 볼륨: `volumetype: disperse:4:2` 여기서 '4'는 데이터이고 '2'는 중복 횟수이다. + * Distribute 볼륨: `volumetype: none` + + 사용 가능한 볼륨 유형과 관리 옵션에 대해서는 + [관리 가이드](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/part-Overview.html)를 참조한다. + + 자세한 정보는 + [Heketi 구성 방법](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)을 참조한다. + + 퍼시스턴트 볼륨이 동적으로 프로비전되면 Gluster 플러그인은 + `gluster-dynamic-` 이라는 이름으로 엔드포인트와 + 헤드리스 서비스를 자동으로 생성한다. 퍼시스턴트 볼륨 클레임을 + 삭제하면 동적 엔드포인트와 서비스가 자동으로 삭제된다. + +### OpenStack Cinder + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: gold +provisioner: kubernetes.io/cinder +parameters: + availability: nova +``` + +* `availability`: 가용성 영역. 지정하지 않으면, 일반적으로 쿠버네티스 클러스터의 + 노드가 있는 모든 활성화된 영역에 걸쳐 볼륨이 라운드 로빈으로 조정된다. + +{{< note >}} +{{< feature-state state="deprecated" for_k8s_version="v1.11" >}} +이 OpenStack 내부 프로비저너는 사용 중단 되었다. [OpenStack용 외부 클라우드 공급자](https://github.com/kubernetes/cloud-provider-openstack)를 사용한다. +{{< /note >}} + +### vSphere + +1. 사용자 지정 디스크 형식으로 스토리지클래스를 생성한다. + + ```yaml + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: fast + provisioner: kubernetes.io/vsphere-volume + parameters: + diskformat: zeroedthick + ``` + + `diskformat`: `thin`, `zeroedthick` 와 `eagerzeroedthick`. 기본값: `"thin"`. + +2. 사용자 지정 데이터스토어에 디스크 형식으로 스토리지클래스를 생성한다. + + ```yaml + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: fast + provisioner: kubernetes.io/vsphere-volume + parameters: + diskformat: zeroedthick + datastore: VSANDatastore + ``` + + `datastore`: 또한, 사용자는 스토리지클래스에서 데이터스토어를 지정할 수 있다. + 볼륨은 스토리지클래스에 지정된 데이터스토어에 생성되며, + 이 경우 `VSANDatastore` 이다. 이 필드는 선택 사항이다. 데이터스토어를 + 지정하지 않으면, vSphere 클라우드 공급자를 초기화하는데 사용되는 vSphere + 설정 파일에 지정된 데이터스토어에 볼륨이 + 생성된다. + +3. 쿠버네티스 내부 스토리지 정책을 관리한다. + + * 기존 vCenter SPBM 정책을 사용한다. + + vSphere 스토리지 관리의 가장 중요한 기능 중 하나는 + 정책 기반 관리이다. 스토리지 정책 기반 관리(Storage Policy Based Management (SPBM))는 + 광범위한 데이터 서비스와 스토리지 솔루션에서 단일 통합 컨트롤 플레인을 + 제공하는 스토리지 정책 프레임워크이다. SPBM을 통해 vSphere 관리자는 용량 계획, + 차별화된 서비스 수준과 용량의 헤드룸(headroom) 관리와 같은 + 선행 스토리지 프로비저닝 문제를 + 극복할 수 있다. + + SPBM 정책은 `storagePolicyName` 파라미터를 사용하면 + 스토리지클래스에서 지정할 수 있다. + + * 쿠버네티스 내부의 가상 SAN 정책 지원 + + Vsphere 인프라스트럭쳐(Vsphere Infrastructure (VI)) 관리자는 + 동적 볼륨 프로비저닝 중에 사용자 정의 가상 SAN 스토리지 + 기능을 지정할 수 있다. 이제 동적 볼륨 프로비저닝 중에 스토리지 + 기능의 형태로 성능 및 가용성과 같은 스토리지 요구 사항을 정의할 + 수 있다. 스토리지 기능 요구 사항은 가상 SAN 정책으로 변환된 + 퍼시스턴트 볼륨(가상 디스크)을 생성할 때 + 가상 SAN 계층으로 푸시된다. 가상 디스크는 가상 SAN 데이터 + 스토어에 분산되어 요구 사항을 충족시키게 된다. + + 퍼시스턴트 볼륨 관리에 스토리지 정책을 사용하는 방법에 대한 자세한 내용은 + [볼륨의 동적 프로비저닝을 위한 스토리지 정책 기반 관리(SPBM)](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html)를 + 참조한다. + +vSphere용 쿠버네티스 내에서 퍼시스턴트 볼륨 관리를 시도하는 +[vSphere 예시](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere)는 +거의 없다. 
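+
+앞서 설명한 것처럼 SPBM 정책은 스토리지클래스의 `storagePolicyName` 파라미터로 지정할 수 있다.
+아래는 이를 보여주는 간단한 예시이며, 정책 이름 `gold` 는 설명을 위해 가정한 값이므로
+실제로는 vCenter에 미리 정의된 스토리지 정책 이름을 사용해야 한다.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: fast
+provisioner: kubernetes.io/vsphere-volume
+parameters:
+  # vCenter에 정의된 SPBM 스토리지 정책 이름(설명을 위해 가정한 값)
+  storagePolicyName: gold
+```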
+ +### Ceph RBD + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: fast +provisioner: kubernetes.io/rbd +parameters: + monitors: 10.16.153.105:6789 + adminId: kube + adminSecretName: ceph-secret + adminSecretNamespace: kube-system + pool: kube + userId: kube + userSecretName: ceph-secret-user + userSecretNamespace: default + fsType: ext4 + imageFormat: "2" + imageFeatures: "layering" +``` + +* `monitors`: 쉼표로 구분된 Ceph 모니터. 이 파라미터는 필수이다. +* `adminId`: 풀에 이미지를 생성할 수 있는 Ceph 클라이언트 ID. + 기본값은 "admin". +* `adminSecretName`: `adminId` 의 시크릿 이름. 이 파라미터는 필수이다. + 제공된 시크릿은 "kubernetes.io/rbd" 유형이어야 한다. +* `adminSecretNamespace`: `adminSecretName` 의 네임스페이스. 기본값은 "default". +* `pool`: Ceph RBD 풀. 기본값은 "rbd". +* `userId`: RBD 이미지를 매핑하는 데 사용하는 Ceph 클라이언트 ID. 기본값은 + `adminId` 와 동일하다. +* `userSecretName`: RDB 이미지를 매핑하기 위한 `userId` 에 대한 Ceph 시크릿 이름. PVC와 + 동일한 네임스페이스에 존재해야 한다. 이 파라미터는 필수이다. + 제공된 시크릿은 "kubernetes.io/rbd" 유형이어야 하며, 다음의 예시와 같이 + 생성되어야 한다. + + ```shell + kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \ + --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \ + --namespace=kube-system + ``` +* `userSecretNamespace`: `userSecretName` 의 네임스페이스. +* `fsType`: 쿠버네티스가 지원하는 fsType. 기본값: `"ext4"`. +* `imageFormat`: Ceph RBD 이미지 형식, "1" 또는 "2". 기본값은 "2". +* `imageFeatures`: 이 파라미터는 선택 사항이며, `imageFormat` 을 "2"로 설정한 + 경우에만 사용해야 한다. 현재 `layering` 에서만 기능이 지원된다. + 기본값은 ""이며, 기능이 설정되어 있지 않다. + +### Quobyte + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/quobyte +parameters: + quobyteAPIServer: "http://138.68.74.142:7860" + registry: "138.68.74.142:7861" + adminSecretName: "quobyte-admin-secret" + adminSecretNamespace: "kube-system" + user: "root" + group: "root" + quobyteConfig: "BASE" + quobyteTenant: "DEFAULT" +``` + +* `quobyteAPIServer`: `"http(s)://api-server:7860"` 형식의 + Quobyte의 API 서버이다. +* `registry`: 볼륨을 마운트하는 데 사용할 Quobyte 레지스트리이다. 레지스트리를 + ``:`` 의 쌍으로 지정하거나 여러 레지스트리를 + 지정하려면 쉼표만 있으면 된다. + 예: ``:,:,:`` + 호스트는 IP 주소이거나 DNS가 작동 중인 경우 + DNS 이름을 제공할 수도 있다. +* `adminSecretNamespace`: `adminSecretName` 의 네임스페이스. + 기본값은 "default". +* `adminSecretName`: 시크릿은 API 서버에 대해 인증하기 위한 Quobyte 사용자와 암호에 + 대한 정보를 담고 있다. 제공된 시크릿은 "kubernetes.io/quobyte" + 유형과 `user` 및 `password` 키를 가져야 하며, 예를 들면 + 다음과 같다. + + ```shell + kubectl create secret generic quobyte-admin-secret \ + --type="kubernetes.io/quobyte" --from-literal=user='admin' --from-literal=password='opensesame' \ + --namespace=kube-system + ``` + +* `user`: 이 사용자에 대한 모든 접근을 매핑한다. 기본값은 "root". +* `group`: 이 그룹에 대한 모든 접근을 매핑한다. 기본값은 "nfsnobody". +* `quobyteConfig`: 지정된 구성을 사용해서 볼륨을 생성한다. 웹 콘솔 + 또는 quobyte CLI를 사용해서 새 구성을 작성하거나 기존 구성을 + 수정할 수 있다. 기본값은 "BASE". +* `quobyteTenant`: 지정된 테넌트 ID를 사용해서 볼륨을 생성/삭제한다. + 이 Quobyte 테넌트는 이미 Quobyte에 있어야 한다. + 기본값은 "DEFAULT". + +### Azure 디스크크 + +#### Azure 비관리 디스크 스토리지 클래스 {#azure-unmanaged-disk-storage-class} + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/azure-disk +parameters: + skuName: Standard_LRS + location: eastus + storageAccount: azure_storage_account_name +``` + +* `skuName`: Azure 스토리지 계정 Sku 계층. 기본값은 없음. +* `location`: Azure 스토리지 계정 지역. 기본값은 없음. +* `storageAccount`: Azure 스토리지 계정 이름. 스토리지 계정이 제공되면, 클러스터와 동일한 + 리소스 그룹에 있어야 하며, `location` 은 무시된다. 스토리지 계정이 + 제공되지 않으면, 클러스터와 동일한 리소스 + 그룹에 새 스토리지 계정이 생성된다. 
+ +#### Azure 디스크 스토리지 클래스(v1.7.2부터 제공) {#azure-disk-storage-class} + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/azure-disk +parameters: + storageaccounttype: Standard_LRS + kind: Shared +``` + +* `storageaccounttype`: Azure 스토리지 계정 Sku 계층. 기본값은 없음. +* `kind`: 가능한 값은 `shared` (기본값), `dedicated`, 그리고 `managed` 이다. + `kind` 가 `shared` 인 경우, 모든 비관리 디스크는 클러스터와 + 동일한 리소스 그룹에 있는 몇 개의 공유 스토리지 계정에 생성된다. `kind` 가 + `dedicated` 인 경우, 클러스터와 동일한 리소스 그룹에서 새로운 + 비관리 디스크에 대해 새로운 전용 스토리지 계정이 생성된다. `kind` 가 + `managed` 인 경우, 모든 관리 디스크는 클러스터와 동일한 리소스 + 그룹에 생성된다. +* `resourceGroup`: Azure 디스크를 만들 리소스 그룹을 지정한다. + 기존에 있는 리소스 그룹 이름이어야 한다. 지정되지 않는 경우, 디스크는 + 현재 쿠버네티스 클러스터와 동일한 리소스 그룹에 배치된다. + +- 프리미엄 VM은 표준 LRS(Standard_LRS)와 프리미엄 LRS(Premium_LRS) 디스크를 모두 연결할 수 있는 반면에, + 표준 VM은 표준 LRS(Standard_LRS) 디스크만 연결할 수 있다. +- 관리되는 VM은 관리되는 디스크만 연결할 수 있고, + 비관리 VM은 비관리 디스크만 연결할 수 있다. + +### Azure 파일 + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: azurefile +provisioner: kubernetes.io/azure-file +parameters: + skuName: Standard_LRS + location: eastus + storageAccount: azure_storage_account_name +``` + +* `skuName`: Azure 스토리지 계정 Sku 계층. 기본값은 없음. +* `location`: Azure 스토리지 계정 지역. 기본값은 없음. +* `storageAccount`: Azure 스토리지 계정 이름. 기본값은 없음. 스토리지 계정이 + 제공되지 않으면, 리소스 그룹과 관련된 모든 스토리지 계정이 + 검색되어 `skuName` 과 `location` 이 일치하는 것을 찾는다. 스토리지 계정이 + 제공되면, 클러스터와 동일한 리소스 그룹에 있어야 + 하며 `skuName` 과 `location` 은 무시된다. +* `secretNamespace`: Azure 스토리지 계정 이름과 키가 포함된 시크릿 + 네임스페이스. 기본값은 파드와 동일하다. +* `secretName`: Azure 스토리지 계정 이름과 키가 포함된 시크릿 이름. + 기본값은 `azure-storage-account--secret` +* `readOnly`: 스토리지가 읽기 전용으로 마운트되어야 하는지 여부를 나타내는 플래그. + 읽기/쓰기 마운트를 의미하는 기본값은 false. 이 설정은 + 볼륨마운트(VolumeMounts)의 `ReadOnly` 설정에도 영향을 준다. + +스토리지 프로비저닝 중에 마운트 자격증명에 대해 `secretName` +이라는 시크릿이 생성된다. 클러스터에 +[RBAC](/docs/reference/access-authn-authz/rbac/)과 +[컨트롤러의 롤(role)들](/docs/reference/access-authn-authz/rbac/#controller-roles)을 +모두 활성화한 경우, clusterrole `system:controller:persistent-volume-binder` +에 대한 `secret` 리소스에 `create` 권한을 추가한다. + +다중 테넌시 컨텍스트에서 `secretNamespace` 의 값을 명시적으로 설정하는 +것을 권장하며, 그렇지 않으면 다른 사용자가 스토리지 계정 자격증명을 +읽을 수 있기 때문이다. + +### Portworx 볼륨 + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: portworx-io-priority-high +provisioner: kubernetes.io/portworx-volume +parameters: + repl: "1" + snap_interval: "70" + io_priority: "high" + +``` + +* `fs`: 배치할 파일 시스템: `none/xfs/ext4` (기본값: `ext4`) +* `block_size`: Kbytes 단위의 블록 크기(기본값: `32`). +* `repl`: 레플리케이션 팩터 `1..3` (기본값: `1`)의 형태로 제공될 + 동기 레플리카의 수. 여기에는 문자열, + 즉 `0` 이 아닌, `"0"` 이 필요하다. +* `io_priority`: 볼륨이 고성능 또는 우선 순위가 낮은 스토리지에서 + 생성될 것인지를 결정한다 `high/medium/low` (기본값: `low`). +* `snap_interval`: 스냅샷을 트리거할 때의 시각/시간 간격(분). + 스냅샷은 이전 스냅샷과의 차이에 따라 증분되며, 0은 스냅을 + 비활성화 한다(기본값: `0`). 여기에는 문자열, + 즉 `70` 이 아닌, `"70"` 이 필요하다. +* `aggregation_level`: 볼륨이 분배될 청크 수를 지정하며, 0은 집계되지 않은 + 볼륨을 나타낸다(기본값: `0`). 여기에는 문자열, + 즉 `0` 이 아닌, `"0"` 이 필요하다. +* `ephemeral`: 마운트 해제 후 볼륨을 정리해야 하는지 혹은 지속적이어야 + 하는지를 지정한다. `emptyDir` 에 대한 유스케이스는 이 값을 true로 + 설정할 수 있으며, `persistent volumes` 에 대한 유스케이스인 + 카산드라와 같은 데이터베이스는 false로 설정해야 한다. `true/false` (기본값 `false`) + 여기에는 문자열, 즉 `true` 가 아닌, `"true"` 가 필요하다. 
+ +### ScaleIO + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: slow +provisioner: kubernetes.io/scaleio +parameters: + gateway: https://192.168.99.200:443/api + system: scaleio + protectionDomain: pd0 + storagePool: sp1 + storageMode: ThinProvisioned + secretRef: sio-secret + readOnly: false + fsType: xfs +``` + +* `provisioner`: 속성이 `kubernetes.io/scaleio` 로 설정되어 있다. +* `gateway`: ScaleIO API 게이트웨이 주소(필수) +* `system`: ScaleIO 시스템의 이름(필수) +* `protectionDomain`: ScaleIO 보호 도메인의 이름(필수) +* `storagePool`: 볼륨 스토리지 풀의 이름(필수) +* `storageMode`: 스토리지 프로비전 모드: `ThinProvisioned` (기본값) 또는 + `ThickProvisioned` +* `secretRef`: 구성된 시크릿 오브젝트에 대한 참조(필수) +* `readOnly`: 마운트된 볼륨에 대한 접근 모드의 지정(기본값: false) +* `fsType`: 볼륨에 사용할 파일 시스템 유형(기본값: ext4) + +ScaleIO 쿠버네티스 볼륨 플러그인에는 구성된 시크릿 오브젝트가 필요하다. +시크릿은 다음 명령에 표시된 것처럼 `kubernetes.io/scaleio` 유형으로 +작성해야 하며, PVC와 동일한 네임스페이스 +값을 사용해야 한다. + +```shell +kubectl create secret generic sio-secret --type="kubernetes.io/scaleio" \ +--from-literal=username=sioadmin --from-literal=password=d2NABDNjMA== \ +--namespace=default +``` + +### StorageOS + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: fast +provisioner: kubernetes.io/storageos +parameters: + pool: default + description: Kubernetes volume + fsType: ext4 + adminSecretNamespace: default + adminSecretName: storageos-secret +``` + +* `pool`: 볼륨을 프로비전할 StorageOS 분산 용량 + 풀의 이름. 지정되지 않은 경우 일반적으로 존재하는 `default` 풀을 사용한다. +* `description`: 동적으로 생성된 볼륨에 할당할 설명. + 모든 볼륨 설명은 스토리지 클래스에 대해 동일하지만, 서로 다른 + 유스케이스에 대한 설명을 허용하기 위해 다른 스토리지 클래스를 사용할 수 있다. + 기본값은 `Kubernetes volume`. +* `fsType`: 요청할 기본 파일 시스템 유형. StorageOS 내의 사용자 + 정의 규칙이 이 값을 무시할 수 있다. 기본 값은 `ext4`. +* `adminSecretNamespace`: API 구성 시크릿이 있는 네임스페이스. + adminSecretName 이 설정된 경우 필수이다. +* `adminSecretName`: StorageOS API 자격증명을 얻는 데 사용할 시크릿의 이름. + 지정하지 않으면 기본값이 시도된다. + +StorageOS 쿠버네티스 볼륨 플러그인은 시크릿 오브젝트를 사용해서 StorageOS API에 +접근하기 위한 엔드포인트와 자격증명을 지정할 수 있다. 이것은 기본값이 +변경된 경우에만 필요하다. +시크릿은 다음의 명령과 같이 `kubernetes.io/storageos` 유형으로 +만들어야 한다. + +```shell +kubectl create secret generic storageos-secret \ +--type="kubernetes.io/storageos" \ +--from-literal=apiAddress=tcp://localhost:5705 \ +--from-literal=apiUsername=storageos \ +--from-literal=apiPassword=storageos \ +--namespace=default +``` + +동적으로 프로비전된 볼륨에 사용되는 시크릿은 모든 네임스페이스에서 +생성할 수 있으며 `adminSecretNamespace` 파라미터로 참조될 수 있다. +사전에 프로비전된 볼륨에서 사용하는 시크릿은 이를 참조하는 PVC와 +동일한 네임스페이스에서 작성해야 한다. + +### Local + +{{< feature-state for_k8s_version="v1.14" state="stable" >}} + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: local-storage +provisioner: kubernetes.io/no-provisioner +volumeBindingMode: WaitForFirstConsumer +``` + +로컬 볼륨은 현재 동적 프로비저닝을 지원하지 않지만, 파드 스케줄링까지 +볼륨 바인딩을 지연시키기 위해서는 스토리지클래스가 여전히 생성되어야 한다. 이것은 +`WaitForFirstConsumer` 볼륨 바인딩 모드에 의해 지정된다. + +볼륨 바인딩을 지연시키면 스케줄러가 퍼시스턴트볼륨클레임에 +적절한 퍼시스턴트볼륨을 선택할 때 파드의 모든 스케줄링 +제약 조건을 고려할 수 있다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/storage/volume-pvc-datasource.md b/content/ko/docs/concepts/storage/volume-pvc-datasource.md index 92ea37f8cc2ee..b58b882d6d1ee 100644 --- a/content/ko/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/ko/docs/concepts/storage/volume-pvc-datasource.md @@ -6,7 +6,6 @@ weight: 30 {{% capture overview %}} -{{< feature-state for_k8s_version="v1.16" state="beta" >}} 이 문서에서는 쿠버네티스의 기존 CSI 볼륨 복제의 개념을 설명한다. [볼륨] (/ko/docs/concepts/storage/volumes)을 숙지하는 것을 추천한다. @@ -32,6 +31,7 @@ weight: 30 * 복제는 동일한 스토리지 클래스 내에서만 지원된다. - 대상 볼륨은 소스와 동일한 스토리지 클래스여야 한다. 
- 기본 스토리지 클래스를 사용할 수 있으며, 사양에 storageClassName을 생략할 수 있다. +* 동일한 VolumeMode 설정을 사용하는 두 볼륨에만 복제를 수행할 수 있다(블록 모드 볼륨을 요청하는 경우에는 반드시 소스도 블록 모드여야 한다). ## 프로비저닝 diff --git a/content/ko/docs/concepts/storage/volume-snapshots.md b/content/ko/docs/concepts/storage/volume-snapshots.md new file mode 100644 index 0000000000000..534f3cf88b784 --- /dev/null +++ b/content/ko/docs/concepts/storage/volume-snapshots.md @@ -0,0 +1,149 @@ +--- +title: 볼륨 스냅샷 +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="1.17" state="beta" >}} +쿠버네티스에서 스토리지 시스템 볼륨 스냅샷은 _VolumeSnapshot_ 을 나타낸다. 이 문서는 이미 쿠버네티스 [퍼시스턴트 볼륨](/docs/concepts/storage/persistent-volumes/)에 대해 잘 알고 있다고 가정한다. + +{{% /capture %}} + + +{{% capture body %}} + +## 소개 + +API 리소스 `PersistentVolume` 및 `PersistentVolumeClaim` 가 사용자 및 관리자가 볼륨을 프로비전할 때의 방법과 유사하게, `VolumeSnapshotContent` 및 `VolumeSnapshot` API 리소스는 볼륨 스냅샷을 생성하기 위해 제공된다. + +`VolumeSnapshotContent` 는 관리자가 프로버져닝한 클러스터 볼륨에서의 스냅샷이다. 퍼시스턴트볼륨이 클러스터 리소스인 것처럼 이것 또한 클러스터 리소스이다. + +`VolumeSnapshot` 은 사용자가 볼륨의 스냅샷을 요청할 수 있는 방법이다. 이는 퍼시스턴트볼륨클레임과 유사하다. + +`VolumeSnapshotClass` 을 사용하면 `VolumeSnapshot` 에 속한 다른 속성을 지정할 수 있다. 이러한 속성은 스토리지 시스템에의 동일한 볼륨에서 가져온 스냅샷마다 다를 수 있으므로 `PersistentVolumeClaim` 의 `StorageClass` 를 사용하여 표현할 수는 없다. + +사용자는 이 기능을 사용할 때 다음 사항을 알고 있어야 한다. + +* API 객체인 `VolumeSnapshot`, `VolumeSnapshotContent`, `VolumeSnapshotClass` 는 핵심 API가 아닌, {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRDs" >}}이다. +* `VolumeSnapshot` 은 CSI 드라이버에서만 사용할 수 있다. +* 쿠버네티스 팀은 `VolumeSnapshot` 베타 버젼의 배포 프로세스 일부로써, 컨트롤 플레인에 배포할 스냅샷 컨트롤러와 CSI 드라이버와 함께 배포할 csi-snapshotter라는 사이드카 헬퍼(helper) 컨테이너를 제공한다. 스냅샷 컨트롤러는 `VolumeSnapshot` 및 `VolumeSnapshotContent` 오브젝트를 관찰하고 동적 프로비저닝에서 `VolumeSnapshotContent` 오브젝트의 생성 및 삭제를 할 수 있다.사이드카 csi-snapshotter는 `VolumeSnapshotContent` 오브젝트를 관찰하고 CSI 엔드포인트에 대해 `CreateSnapshot` 및 `DeleteSnapshot` 을 트리거(trigger)한다. +* CSI 드라이버에서의 볼륨 스냅샷 기능 유무는 확실하지 않다. 볼륨 스냅샷 서포트를 제공하는 CSI 드라이버는 csi-snapshotter를 사용할 가능성이 높다. 자세한 사항은 [CSI 드라이버 문서](https://kubernetes-csi.github.io/docs/)를 확인하면 된다. +* CRDs 및 스냅샷 컨트롤러는 쿠버네티스 배포 시 설치된다. + +## 볼륨 스냅샷 및 볼륨 스냅샷 컨텐츠의 라이프사이클 + +`VolumeSnapshotContents` 은 클러스터 리소스이다. `VolumeSnapshots` 은 이러한 리소스의 요청이다. `VolumeSnapshotContents` 과 `VolumeSnapshots`의 상호 작용은 다음과 같은 라이프사이클을 따른다. + +### 프로비저닝 볼륨 스냅샷 + +스냅샷을 프로비저닝할 수 있는 방법에는 사전 프로비저닝 혹은 동적 프로비저닝의 두 가지가 있다: . + +#### 사전 프로비전 {#static} +클러스터 관리자는 많은 `VolumeSnapshotContents` 을 생성한다. 그들은 클러스터 사용자들이 사용 가능한 스토리지 시스템의 실제 볼륨 스냅샷 세부 정보를 제공한다. +이것은 쿠버네티스 API에 있고 사용 가능하다. + +#### 동적 +사전 프로비저닝을 사용하는 대신 퍼시스턴트볼륨클레임에서 스냅샷을 동적으로 가져오도록 요청할 수 있다. [볼륨스냅샷클래스](/docs/concepts/storage/volume-snapshot-classes/)는 스냅샷 사용 시 스토리지 제공자의 특정 파라미터를 명세한다. + +### 바인딩 + +스냅샷 컨트롤러는 사전 프로비저닝과 동적 프로비저닝된 시나리오에서 `VolumeSnapshot` 오브젝트와 적절한 `VolumeSnapshotContent` 오브젝트와의 바인딩을 처리한다. 바인딩은 1:1 매핑이다. + +사전 프로비저닝된 경우, 볼륨스냅샷은 볼륨스냅샷컨텐츠 오브젝트 생성이 요청될 때까지 바인드되지 않은 상태로 유지된다. + +### 스냅샷 소스 보호로서의 퍼시스턴트 볼륨 클레임 + +이 보호의 목적은 스냅샷이 생성되는 동안 사용 중인 퍼시스턴트볼륨클레임 API 오브젝트가 시스템에서 지워지지 않게 하는 것이다(데이터 손실이 발생할 수 있기 때문에). + +퍼시스턴트볼륨클레임이 스냅샷을 생성할 동안에는 해당 퍼시스턴트볼륨클레임은 사용중인 상태이다. 스냅샷 소스로 사용 중인 퍼시스턴트볼륨클레임 API 객체를 삭제한다면, 퍼시스턴트볼륨클레임 객체는 즉시 삭제되지 않는다. 대신, 퍼시스턴트볼륨클레임 객체 삭제는 스냅샷이 준비(readyTouse) 혹은 중단(aborted) 상태가 될 때까지 연기된다. + +### 삭제 + +삭제는 `VolumeSnapshot` 를 삭제 시 트리거로 `DeletionPolicy` 가 실행된다. `DeletionPolicy` 가 `Delete` 라면, 기본 스토리지 스냅샷이 `VolumeSnapshotContent` 오브젝트와 함께 삭제될 것이다. `DeletionPolicy` 이 `Retain` 이라면, 기본 스트리지 스냅샷과 `VolumeSnapshotContent` 둘 다 유지된다. + +## 볼륨 스냅샷 + +각각의 볼륨 스냅샷은 스펙과 상태를 포함한다. 
+ +```yaml +apiVersion: snapshot.storage.k8s.io/v1beta1 +kind: VolumeSnapshot +metadata: + name: new-snapshot-test +spec: + volumeSnapshotClassName: csi-hostpath-snapclass + source: + persistentVolumeClaimName: pvc-test +``` + +`persistentVolumeClaimName` 은 스냅샷을 위한 퍼시스턴트볼륨클레임 데이터 소스의 이름이다. 이 필드는 동적 프로비저닝 스냅샷이 필요하다. + +볼륨 스냅샷은 `volumeSnapshotClassName` 속성을 사용하여 +[볼륨스냅샷클래스](/docs/concepts/storage/volume-snapshot-classes/)의 이름을 지정하여 +특정 클래스를 요청할 수 있다. 아무것도 설정하지 않으면, 사용 가능한 경우 기본 클래스가 사용될 것이다. + +사전 프로비저닝된 스냅샷의 경우, 다음 예와 같이 `volumeSnapshotContentName`을 스냅샷 소스로 지정해야 한다. 사전 프로비저닝된 스냅샷에는 `volumeSnapshotContentName` 소스 필드가 필요하다. + +``` +apiVersion: snapshot.storage.k8s.io/v1beta1 +kind: VolumeSnapshot +metadata: + name: test-snapshot +spec: + source: + volumeSnapshotContentName: test-content +``` + +## 볼륨 스냅샷 컨텐츠 + +각각의 볼륨스냅샷컨텐츠는 스펙과 상태를 포함한다. 동적 프로비저닝에서는, 스냅샷 공통 컨트롤러는 `VolumeSnapshotContent` 오브젝트를 생성한다. 예시는 다음과 같다. + +```yaml +apiVersion: snapshot.storage.k8s.io/v1beta1 +kind: VolumeSnapshotContent +metadata: + name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455 +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002 + volumeSnapshotClassName: csi-hostpath-snapclass + volumeSnapshotRef: + name: new-snapshot-test + namespace: default + uid: 72d9a349-aacd-42d2-a240-d775650d2455 +``` + +`volumeHandle` 은 스토리지 백엔드에서 생성되고 볼륨 생성 중에 CSI 드라이버가 반환하는 볼륨의 고유 식별자이다. 이 필드는 스냅샷을 동적 프로비저닝하는 데 필요하다. 이것은 스냅샷의 볼륨 소스를 지정한다. + +사전 프로비저닝된 스냅샷의 경우, (클러스터 관리자로서) 다음과 같이 `VolumeSnapshotContent` 오브젝트를 작성해야 한다. + +```yaml +apiVersion: snapshot.storage.k8s.io/v1beta1 +kind: VolumeSnapshotContent +metadata: + name: new-snapshot-content-test +spec: + deletionPolicy: Delete + driver: hostpath.csi.k8s.io + source: + snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 + volumeSnapshotRef: + name: new-snapshot-test + namespace: default +``` + +`snapshotHandle` 은 스토리지 백엔드에서 생성된 볼륨 스냅샷의 고유 식별자이다. 이 필드는 사전 프로비저닝된 스냅샷에 필요하다. `VolumeSnapshotContent` 가 나타내는 스토리지 시스템의 CSI 스냅샷 id를 지정한다. + +## 스냅샷을 위한 프로비저닝 볼륨 + +`PersistentVolumeClaim` 오브젝트의 *dataSource* 필드를 사용하여 +스냅샷 데이터로 미리 채워진 새 볼륨을 프로비저닝할 수 있다. + +보다 자세한 사항은 +[볼륨 스냅샷 및 스냅샷에서 볼륨 복원](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)에서 확인할 수 있다. + +{{% /capture %}} diff --git a/content/ko/docs/concepts/storage/volumes.md b/content/ko/docs/concepts/storage/volumes.md index 95395f7587250..0ad7755031ef2 100644 --- a/content/ko/docs/concepts/storage/volumes.md +++ b/content/ko/docs/concepts/storage/volumes.md @@ -744,11 +744,11 @@ spec: ### persistentVolumeClaim {#persistentvolumeclaim} `persistentVolumeClaim` 볼륨은 -[퍼시스턴트볼륨](/docs/concepts/storage/persistent-volumes/)을 파드에 마운트하는데 사용한다. 퍼시스턴트볼륨은 +[퍼시스턴트볼륨](/ko/docs/concepts/storage/persistent-volumes)을 파드에 마운트하는데 사용한다. 퍼시스턴트볼륨은 사용자가 특정 클라우드 환경의 세부 내용을 몰라도 내구성이있는 스토리지 (GCE 퍼시스턴트디스크 또는 iSCSI 볼륨와 같은)를 "클레임" 할 수 있는 방법이다. -더 자세한 내용은 [퍼시스턴트볼륨 예시](/docs/concepts/storage/persistent-volumes/)를 +더 자세한 내용은 [퍼시스턴트볼륨 예시](/ko/docs/concepts/storage/persistent-volumes)를 본다. ### projected {#projected} @@ -974,7 +974,7 @@ ScaleIO는 기존 하드웨어를 사용해서 확장 가능한 공유 블럭 생성할 수 있는 소프트웨어 기반 스포티리 플랫폼이다. `scaleIO` 볼륨 플러그인을 사용하면 배포된 파드가 기존 ScaleIO에 접근할 수 있다(또는 퍼시스턴트 볼륨 클래임을 위한 새 볼륨을 동적 프로비전할 수 있음, -[ScaleIO 퍼시스턴트 볼륨](/docs/concepts/storage/persistent-volumes/#scaleio)을 본다). +[ScaleIO 퍼시스턴트 볼륨](/ko/docs/concepts/storage/persistent-volumes/#scaleio)을 본다). 
{{< caution >}} 사용하기 위해선 먼저 기존에 ScaleIO 클러스터를 먼저 설정하고 @@ -1327,19 +1327,13 @@ CSI 호환 볼륨 드라이버가 쿠버네티스 클러스터에 배포되면 #### CSI 원시(raw) 블록 볼륨 지원 -{{< feature-state for_k8s_version="v1.14" state="beta" >}} +{{< feature-state for_k8s_version="v1.18" state="stable" >}} -1.11 버전부터 CSI는 이전 버전의 쿠버네티스에서 도입된 원시 -블록 볼륨 기능에 의존하는 원시 블록 볼륨에 대한 지원을 -도입했다. 이 기능을 사용하면 외부 CSI 드라이버가 있는 벤더들이 쿠버네티스 -워크로드에서 원시 블록 볼륨 지원을 구현할 수 있다. +외부 CSI 드라이버가 있는 벤더들은 쿠버네티스 워크로드에서 원시(raw) 블록 볼륨 +지원을 구현할 수 있다. -CSI 블록 볼륨은 기능 게이트로 지원하지만, 기본적으로 활성화되어있다. 이 -기능을 위해 활성화 되어야하는 두개의 기능 게이트는 `BlockVolume` 과 -`CSIBlockVolume` 이다. - -[원시 블록 볼륨 지원으로 PV/PVC 설정](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support) -방법을 알아본다. +CSI 설정 변경 없이 평소와 같이 +[원시 블록 볼륨 지원으로 PV/PVC 설정](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)을 할 수 있다. #### CSI 임시(ephemeral) 볼륨 diff --git a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md index 4bbcf22ebb11d..b72607d41a984 100644 --- a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md @@ -14,12 +14,7 @@ _크론 잡은_ 시간 기반의 일정에 따라 [잡](/ko/docs/concepts/worklo {{< caution >}} -모든 **크론잡** `일정:` 시간은 {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} -의 시간대를 기준으로 한다. - -컨트롤 플레인이 파드 또는 베어 컨테이너에서 kube-controller-manager를 -실행하는 경우 kube-controller-manager 컨테이너의 설정된 시간대는 크론 잡 컨트롤러가 -사용하는 시간대로 설정한다. +모든 **크론잡** `일정:` 시간은 UTC로 표시된다. {{< /caution >}} 크론잡 리소스에 대한 매니페스트를 생성할때에는 제공하는 이름이 diff --git a/content/ko/docs/concepts/workloads/controllers/deployment.md b/content/ko/docs/concepts/workloads/controllers/deployment.md index 2d9097c7183f0..5afb68b9bc2b6 100644 --- a/content/ko/docs/concepts/workloads/controllers/deployment.md +++ b/content/ko/docs/concepts/workloads/controllers/deployment.md @@ -61,7 +61,7 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 * `template` 필드에는 다음 하위 필드가 포함되어있다. * 파드는 `labels` 필드를 사용해서 `app: nginx` 이라는 레이블을 붙인다. * 파드 템플릿의 사양 또는 `.template.spec` 필드는 - 파드가 [도커 허브](https://hub.docker.com/)의 `nginx` 1.7.9 버전 이미지를 실행하는 + 파드가 [도커 허브](https://hub.docker.com/)의 `nginx` 1.14.2 버전 이미지를 실행하는 `nginx` 컨테이너 1개를 실행하는 것을 나타낸다. * 컨테이너 1개를 생성하고, `name` 필드를 사용해서 `nginx` 이름을 붙인다. @@ -151,15 +151,15 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 다음 단계에 따라 디플로이먼트를 업데이트한다. -1. `nginx:1.7.9` 이미지 대신 `nginx:1.9.1` 이미지를 사용하도록 nginx 파드를 업데이트 한다. +1. `nginx:1.14.2` 이미지 대신 `nginx:1.16.1` 이미지를 사용하도록 nginx 파드를 업데이트 한다. ```shell - kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 ``` 또는 간단하게 다음의 명령어를 사용한다. ```shell - kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record + kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record ``` 이와 유사하게 출력된다. @@ -167,7 +167,7 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 deployment.apps/nginx-deployment image updated ``` - 대안으로 디플로이먼트를 `edit` 해서 `.spec.template.spec.containers[0].image` 를 `nginx:1.7.9` 에서 `nginx:1.9.1` 로 변경한다. + 대안으로 디플로이먼트를 `edit` 해서 `.spec.template.spec.containers[0].image` 를 `nginx:1.14.2` 에서 `nginx:1.16.1` 로 변경한다. 
```shell kubectl edit deployment.v1.apps/nginx-deployment @@ -263,7 +263,7 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 Labels: app=nginx Containers: nginx: - Image: nginx:1.9.1 + Image: nginx:1.16.1 Port: 80/TCP Environment: Mounts: @@ -304,11 +304,11 @@ _디플로이먼트_ 는 [파드](/ko/docs/concepts/workloads/pods/pod/)와 스케일 업하기 시작한다. 그리고 이전에 스케일 업 하던 레플리카셋에 롤오버 한다. --이것은 기존 레플리카셋 목록에 추가하고 스케일 다운을 할 것이다. -예를 들어 디플로이먼트로 `nginx:1.7.9` 레플리카를 5개 생성을 한다. -하지만 `nginx:1.7.9` 레플리카 3개가 생성되었을 때 디플로이먼트를 업데이트해서 `nginx:1.9.1` +예를 들어 디플로이먼트로 `nginx:1.14.2` 레플리카를 5개 생성을 한다. +하지만 `nginx:1.14.2` 레플리카 3개가 생성되었을 때 디플로이먼트를 업데이트해서 `nginx:1.16.1` 레플리카 5개를 생성성하도록 업데이트를 한다고 가정한다. 이 경우 디플로이먼트는 즉시 생성된 3개의 -`nginx:1.7.9` 파드 3개를 죽이기 시작하고 `nginx:1.9.1` 파드를 생성하기 시작한다. -이것은 과정이 변경되기 전 `nginx:1.7.9` 레플리카 5개가 +`nginx:1.14.2` 파드 3개를 죽이기 시작하고 `nginx:1.16.1` 파드를 생성하기 시작한다. +이것은 과정이 변경되기 전 `nginx:1.14.2` 레플리카 5개가 생성되는 것을 기다리지 않는다. ### 레이블 셀렉터 업데이트 @@ -345,10 +345,10 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 롤백된다는 것을 의미한다. {{< /note >}} -* 디플로이먼트를 업데이트하는 동안 이미지 이름을 `nginx:1.9.1` 이 아닌 `nginx:1.91` 로 입력해서 오타를 냈다고 가정한다. +* 디플로이먼트를 업데이트하는 동안 이미지 이름을 `nginx:1.16.1` 이 아닌 `nginx:1.161` 로 입력해서 오타를 냈다고 가정한다. ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true ``` 이와 유사하게 출력된다. @@ -425,7 +425,7 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 Labels: app=nginx Containers: nginx: - Image: nginx:1.91 + Image: nginx:1.161 Port: 80/TCP Host Port: 0/TCP Environment: @@ -466,13 +466,13 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 deployments "nginx-deployment" REVISION CHANGE-CAUSE 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true - 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true - 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true + 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true ``` `CHANGE-CAUSE` 는 수정 생성시 디플로이먼트 주석인 `kubernetes.io/change-cause` 에서 복사한다. 다음에 대해 `CHANGE-CAUSE` 메시지를 지정할 수 있다. - * 디플로이먼트에 `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"` 로 주석을 단다. + * 디플로이먼트에 `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"` 로 주석을 단다. * `kubectl` 명령어 이용시 `--record` 플래그를 추가해서 리소스 변경을 저장한다. * 수동으로 리소스 매니페스트 편집. 
@@ -486,10 +486,10 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 deployments "nginx-deployment" revision 2 Labels: app=nginx pod-template-hash=1159050644 - Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true Containers: nginx: - Image: nginx:1.9.1 + Image: nginx:1.16.1 Port: 80/TCP QoS Tier: cpu: BestEffort @@ -547,7 +547,7 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 Labels: app=nginx Annotations: deployment.kubernetes.io/revision=4 - kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true Selector: app=nginx Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable StrategyType: RollingUpdate @@ -557,7 +557,7 @@ API 버전 `apps/v1` 에서 디플로이먼트의 레이블 셀렉터는 생성 Labels: app=nginx Containers: nginx: - Image: nginx:1.9.1 + Image: nginx:1.16.1 Port: 80/TCP Host Port: 0/TCP Environment: @@ -720,7 +720,7 @@ nginx-deployment-618515232 11 11 11 7m * 그런 다음 디플로이먼트의 이미지를 업데이트 한다. ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 ``` 이와 유사하게 출력된다. @@ -1075,7 +1075,7 @@ API 버전 `apps/v1` 에서는 `.spec.selector` 와 `.metadata.labels` 이 설 #### 디플로이먼트 롤링 업데이트 -디플로이먼트는 `.spec.strategy.type==RollingUpdate` 이면 파드를 [롤링 업데이트](/docs/tasks/run-application/rolling-update-replication-controller/) +디플로이먼트는 `.spec.strategy.type==RollingUpdate` 이면 파드를 롤링 업데이트 방식으로 업데이트 한다. `maxUnavailable` 와 `maxSurge` 를 명시해서 롤링 업데이트 프로세스를 제어할 수 있다. @@ -1141,12 +1141,4 @@ API 버전 `apps/v1` 에서는 `.spec.selector` 와 `.metadata.labels` 이 설 일시 중지된 디플로이먼트는 PodTemplateSpec에 대한 변경 사항이 일시중지 된 경우 새 롤아웃을 트리거 하지 않는다. 디플로이먼트는 생성시 기본적으로 일시 중지되지 않는다. -## 디플로이먼트의 대안 - -### kubectl 롤링 업데이트 - -[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update)도 -비슷한 방식으로 파드와 레플리케이션 컨트롤러를 업데이트한다. 그러나 디플로이먼트는 선언적이고, 서버 측면이며, -롤링 업데이트가 완료된 후에도 이전 수정 버전으로 롤백하는 것과 같은 추가 기능을 가지고 있으므로 권장한다. - {{% /capture %}} diff --git a/content/ko/docs/concepts/workloads/controllers/replicaset.md b/content/ko/docs/concepts/workloads/controllers/replicaset.md index d98fbcd7cff5c..6058fbab1cdf8 100644 --- a/content/ko/docs/concepts/workloads/controllers/replicaset.md +++ b/content/ko/docs/concepts/workloads/controllers/replicaset.md @@ -22,8 +22,8 @@ weight: 10 레플리카셋이 새로운 파드를 생성해야 할 경우, 명시된 파드 템플릿을 사용한다. -레플리카셋과 파드와의 링크는 파드의 [metadata.ownerReferences](/ko/docs/concepts/workloads/controllers/garbage-collection/#소유자-owner-와-종속-dependent) -필드를 통해서 제공되며, 이는 현재 오브젝트가 소유한 리소스를 명시한다. +레플리카셋은 파드의 [metadata.ownerReferences](/ko/docs/concepts/workloads/controllers/garbage-collection/#소유자-owner-와-종속-dependent) +필드를 통해 파드에 연결되며, 이는 현재 오브젝트가 소유한 리소스를 명시한다. 레플리카셋이 가지고 있는 모든 파드의 ownerReferences 필드는 해당 파드를 소유한 레플리카셋을 식별하기 위한 소유자 정보를 가진다. 이 링크를 통해 레플리카셋은 자신이 유지하는 파드의 상태를 확인하고 이에 따라 관리 한다. 
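예를 들어 레플리카셋이 생성한 파드를 조회하면 `metadata.ownerReferences` 필드는 대략 다음과 같은 형태로 나타난다. 아래의 파드 이름, 레플리카셋 이름, UID 값은 설명을 위해 가정한 값이다.

```yaml
# kubectl get pods <파드-이름> -o yaml 출력 중 관련 부분만 요약한 모습
apiVersion: v1
kind: Pod
metadata:
  name: frontend-b2zdv
  labels:
    tier: frontend
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend                              # 이 파드를 소유한 레플리카셋
    uid: 6e8cf1d0-5f2a-4d8a-9f3b-2b1a7c1d9e10   # 소유자 레플리카셋의 UID
    controller: true                            # 이 소유자가 관리 컨트롤러임을 나타낸다
    blockOwnerDeletion: true
```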
diff --git a/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md b/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md index 1b80519f02fe6..4551dc99b427e 100644 --- a/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md @@ -217,9 +217,6 @@ REST API나 go 클라이언트 라이브러리를 사용하는 경우 간단히 두 레플리케이션 컨트롤러는 일반적으로 롤링 업데이트를 동기화 하는 이미지 업데이트이기 때문에 파드의 기본 컨테이너 이미지 태그와 같이 적어도 하나의 차별화된 레이블로 파드를 생성해야 한다. -롤링 업데이트는 클라이언트 툴에서 [`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) 로 수행된다. -좀 더 상세한 예제는 [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) 를 방문하라. - ### 다수의 릴리스 트랙 롤링 업데이트가 진행되는 동안 다수의 애플리케이션 릴리스를 실행하는 것 외에도 다수의 릴리스 트랙을 사용하여 장기간에 걸쳐 또는 연속적으로 실행하는 것이 일반적이다. 트랙은 레이블 별로 구분된다. @@ -243,7 +240,7 @@ REST API나 go 클라이언트 라이브러리를 사용하는 경우 간단히 레플리케이션 컨트롤러는 이 좁은 책임에 영원히 제약을 받는다. 그 자체로는 준비성 또는 활성 프로브를 실행하지 않을 것이다. 오토 스케일링을 수행하는 대신, 외부 오토 스케일러 ([#492](http://issue.k8s.io/492)에서 논의된) 가 레플리케이션 컨트롤러의 `replicas` 필드를 변경함으로써 제어되도록 의도되었다. 레플리케이션 컨트롤러에 스케줄링 정책 (예를 들어 [spreading](http://issue.k8s.io/367#issuecomment-48428019)) 을 추가하지 않을 것이다. 오토사이징 및 기타 자동화 된 프로세스를 방해할 수 있으므로 제어된 파드가 현재 지정된 템플릿과 일치하는지 확인해야 한다. 마찬가지로 기한 완료, 순서 종속성, 구성 확장 및 기타 기능은 다른 곳에 속한다. 대량의 파드 생성 메커니즘 ([#170](http://issue.k8s.io/170)) 까지도 고려해야 한다. -레플리케이션 컨트롤러는 조합 가능한 빌딩-블록 프리미티브가 되도록 고안되었다. 향후 사용자의 편의를 위해 더 상위 수준의 API 및/또는 도구와 그리고 다른 보완적인 기본 요소가 그 위에 구축 될 것으로 기대한다. 현재 kubectl이 지원하는 "매크로" 작업 (실행, 스케일, 롤링 업데이트)은 개념 증명의 예시이다. 예를 들어 [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) 와 같이 레플리케이션 컨트롤러, 오토 스케일러, 서비스, 정책 스케줄링, 카나리 등을 관리할 수 있다. +레플리케이션 컨트롤러는 조합 가능한 빌딩-블록 프리미티브가 되도록 고안되었다. 향후 사용자의 편의를 위해 더 상위 수준의 API 및/또는 도구와 그리고 다른 보완적인 기본 요소가 그 위에 구축 될 것으로 기대한다. 현재 kubectl이 지원하는 "매크로" 작업 (실행, 스케일)은 개념 증명의 예시이다. 예를 들어 [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) 와 같이 레플리케이션 컨트롤러, 오토 스케일러, 서비스, 정책 스케줄링, 카나리 등을 관리할 수 있다. ## API 오브젝트 @@ -263,9 +260,7 @@ API 오브젝트에 대한 더 자세한 것은 ### 디플로이먼트 (권장되는) -[`디플로이먼트`](/ko/docs/concepts/workloads/controllers/deployment/) 는 `kubectl rolling-update` 와 비슷한 방식으로 기본 레플리카셋과 그 파드를 업데이트하는 상위 수준의 API 오브젝트이다. -`kubectl rolling-update` 와는 다르게 선언적이며, 서버 사이드이고, -추가 기능이 있기 때문에 롤링 업데이트 기능을 원한다면 디플로이먼트를 권장한다. +[`디플로이먼트`](/ko/docs/concepts/workloads/controllers/deployment/) 는 기본 레플리카셋과 그 파드를 업데이트하는 상위 수준의 API 오브젝트이다. 선언적이며, 서버 사이드이고, 추가 기능이 있기 때문에 롤링 업데이트 기능을 원한다면 디플로이먼트를 권장한다. ### 베어 파드 diff --git a/content/ko/docs/concepts/workloads/controllers/statefulset.md b/content/ko/docs/concepts/workloads/controllers/statefulset.md index 88b975ec6e540..e3739842194bb 100644 --- a/content/ko/docs/concepts/workloads/controllers/statefulset.md +++ b/content/ko/docs/concepts/workloads/controllers/statefulset.md @@ -99,7 +99,7 @@ spec: * 이름이 nginx라는 헤드리스 서비스는 네트워크 도메인을 컨트롤하는데 사용 한다. * 이름이 web인 스테이트풀셋은 3개의 nginx 컨테이너의 레플리카가 고유의 파드에서 구동될 것이라 지시하는 Spec을 갖는다. -* volumeClaimTemplates은 퍼시스턴트 볼륨 프로비저너에서 프로비전한 [퍼시스턴트 볼륨](/docs/concepts/storage/persistent-volumes/)을 사용해서 안정적인 스토리지를 제공한다. +* volumeClaimTemplates은 퍼시스턴트 볼륨 프로비저너에서 프로비전한 [퍼시스턴트 볼륨](/ko/docs/concepts/storage/persistent-volumes/)을 사용해서 안정적인 스토리지를 제공한다. 스테이트풀셋 오브젝트의 이름은 유효한 [DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. 
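`volumeClaimTemplates` 가 각 파드에 전용 스토리지를 어떻게 연결하는지 보여주기 위해, 위 설명을 간략하게 요약한 스테이트풀셋 스케치를 아래에 적는다. 컨테이너 이미지는 설명을 위한 가정이며, 스토리지 클래스 이름(`my-storage-class`)과 용량(1Gi)은 본문 설명을 따랐다.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"          # 네트워크 도메인을 제공하는 헤드리스 서비스 이름
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www                 # 각 파드는 www-web-0, www-web-1 처럼 고유한 PVC를 받는다
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
```

각 레플리카는 자신의 이름과 결합된 퍼시스턴트볼륨클레임을 받으므로, 파드가 재스케줄되어도 같은 볼륨에 다시 연결된다.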
@@ -153,7 +153,7 @@ N개의 레플리카가 있는 스테이트풀셋은 스테이트풀셋에 있 ### 안정된 스토리지 -쿠버네티스는 각 VolumeClaimTemplate마다 하나의 [퍼시스턴트 볼륨](/docs/concepts/storage/persistent-volumes/)을 +쿠버네티스는 각 VolumeClaimTemplate마다 하나의 [퍼시스턴트 볼륨](/ko/docs/concepts/storage/persistent-volumes/)을 생성한다. 위의 nginx 예시에서 각 파드는 `my-storage-class` 라는 스토리지 클래스와 1 Gib의 프로비전된 스토리지를 가지는 단일 퍼시스턴트 볼륨을 받게된다. 만약 스토리지 클래스가 명시되지 않은 경우 기본 스토리지 클래스를 사용된다. 파드가 노드에서 스케줄 혹은 재스케줄이되면 diff --git a/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md b/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md index c394c58ab74d1..dd061e513016b 100644 --- a/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md +++ b/content/ko/docs/concepts/workloads/pods/ephemeral-containers.md @@ -15,10 +15,8 @@ weight: 80 {{< warning >}} 임시 컨테이너는 초기 알파 상태이며, 프로덕션 클러스터에는 -적합하지 않다. 사용자는 컨테이너 네임스페이스를 대상으로 하는 경우와 -같은 어떤 상황에서 기능이 작동하지 않을 것으로 예상해야 한다. [쿠버네티스 -사용중단(deprecation) 정책](/docs/reference/using-api/deprecation-policy/)에 따라 이 알파 -기능은 향후 크게 변경되거나, 완전히 제거될 수 있다. +적합하지 않다. [쿠버네티스 사용중단(deprecation) 정책](/docs/reference/using-api/deprecation-policy/)에 따라 +이 알파 기능은 향후 크게 변경되거나, 완전히 제거될 수 있다. {{< /warning >}} {{% /capture %}} @@ -75,7 +73,11 @@ API에서 특별한 `ephemeralcontainers` 핸들러를 사용해서 만들어지 공유](/docs/tasks/configure-pod-container/share-process-namespace/)를 활성화하면 다른 컨테이너 안의 프로세스를 보는데 도움이 된다. -### 예시 +임시 컨테이너를 사용해서 문제를 해결하는 예시는 +[임시 디버깅 컨테이너로 디버깅하기] +(/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)를 참조한다. + +## 임시 컨테이너 API {{< note >}} 이 섹션의 예시는 `EphemeralContainers` [기능 @@ -84,8 +86,9 @@ API에서 특별한 `ephemeralcontainers` 핸들러를 사용해서 만들어지 {{< /note >}} 이 섹션의 에시는 임시 컨테이너가 어떻게 API에 나타나는지 -보여준다. 사용자는 일반적으로 자동화하는 단계의 문제 해결을 위해 `kubectl` -플러그인을 사용했을 것이다. +보여준다. 일반적으로 `kubectl alpha debug` 또는 +다른 `kubectl` [플러그인](/docs/tasks/extend-kubectl/kubectl-plugins/)을 +사용해서 API를 직접 호출하지 않고 이런 단계들을 자동화 한다. 임시 컨테이너는 파드의 `ephemeralcontainers` 하위 리소스를 사용해서 생성되며, `kubectl --raw` 를 사용해서 보여준다. 먼저 @@ -177,35 +180,12 @@ Ephemeral Containers: ... ``` -사용자는 `kubectl attach` 를 사용해서 새로운 임시 컨테이너에 붙을 수 있다. +예시와 같이 `kubectl attach`, `kubectl exec`, 그리고 `kubectl logs` 를 사용해서 +다른 컨테이너와 같은 방식으로 새로운 임시 컨테이너와 +상호작용할 수 있다. ```shell kubectl attach -it example-pod -c debugger ``` -만약 프로세스 네임스페이스를 공유를 활성화하면, 사용자는 해당 파드 안의 모든 컨테이너의 프로세스를 볼 수 있다. -예를 들어, 임시 컨테이너에 붙은 이후에 디버거 컨테이너에서 `ps` 를 실행한다. - -```shell -# "디버거" 임시 컨테이너 내부 쉘에서 이것을 실행한다. -ps auxww -``` -다음과 유사하게 출력된다. -``` -PID USER TIME COMMAND - 1 root 0:00 /pause - 6 root 0:00 nginx: master process nginx -g daemon off; - 11 101 0:00 nginx: worker process - 12 101 0:00 nginx: worker process - 13 101 0:00 nginx: worker process - 14 101 0:00 nginx: worker process - 15 101 0:00 nginx: worker process - 16 101 0:00 nginx: worker process - 17 101 0:00 nginx: worker process - 18 101 0:00 nginx: worker process - 19 root 0:00 /pause - 24 root 0:00 sh - 29 root 0:00 ps auxww -``` - {{% /capture %}} diff --git a/content/ko/docs/concepts/workloads/pods/init-containers.md b/content/ko/docs/concepts/workloads/pods/init-containers.md index 9e90cb0d77d2b..d4b73562b648f 100644 --- a/content/ko/docs/concepts/workloads/pods/init-containers.md +++ b/content/ko/docs/concepts/workloads/pods/init-containers.md @@ -243,8 +243,11 @@ myapp-pod 1/1 Running 0 9m ## 자세한 동작 -파드 시동 시, 네트워크와 볼륨이 초기화되고 나면, 초기화 컨테이너가 -순서대로 시작된다. 각 초기화 컨테이너는 다음 컨테이너가 시작되기 전에 성공적으로 +파드 시작 시에 kubelet은 네트워크와 스토리지가 준비될 때까지 +초기화 컨테이너의 실행을 지연시킨다. 그런 다음 kubelet은 파드 사양에 +나와있는 순서대로 파드의 초기화 컨테이너를 실행한다. + +각 초기화 컨테이너는 다음 컨테이너가 시작되기 전에 성공적으로 종료되어야 한다. 
만약 런타임 문제나 실패 상태로 종료되는 문제로인하여 초기화 컨테이너의 시작이 실패된다면, 초기화 컨테이너는 파드의 `restartPolicy`에 따라서 재시도 된다. 다만, 파드의 `restartPolicy`이 항상(Always)으로 설정된 경우, 해당 초기화 컨테이너는 diff --git a/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 8ebdab23455bf..7bde7139a93ef 100644 --- a/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -6,7 +6,7 @@ weight: 50 {{% capture overview %}} -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.18" state="beta" >}} 사용자는 _토폴로지 분배 제약 조건_ 을 사용해서 지역, 영역, 노드 그리고 기타 사용자-정의 토폴로지 도메인과 같이 장애-도메인으로 설정된 클러스터에 걸쳐 파드가 분산되는 방식을 제어할 수 있다. 이를 통해 고가용성뿐만 아니라, 효율적인 리소스 활용의 목적을 이루는 데 도움이 된다. @@ -18,11 +18,10 @@ weight: 50 ### 기능 게이트 활성화 -`EvenPodsSpread` 기능 게이트의 활성화가 되었는지 확인한다(기본적으로 1.16에서는 -비활성화되어있다). 기능 게이트의 활성화에 대한 설명은 [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/) 를 참조한다. {{< glossary_tooltip text="API 서버" term_id="kube-apiserver" >}} **와** {{< glossary_tooltip text="스케줄러" term_id="kube-scheduler" >}}에 -대해 `EvenPodsSpread` 기능 게이트가 활성화되어야 한다. +대해 `EvenPodsSpread` +[기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)가 활성화되어야 한다. ### 노드 레이블 @@ -184,6 +183,46 @@ spec: {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} +### 클러스터 수준의 기본 제약 조건 + +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + +클러스터에 대한 기본 토폴로지 분배 제약 조건을 설정할 수 있다. 기본 +토폴로지 분배 제약 조건은 다음과 같은 경우에만 파드에 적용된다. + +- `.spec.topologySpreadConstraints` 에는 어떠한 제약도 정의되어 있지 않는 경우. +- 서비스, 레플리케이션 컨트롤러, 레플리카 셋 또는 스테이트풀 셋에 속해있는 경우. + +기본 제약 조건은 [스케줄링 프로파일](/docs/reference/scheduling/profiles)에서 +`PodTopologySpread` 플러그인의 일부로 설정할 수 있다. +제약 조건은 `labelSelector` 가 비어 있어야 한다는 점을 제외하고, [위와 동일한 API](#api)로 +제약 조건을 지정한다. 셀렉터는 파드가 속한 서비스, 레플리케이션 컨트롤러, +레플리카 셋 또는 스테이트풀 셋에서 계산한다. + +예시 구성은 다음과 같다. + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1alpha2 +kind: KubeSchedulerConfiguration + +profiles: + pluginConfig: + - name: PodTopologySpread + args: + defaultConstraints: + - maxSkew: 1 + topologyKey: failure-domain.beta.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway +``` + +{{< note >}} +기본 스케줄링 제약 조건에 의해 생성된 점수는 +[`DefaultPodTopologySpread` 플러그인](/docs/reference/scheduling/profiles/#scheduling-plugins)에 +의해 생성된 점수와 충돌 할 수 있다. +`PodTopologySpread` 에 대한 기본 제약 조건을 사용할 때 스케줄링 프로파일에서 +이 플러그인을 비활성화 하는 것을 권장한다. +{{< /note >}} + ## 파드어피니티(PodAffinity)/파드안티어피니티(PodAntiAffinity)와의 비교 쿠버네티스에서 "어피니티(Affinity)"와 관련된 지침은 파드가 @@ -201,9 +240,9 @@ spec: ## 알려진 제한사항 -1.16을 기준으로 이 기능은 알파(Alpha)이며, 몇 가지 알려진 제한사항이 있다. +1.18을 기준으로 이 기능은 베타(Beta)이며, 몇 가지 알려진 제한사항이 있다. -- `Deployment` 를 스케일링 다운하면 그 결과로 파드의 분포가 불균형이 될 수 있다. +- 디플로이먼트를 스케일링 다운하면 그 결과로 파드의 분포가 불균형이 될 수 있다. - 파드와 일치하는 테인트(taint)가 된 노드가 존중된다. [이슈 80921](https://github.com/kubernetes/kubernetes/issues/80921)을 본다. {{% /capture %}} diff --git a/content/ko/docs/concepts/workloads/pods/pod.md b/content/ko/docs/concepts/workloads/pods/pod.md index edeb4b3a1f734..b4c7e63fbe687 100644 --- a/content/ko/docs/concepts/workloads/pods/pod.md +++ b/content/ko/docs/concepts/workloads/pods/pod.md @@ -33,7 +33,7 @@ _파드_ 는 (고래 떼(pod of whales)나 콩꼬투리(pea pod)와 마찬가지 그들은 또한 SystemV 세마포어나, POSIX 공유 메모리와 같은 표준 프로세스 간 통신 방식으로 서로 통신할 수 있다. 다른 파드의 컨테이너에는 고유한 IP 주소가 있고, -[특별한 구성](/docs/concepts/policy/pod-security-policy/) 없이는 IPC에 의해서 통신 할 수 없다. 
+[특별한 구성](/ko/docs/concepts/policy/pod-security-policy/) 없이는 IPC에 의해서 통신 할 수 없다. 컨테이너는 주로 서로의 IP 주소를 통해 소통한다. 또한 파드 안의 애플리케이션은 파드의 일부로 정의되어, diff --git a/content/ko/docs/contribute/_index.md b/content/ko/docs/contribute/_index.md index 12d70b2c79e43..18958c7531e23 100644 --- a/content/ko/docs/contribute/_index.md +++ b/content/ko/docs/contribute/_index.md @@ -4,14 +4,24 @@ title: 쿠버네티스 문서에 기여하기 linktitle: 기여 main_menu: true weight: 80 +card: + name: contribute + weight: 10 + title: 기여 시작하기 --- {{% capture overview %}} -쿠버네티스 문서 또는 웹사이트에 기여하여 도움을 제공하고 싶다면, -우리는 당신의 도움을 기쁘게 생각한다! 당신이 새로운 프로젝트에 참여했거나 -오랜 시간 동안 진행해온 누군가로서, -혹은 개발자, 최종 사용자 또는 단지 오타를 보고 참지 못하는 누군가로서 기여할 수 있다. +이 웹사이트는 [쿠버네티스 SIG Docs](/docs/contribute/#get-involved-with-sig-docs)에 의해서 관리됩니다. + +쿠버네티스 문서 기여자들은 + +- 기존 콘텐츠를 개선합니다. +- 새 콘텐츠를 만듭니다. +- 문서를 번역합니다. +- 쿠버네티스 릴리스 주기에 맞추어 문서 부분을 관리하고 발행합니다. + +쿠버네티스 문서는 새롭고 경험이 풍부한 모든 기여자의 개선을 환영합니다! {{% /capture %}} @@ -19,44 +29,49 @@ weight: 80 ## 시작하기 -누구든지 문제에 대한 설명이나, 원하는 문서의 개선사항에 대한 이슈를 오픈 또는 풀 리퀘스트(PR)로 변경하는 기여를 할 수 있다. -일부 작업에는 쿠버네티스 조직에서 더 많은 신뢰와 더 많은 접근이 필요할 수 있다. +누구든지 문서에 대한 이슈를 오픈 또는 풀 리퀘스트(PR)를 사용해서 [`kubernetes/website` GitHub 리포지터리](https://github.com/kubernetes/website)에 변경하는 기여를 할 수 있습니다. 당신이 쿠버네티스 커뮤니티에 효과적으로 기여하려면 [git](https://git-scm.com/)과 [GitHub](https://lab.github.com/)에 익숙해야 합니다. + +문서에 참여하려면 + +1. CNCF [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md)에 서명합니다. +2. [문서 리포지터리](https://github.com/kubernetes/website) 와 웹사이트의 [정적 사이트 생성기](https://gohugo.io)를 숙지합니다. +3. [풀 리퀘스트 열기](/docs/contribute/new-content/open-a-pr/)와 [변경 검토](/docs/contribute/review/reviewing-prs/)의 기본 프로세스를 이해하도록 합니다. + +일부 작업에는 쿠버네티스 조직에서 더 많은 신뢰와 더 많은 접근이 필요할 수 있습니다. 역할과 권한에 대한 자세한 내용은 -[SIG Docs 참여](/ko/docs/contribute/participating/)를 본다. +[SIG Docs 참여](/ko/docs/contribute/participating/)를 봅니다. -쿠버네티스 문서는 GitHub 리포지터리에 있다. 우리는 누구나 -기여를 환경하지만, 쿠버네티스 커뮤니티에서 효과적으로 활동하려면 git과 GitHub의 -기초적인 이해가 필요하다. +## 첫 번째 기여 -문서에 참여하려면 +- [기여 개요](/docs/contribute/new-content/overview/)를 읽고 기여할 수 있는 다양한 방법에 대해 알아봅니다. +- 기존 문서에 대해 [GitHub을 사용해서 풀 리퀘스트 열거나](/docs/contribute/new-content/new-content/#changes-using-github) GitHub에서의 이슈 제기에 대해 자세히 알아봅니다. +- 정확성과 언어에 대해 다른 쿠버네티스 커뮤니티 맴버의 [풀 리퀘스트 검토](/docs/contribute/review/reviewing-prs/)를 합니다. +- 쿠버네티스 [컨텐츠](/docs/contribute/style/content-guide/)와 [스타일 가이드](/docs/contribute/style/style-guide/)를 읽고 정보에 대한 코멘트를 남길 수 있습니다. +- [페이지 템플릿 사용](/docs/contribute/style/page-templates/)과 [휴고(Hugo) 단축코드(shortcodes)](/docs/contribute/style/hugo-shortcodes/)를 사용해서 큰 변경을 하는 방법에 대해 배워봅니다. -1. CNCF [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md)에 서명한다. -2. [문서 리포지터리](https://github.com/kubernetes/website) 와 웹사이트의 [정적 사이트 생성기](https://gohugo.io)를 숙지한다. -3. [콘텐츠 향상](https://kubernetes.io/docs/contribute/start/#improve-existing-content)과 [변경 검토](https://kubernetes.io/docs/contribute/start/#review-docs-pull-requests)의 기본 프로세스를 이해하도록 한다. +## 다음 단계 -## 기여 모범 사례 +- 리포지터리의 [로컬 복제본에서 작업](/docs/contribute/new-content/new-content/#fork-the-repo)하는 방법을 배워봅니다. +- [릴리스된 기능](/docs/contribute/new-content/new-features/)을 문서화 합니다. +- [SIG Docs](/ko/docs/contribute/participating/)에 참여하고, [멤버 또는 검토자](/ko/docs/contribute/participating/#역할과-책임)가 되어봅니다. +- [현지화](/ko/docs/contribute/localization_ko/)를 시작하거나 도와줍니다. -- 명확하고, 의미있는 GIT 커밋 메시지를 작성한다. -- 이슈를 참조하고, PR이 병합될 때 이슈를 자동으로 닫는 _Github 특수 키워드_ 를 포함한다. -- 오타 수정, 스타일 변경 또는 문법 변경과 같이 변경이 적은 PR을 생성할때, 비교적으로 적은 변화로 많은 커밋 개수를 받지 않도록 반드시 커밋을 스쿼시(squash)한다. 
-- 변경한 코드를 묘사하고, 코드를 변경한 이유를 포함하는 멋진 PR 설명을 포함하고 있는지와 리뷰어를 위한 충분한 정보가 있는지 꼭 확인한다. -- 추가적인 읽을거리들 - - [chris.beams.io/posts/git-commit/](https://chris.beams.io/posts/git-commit/) - - [github.com/blog/1506-closing-issues-via-pull-requests ](https://github.com/blog/1506-closing-issues-via-pull-requests ) - - [davidwalsh.name/squash-commits-git ](https://davidwalsh.name/squash-commits-git ) +## SIG Docs에 참여 -## 다른 방법으로 기여하기 +[SIG Docs](/ko/docs/contribute/participating/)는 쿠버네티스 문서와 웹 사이트를 게시하고 관리하는 기여자 그룹입니다. SIG Docs에 참여하는 것은 쿠버네티스 기여자(기능 개발 및 다른 여러가지)가 쿠버네티스 프로젝트에 가장 큰 영향을 미칠 수 있는 좋은 방법입니다. -- 트위터나 스택오버플로(Stack Overflow) 등의 온라인 포럼의 쿠버네티스 커뮤니티에 기여하거나 지역 모임과 쿠버네티스 이벤트에 관하여 알고 싶다면 [쿠버네티스 커뮤니티 사이트](/community/)를 확인한다. -- 기능 개발에 기여하려면 [기여자 치트시트](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet)를 읽고 시작한다. +SIG Docs는 여러가지 방법으로 의견을 나누고 있습니다. -{{% /capture %}} +- [쿠버네티스 슬랙 인스턴스에서 `#sig-docs` 에 가입](http://slack.k8s.io/)을 하고, + 자신을 소개하세요! +- 더 광범위한 토론이 이루어지고 공식적인 결정이 기록이 되는 + [`kubernetes-sig-docs` 메일링 리스트에 가입](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) 하세요. +- [주간 SIG Docs 화상 회의](https://github.com/kubernetes/community/tree/master/sig-docs)에 참여하세요. 회의는 항상 `#sig-docs` 에 발표되며 [쿠버네티스 커뮤니티 회의 일정](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles)에 추가됩니다. [줌(Zoon) 클라이언트](https://zoom.us/download)를 다운로드 하거나 전화를 이용하여 전화 접속해야 합니다. -{{% capture whatsnext %}} +## 다른 기여 방법들 -- 문서에 기여하는 기본적인 사항들에 대한 자세한 내용은 [기여 시작](/docs/contribute/start/)을 본다. -- 변경을 제안할 때는 [쿠버네티스 문서 스타일가이드](/docs/contribute/style/style-guide/)를 따른다. -- SIG Docs에 대한 더 자세한 정보는 [SIG Docs에 참여하기](/ko/docs/contribute/participating/)를 본다. -- 쿠버네티스 문서 현지화에 대한 자세한 내용은 [쿠버네티스 문서 현지화](/docs/contribute/localization/)를 본다. +- [쿠버네티스 커뮤니티 사이트](/community/)를 방문하십시오. 트위터 또는 스택 오버플로우에 참여하고, 현지 쿠버네티스 모임과 이벤트 등에 대해 알아봅니다. +- [기여자 치트시트](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet)를 읽고 쿠버네티스 기능 개발에 참여합니다. +- [블로그 게시물 또는 사례 연구](/docs/contribute/new-content/blogs-case-studies/)를 제출합니다. {{% /capture %}} diff --git a/content/ko/docs/contribute/participating.md b/content/ko/docs/contribute/participating.md index 5334d859cecd2..401c91844e875 100644 --- a/content/ko/docs/contribute/participating.md +++ b/content/ko/docs/contribute/participating.md @@ -1,9 +1,10 @@ --- title: SIG Docs에 참여하기 content_template: templates/concept +weight: 60 card: name: contribute - weight: 40 + weight: 60 --- {{% capture overview %}} @@ -35,7 +36,7 @@ SIG Docs는 모든 컨트리뷰터의 콘텐츠와 리뷰를 환영한다. ## 역할과 책임 -- **모든 사람** 은 쿠버네티스 문서에 기여할 수 있다. 기여시 [CLA에 서명](/docs/contribute/start#sign-the-cla)하고 GitHub 계정을 가지고 있어야 한다. +- **모든 사람** 은 쿠버네티스 문서에 기여할 수 있다. 기여시 [CLA에 서명](/docs/contribute/new-content/overview/#sign-the-cla))하고 GitHub 계정을 가지고 있어야 한다. - 쿠버네티스 조직의 **멤버** 는 쿠버네티스 프로젝트에 시간과 노력을 투자한 기여자이다. 일반적으로 승인되는 변경이 되는 풀 리퀘스트를 연다. 멤버십 기준은 [커뮤니티 멤버십](https://github.com/kubernetes/community/blob/master/community-membership.md)을 참조한다. - SIG Docs의 **리뷰어** 는 쿠버네티스 조직의 일원으로 문서 풀 리퀘스트에 관심을 표명했고, SIG Docs 승인자에 @@ -61,7 +62,7 @@ SIG Docs는 모든 컨트리뷰터의 콘텐츠와 리뷰를 환영한다. 만약 쿠버네티스 조직의 멤버가 아니라면, `/lgtm` 을 사용하는 것은 자동화된 시스템에 아무런 영향을 주지 않는다. {{< /note >}} -[CLA에 서명](/docs/contribute/start#sign-the-cla) 후에 누구나 다음을 할 수 있다. +[CLA에 서명](/docs/contribute/new-content/overview/#sign-the-cla)) 후에 누구나 다음을 할 수 있다. - 기존 콘텐츠를 개선하거나, 새 콘텐츠를 추가하거나, 블로그 게시물 또는 사례연구 작성을 위해 풀 리퀘스트를 연다. ## 멤버 @@ -222,7 +223,7 @@ GitHub 그룹에 당신을 추가하기를 요청한다. 
`kubernetes-website-adm - 승인 전에 PR에 대한 Netlify 프리뷰 페이지를 방문하여, 제대로 보이는지 확인한다. -- 주간 로테이션을 위해 [PR Wrangler 로테이션 스케줄러](https://github.com/kubernetes/website/wiki/PR-Wranglers)에 참여한다. SIG Docs는 모든 승인자들이 이 로테이션에 참여할 +- 주간 로테이션을 위해 [PR Wrangler 로테이션 스케줄](https://github.com/kubernetes/website/wiki/PR-Wranglers)에 참여한다. SIG Docs는 모든 승인자들이 이 로테이션에 참여할 것으로 기대한다. [일주일 간 PR Wrangler 되기](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) 문서를 참고한다. @@ -298,7 +299,7 @@ PR 소유자에게 조언하는데 활용된다. - 모든 쿠버네티스 멤버는 코멘트에 `/lgtm` 을 추가해서 `lgtm` 레이블을 추가할 수 있다. - SIG Docs 승인자들만이 코멘트에 `/approve` 를 추가해서 풀 리퀘스트를 병합할 수 있다. 일부 승인자들은 - [PR Wrangler](#pr-wrangler) 또는 [SIG Docs 의장](#sig-docs-의장)과 + [PR Wrangler](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) 또는 [SIG Docs 의장](#sig-docs-의장)과 같은 특정 역할도 수행한다. {{% /capture %}} @@ -307,8 +308,9 @@ PR 소유자에게 조언하는데 활용된다. 쿠버네티스 문서화에 기여하는 일에 대한 보다 많은 정보는 다음 문서를 참고한다. -- [기여 시작하기](/docs/contribute/start/) -- [문서 스타일](/docs/contribute/style/) +- [신규 컨텐츠 기여하기](/docs/contribute/overview/) +- [컨텐츠 검토하기](/docs/contribute/review/reviewing-prs) +- [문서 스타일 가이드](/docs/contribute/style/) {{% /capture %}} diff --git a/content/ko/docs/contribute/style/write-new-topic.md b/content/ko/docs/contribute/style/write-new-topic.md index c08248e7833bc..4313b3bd2f07f 100644 --- a/content/ko/docs/contribute/style/write-new-topic.md +++ b/content/ko/docs/contribute/style/write-new-topic.md @@ -162,7 +162,6 @@ kubectl create -f https://k8s.io/examples/pods/storage/gce-volume.yaml {{% /capture %}} {{% capture whatsnext %}} -* [페이지 템플릿 사용](/docs/home/contribute/page-templates/)에 대해 알아보기. -* [변경 사항 준비](/docs/home/contribute/stage-documentation-changes/)에 대해 알아보기. -* [풀 리퀘스트 작성](/docs/home/contribute/create-pull-request/)에 대해 알아보기. +* [페이지 템플릿 사용](/docs/contribute/page-templates/))에 대해 알아보기. +* [풀 리퀘스트 작성](/docs/contribute/new-content/open-a-pr/)에 대해 알아보기. {{% /capture %}} diff --git a/content/ko/docs/home/_index.md b/content/ko/docs/home/_index.md index 1a12f2028ac1d..94927fbfc1cbb 100644 --- a/content/ko/docs/home/_index.md +++ b/content/ko/docs/home/_index.md @@ -3,7 +3,7 @@ title: 쿠버네티스 문서 noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "홈" main_menu: true weight: 10 @@ -40,7 +40,7 @@ cards: button: "태스크 보기" button_path: "/ko/docs/tasks" - name: training - title: 교육" + title: "교육" description: "공인 쿠버네티스 인증을 획득하고 클라우드 네이티브 프로젝트를 성공적으로 수행하세요!" button: "교육 보기" button_path: "/training" diff --git a/content/ko/docs/reference/_index.md b/content/ko/docs/reference/_index.md index ada3246855fc2..a9ce09b988951 100644 --- a/content/ko/docs/reference/_index.md +++ b/content/ko/docs/reference/_index.md @@ -36,13 +36,15 @@ content_template: templates/concept * [JSONPath](/docs/reference/kubectl/jsonpath/) - kubectl에서 [JSONPath 표현](http://goessner.net/articles/JsonPath/)을 사용하기 위한 문법 가이드. * [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) - 안정적인 쿠버네티스 클러스터를 쉽게 프로비전하기 위한 CLI 도구. -## 설정 레퍼런스 +## 컴포넌트 레퍼런스 * [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - 각 노드에서 구동되는 주요한 *노드 에이전트*. kubelet은 PodSpecs 집합을 가지며 기술된 컨테이너가 구동되고 있는지, 정상 작동하는지를 보장한다. * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - 파드, 서비스, 레플리케이션 컨트롤러와 같은 API 오브젝트에 대한 검증과 구성을 수행하는 REST API. * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - 쿠버네티스에 탑재된 핵심 제어 루프를 포함하는 데몬. 
* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - 간단한 TCP/UDP 스트림 포워딩이나 백-엔드 집합에 걸쳐서 라운드-로빈 TCP/UDP 포워딩을 할 수 있다. * [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - 가용성, 성능 및 용량을 관리하는 스케줄러. + * [kube-scheduler 정책](/docs/reference/scheduling/policies) + * [kube-scheduler 프로파일](/docs/reference/scheduling/profiles) ## 설계 문서 diff --git a/content/ko/docs/reference/glossary/container-runtime.md b/content/ko/docs/reference/glossary/container-runtime.md index 47fd7031b17c2..76bdb71b91c96 100644 --- a/content/ko/docs/reference/glossary/container-runtime.md +++ b/content/ko/docs/reference/glossary/container-runtime.md @@ -2,7 +2,7 @@ title: 컨테이너 런타임 id: container-runtime date: 2019-06-05 -full_link: /docs/reference/generated/container-runtime +full_link: /docs/setup/production-environment/container-runtimes short_description: > 컨테이너 런타임은 컨테이너 실행을 담당하는 소프트웨어이다. diff --git a/content/ko/docs/reference/glossary/extensions.md b/content/ko/docs/reference/glossary/extensions.md index c51b52e08c33d..caf7bfa2269be 100644 --- a/content/ko/docs/reference/glossary/extensions.md +++ b/content/ko/docs/reference/glossary/extensions.md @@ -2,7 +2,7 @@ title: 익스텐션(Extensions) id: Extensions date: 2019-02-01 -full_link: /docs/concepts/extend-kubernetes/extend-cluster/#extensions +full_link: /ko/docs/concepts/extend-kubernetes/extend-cluster/#익스텐션 short_description: > 익스텐션은 새로운 타입의 하드웨어를 지원하기 위해 쿠버네티스를 확장하고 깊게 통합시키는 소프트웨어 컴포넌트이다. @@ -15,4 +15,4 @@ tags: -대부분의 클러스터 관리자는 호스트된 쿠버네티스 또는 쿠버네티스의 배포 인스턴스를 사용할 것이다. 그 결과, 대부분의 쿠버네티스 사용자는 [익스텐션](/docs/concepts/extend-kubernetes/extend-cluster/#extensions)의 설치가 필요할 것이며, 일부 사용자만 직접 새로운 것을 만들 것이다. +대부분의 클러스터 관리자는 호스트된 쿠버네티스 또는 쿠버네티스의 배포 인스턴스를 사용할 것이다. 그 결과, 대부분의 쿠버네티스 사용자는 [익스텐션](/ko/docs/concepts/extend-kubernetes/extend-cluster/#익스텐션)의 설치가 필요할 것이며, 일부 사용자만 직접 새로운 것을 만들 것이다. diff --git a/content/ko/docs/reference/glossary/managed-service.md b/content/ko/docs/reference/glossary/managed-service.md new file mode 100644 index 0000000000000..282bc7e9d6b04 --- /dev/null +++ b/content/ko/docs/reference/glossary/managed-service.md @@ -0,0 +1,18 @@ +--- +title: 매니지드 서비스 +id: managed-service +date: 2018-04-12 +full_link: +short_description: > + 타사 공급자가 유지보수하는 소프트웨어. + +aka: + +tags: +- extension +--- + 타사 공급자가 유지보수하는 소프트웨어. + + + +매니지드 서비스의 몇 가지 예시로 AWS EC2, Azure SQL Database 그리고 GCP Pub/Sub이 있으나, 애플리케이션에서 사용할 수 있는 모든 소프트웨어 제품이 될 수 있다. [서비스 카탈로그](/docs/concepts/service-catalog/)는 {{< glossary_tooltip text="서비스 브로커" term_id="service-broker" >}}가 제공하는 매니지드 서비스의 목록과 프로비전, 바인딩하는 방법을 제공한다. diff --git a/content/ko/docs/reference/glossary/replication-controller.md b/content/ko/docs/reference/glossary/replication-controller.md index 30f2aabf9411c..352529f4ce522 100755 --- a/content/ko/docs/reference/glossary/replication-controller.md +++ b/content/ko/docs/reference/glossary/replication-controller.md @@ -1,19 +1,25 @@ --- -title: 레플리케이션 컨트롤러(Replication Controller) +title: 레플리케이션 컨트롤러(ReplicationController) id: replication-controller date: 2018-04-12 full_link: short_description: > - 특정 수의 파드 인스턴스가 항상 동작하도록 보장하는 쿠버네티스 서비스. + (사용 중단된) 복제된 애플리케이션을 관리하는 API 오브젝트 aka: tags: - workload - core-object --- - 특정 수의 {{< glossary_tooltip text="파드" term_id="pod" >}} 인스턴스가 항상 동작하도록 보장하는 쿠버네티스 서비스. + 특정한 수의 {{< glossary_tooltip text="파드" term_id="pod" >}} 인스턴스가 +실행 중인지 확인하면서 복제된 애플리케이션을 관리하는 워크로드 리소스이다. -레플리케이션 컨트롤러는 파드에 설정된 값에 따라서, 동작하는 파드의 인스턴스를 자동으로 추가하거나 제거할 것이다. 파드가 삭제되거나 실수로 너무 많은 수의 파드가 시작된 경우, 파드가 지정된 수의 인스턴스로 돌아갈 수 있게 허용한다. 
+컨트롤 플레인은 일부 파드에 장애가 발생하거나, 수동으로 파드를 삭제하거나, +실수로 너무 많은 수의 파드가 시작된 경우에도 정의된 수량의 파드가 실행되도록 한다. +{{< note >}} +레플리케이션컨트롤러는 사용 중단되었다. 유사한 +것으로는 {{< glossary_tooltip text="디플로이먼트" term_id="deployment" >}}를 본다. +{{< /note >}} diff --git a/content/ko/docs/reference/glossary/service-account.md b/content/ko/docs/reference/glossary/service-account.md index ce110187e4706..1de65927ba031 100755 --- a/content/ko/docs/reference/glossary/service-account.md +++ b/content/ko/docs/reference/glossary/service-account.md @@ -1,5 +1,5 @@ --- -title: 서비스 어카운트(Service Account) +title: 서비스 어카운트(ServiceAccount) id: service-account date: 2018-04-12 full_link: /docs/tasks/configure-pod-container/configure-service-account/ diff --git a/content/ko/docs/reference/glossary/service-catalog.md b/content/ko/docs/reference/glossary/service-catalog.md new file mode 100644 index 0000000000000..25f166785347b --- /dev/null +++ b/content/ko/docs/reference/glossary/service-catalog.md @@ -0,0 +1,18 @@ +--- +title: 서비스 카탈로그(Service Catalog) +id: service-catalog +date: 2018-04-12 +full_link: +short_description: > + 쿠버네티스 클러스터 내에서 실행되는 응용 프로그램이 클라우드 공급자가 제공하는 데이터 저장소 서비스와 같은 외부 관리 소프트웨어 제품을 쉽게 사용할 수 있도록하는 확장 API이다. + +aka: +tags: +- extension +--- + 쿠버네티스 클러스터 내에서 실행되는 응용 프로그램이 클라우드 공급자가 제공하는 데이터 저장소 서비스와 같은 외부 관리 소프트웨어 제품을 쉽게 사용할 수 있도록하는 확장 API이다. + + + +서비스 생성 또는 관리에 대한 자세한 지식 없이도 {{< glossary_tooltip text="서비스 브로커" term_id="service-broker" >}}를 통해 외부의 {{< glossary_tooltip text="매니지드 서비스" term_id="managed-service" >}}의 목록과 프로비전, 바인딩하는 방법을 제공한다. + diff --git a/content/ko/docs/reference/issues-security/security.md b/content/ko/docs/reference/issues-security/security.md index 234ec3c01863d..ba79a2d473a6c 100644 --- a/content/ko/docs/reference/issues-security/security.md +++ b/content/ko/docs/reference/issues-security/security.md @@ -11,7 +11,7 @@ weight: 20 {{% capture body %}} ## 보안 공지 -보안 및 주요 API 공지에 대한 이메일을 위해 [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) 그룹에 가입하세요. +보안 및 주요 API 공지에 대한 이메일을 위해 [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce)) 그룹에 가입하세요. [이 링크](https://groups.google.com/forum/feed/kubernetes-announce/msgs/rss_v2_0.xml?num=50)를 사용하여 RSS 피드를 구독할 수 있다. diff --git a/content/ko/docs/reference/kubectl/cheatsheet.md b/content/ko/docs/reference/kubectl/cheatsheet.md index 83d8e591ab8a1..94fd790fb9c06 100644 --- a/content/ko/docs/reference/kubectl/cheatsheet.md +++ b/content/ko/docs/reference/kubectl/cheatsheet.md @@ -200,7 +200,6 @@ kubectl diff -f ./my-manifest.yaml ## 리소스 업데이트 -1.11 버전에서 `rolling-update`는 사용 중단(deprecated)되었다. ([CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md) 참고) 대신 `rollout`를 사용한다. ```bash kubectl set image deployment/frontend www=image:v2 # "frontend" 디플로이먼트의 "www" 컨테이너 이미지를 업데이트하는 롤링 업데이트 @@ -211,12 +210,6 @@ kubectl rollout status -w deployment/frontend # 완료될 때 kubectl rollout restart deployment/frontend # "frontend" 디플로이먼트의 롤링 재시작 -# 버전 1.11 부터 사용 중단 -kubectl rolling-update frontend-v1 -f frontend-v2.json # (사용중단) frontend-v1 파드의 롤링 업데이트 -kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 # (사용중단) 리소스 이름 변경과 이미지 업데이트 -kubectl rolling-update frontend --image=image:v2 # (사용중단) 프론트엔드의 파드 이미지 업데이트 -kubectl rolling-update frontend-v1 frontend-v2 --rollback # (사용중단) 진행중인 기존 롤아웃 중단 - cat pod.json | kubectl replace -f - # std로 전달된 JSON을 기반으로 파드 교체 # 리소스를 강제 교체, 삭제 후 재생성함. 이것은 서비스를 중단시킴. 
diff --git a/content/ko/docs/reference/tools.md b/content/ko/docs/reference/tools.md index d7d78be1c838d..8ca0b453f0e35 100644 --- a/content/ko/docs/reference/tools.md +++ b/content/ko/docs/reference/tools.md @@ -18,11 +18,6 @@ content_template: templates/concept [`kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)은 물리적 환경, 클라우드 서버, 또는 가상머신 상에서 안전한 쿠버네티스를 쉽게 프로비저닝하기 위한 커맨드라인 툴이다(현재는 알파 상태). -## Kubefed - -[`kubefed`](/docs/tasks/federation/set-up-cluster-federation-kubefed/)는 페더레이션 클러스터를 -관리하는데 도움이 되는 커맨드라인 툴이다. - ## Minikube [`minikube`](/ko/docs/tasks/tools/install-minikube/)는 개발과 테스팅 목적으로 하는 diff --git a/content/ko/docs/setup/_index.md b/content/ko/docs/setup/_index.md index fe3e81d84f12b..2e2f854220926 100644 --- a/content/ko/docs/setup/_index.md +++ b/content/ko/docs/setup/_index.md @@ -36,76 +36,15 @@ card: |커뮤니티 |생태계 | | ------------ | -------- | -| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) | -| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| -| | [Minishift](https://docs.okd.io/latest/minishift/)| +| [Minikube](/docs/setup/learning-environment/minikube/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| +| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Minishift](https://docs.okd.io/latest/minishift/)| | | [MicroK8s](https://microk8s.io/)| -| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | -| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)| -| | [k3s](https://k3s.io)| ## 운영 환경 운영 환경을 위한 솔루션을 평가할 때에는, 쿠버네티스 클러스터 운영에 대한 어떤 측면(또는 _추상적인 개념_)을 스스로 관리하기를 원하는지, 제공자에게 넘기기를 원하는지 고려하자. -몇 가지 가능한 쿠버네티스 클러스터의 추상적인 개념은 {{< glossary_tooltip text="애플리케이션" term_id="applications" >}}, {{< glossary_tooltip text="데이터 플레인" term_id="data-plane" >}}, {{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}, {{< glossary_tooltip text="클러스터 인프라스트럭처" term_id="cluster-infrastructure" >}}, 및 {{< glossary_tooltip text="클러스터 운영" term_id="cluster-operations" >}}이다. - -다음의 다이어그램은 쿠버네티스 클러스터에 대해 가능한 추상적인 개념을 나열하고, 각 추상적인 개념을 사용자 스스로 관리하는지 제공자에 의해 관리되는지를 보여준다. - -운영 환경 솔루션![운영 환경 솔루션](/images/docs/KubernetesSolutions.svg) - -{{< table caption="제공자와 솔루션을 나열한 운영 환경 솔루션 표." >}} -다음 운영 환경 솔루션 표는 제공자와 솔루션을 나열한다. 
- -|제공자 | 매니지드 | 턴키 클라우드 | 온-프렘(on-prem) 데이터센터 | 커스텀 (클라우드) | 커스텀 (온-프레미스 VMs)| 커스텀 (베어 메탈) | -| --------- | ------ | ------ | ------ | ------ | ------ | ----- | -| [Agile Stacks](https://www.agilestacks.com/products/kubernetes)| | ✔ | ✔ | | | -| [Alibaba Cloud](https://www.alibabacloud.com/product/kubernetes)| | ✔ | | | | -| [Amazon](https://aws.amazon.com) | [Amazon EKS](https://aws.amazon.com/eks/) |[Amazon EC2](https://aws.amazon.com/ec2/) | | | | -| [AppsCode](https://appscode.com/products/pharmer/) | ✔ | | | | | -| [APPUiO](https://appuio.ch/)  | ✔ | ✔ | ✔ | | | | -| [Banzai Cloud Pipeline Kubernetes Engine (PKE)](https://banzaicloud.com/products/pke/) | | ✔ | | ✔ | ✔ | ✔ | -| [CenturyLink Cloud](https://www.ctl.io/) | | ✔ | | | | -| [Cisco Container Platform](https://cisco.com/go/containers) | | | ✔ | | | -| [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) | | | | ✔ |✔ | -| [CloudStack](https://cloudstack.apache.org/) | | | | | ✔| -| [Canonical](https://ubuntu.com/kubernetes) | ✔ | ✔ | ✔ | ✔ |✔ | ✔ -| [Containership](https://containership.io) | ✔ |✔ | | | | -| [D2iQ](https://d2iq.com/) | | [Kommander](https://docs.d2iq.com/ksphere/kommander/) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | -| [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | ✔ -| [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | ✔ | | | | | -| [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |✔ | ✔ | | | ✔ -| [Gardener](https://gardener.cloud/) | ✔ | ✔ | ✔ | ✔ | ✔ | [사용자 정의 확장](https://github.com/gardener/gardener/blob/master/docs/extensions/overview.md) | -| [Giant Swarm](https://www.giantswarm.io/) | ✔ | ✔ | ✔ | | -| [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/)|[GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | | | | | | -| [Hidora](https://hidora.com/) | ✔ | ✔| ✔ | | | | | | | | -| [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | | -| [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | | -| [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | | -| [KubeOne](https://kubeone.io/) | | ✔ | ✔ | ✔ | ✔ | ✔ | -| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| [KubeSail](https://kubesail.com/) | ✔ | | | | | -| [Kubespray](https://kubespray.io/#/) | | | |✔ | ✔ | ✔ | -| [Kublr](https://kublr.com/) |✔ | ✔ |✔ |✔ |✔ |✔ | -| [Microsoft Azure](https://azure.microsoft.com) | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) | | | | | -| [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) | | | ✔ | | | -| [NetApp Kubernetes Service (NKS)](https://cloud.netapp.com/kubernetes-service) | ✔ | ✔ | ✔ | | | -| [Nirmata](https://www.nirmata.com/) | | ✔ | ✔ | | | -| [Nutanix](https://www.nutanix.com/en) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | | | [Nutanix 
AHV](https://www.nutanix.com/products/acropolis/virtualization) | -| [OpenNebula](https://www.opennebula.org) |[OpenNebula Kubernetes](https://marketplace.opennebula.systems/docs/service/kubernetes.html) | | | | | -| [OpenShift](https://www.openshift.com) |[OpenShift Dedicated](https://www.openshift.com/products/dedicated/) and [OpenShift Online](https://www.openshift.com/products/online/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) |[OpenShift Container Platform](https://www.openshift.com/products/container-platform/) -| [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | ✔ | ✔ | | | | -| [oVirt](https://www.ovirt.org/) | | | | | ✔ | -| [Pivotal](https://pivotal.io/) | | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | | | -| [Platform9](https://platform9.com/) | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | ✔ | ✔ | ✔ -| [Rancher](https://rancher.com/) | | [Rancher 2.x](https://rancher.com/docs/rancher/v2.x/en/) | | [Rancher Kubernetes Engine (RKE)](https://rancher.com/docs/rke/latest/en/) | | [k3s](https://k3s.io/) -| [Supergiant](https://supergiant.io/) | |✔ | | | | -| [SUSE](https://www.suse.com/) | | ✔ | | | | -| [SysEleven](https://www.syseleven.io/) | ✔ | | | | | -| [Tencent Cloud](https://intl.cloud.tencent.com/) | [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/product/tke) | ✔ | ✔ | | | ✔ | -| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | | -| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) -| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | | +[쿠버네티스 파트너](https://kubernetes.io/partners/#conformance)에는 [공인 쿠버네티스](https://github.com/cncf/k8s-conformance/#certified-kubernetes) 공급자 목록이 포함되어 있다. {{% /capture %}} diff --git a/content/ko/docs/setup/learning-environment/minikube.md b/content/ko/docs/setup/learning-environment/minikube.md index f6af768b6891c..3aed39f4001c6 100644 --- a/content/ko/docs/setup/learning-environment/minikube.md +++ b/content/ko/docs/setup/learning-environment/minikube.md @@ -183,23 +183,25 @@ Minikube는 또한 "minikube" 컨텍스트를 생성하고 이를 kubectl의 기 minikube start --kubernetes-version {{< param "fullversion" >}} ``` #### VM 드라이버 지정하기 -`minikube start` 코멘드에 `--vm-driver=` 플래그를 추가해서 VM 드라이버를 변경할 수 있다. +`minikube start` 코멘드에 `--driver=` 플래그를 추가해서 VM 드라이버를 변경할 수 있다. 코멘드를 예를 들면 다음과 같다. ```shell -minikube start --vm-driver= +minikube start --driver= ``` Minikube는 다음의 드라이버를 지원한다. {{< note >}} - 지원되는 드라이버와 플러그인 설치 방법에 대한 보다 상세한 정보는 [드라이버](https://git.k8s.io/minikube/docs/drivers.md)를 참조한다. + 지원되는 드라이버와 플러그인 설치 방법에 대한 보다 상세한 정보는 [드라이버](https://minikube.sigs.k8s.io/docs/reference/drivers/)를 참조한다. 
{{< /note >}} * virtualbox * vmwarefusion -* kvm2 ([드라이버 설치](https://git.k8s.io/minikube/docs/drivers.md#kvm2-driver)) -* hyperkit ([드라이버 설치](https://git.k8s.io/minikube/docs/drivers.md#hyperkit-driver)) -* hyperv ([드라이버 설치](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver)) +* docker (EXPERIMENTAL) +* kvm2 ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/)) +* hyperkit ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/)) +* hyperv ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/)) 다음 IP는 동적이며 변경할 수 있다. `minikube ip`로 알아낼 수 있다. -* vmware ([드라이버 설치](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#vmware-unified-driver)) (VMware unified driver) +* vmware ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/)) (VMware unified driver) +* parallels ([드라이버 설치](https://minikube.sigs.k8s.io/docs/reference/drivers/parallels/)) * none (쿠버네티스 컴포넌트를 가상 머신이 아닌 호스트 상에서 구동한다. 리눅스를 실행중이어야 하고, {{< glossary_tooltip term_id="docker" >}}가 설치되어야 한다.) {{< caution >}} @@ -325,8 +327,8 @@ Minikube는 사용자가 쿠버네티스 컴포넌트를 다양한 값으로 설 `minikube delete` 명령은 클러스터를 삭제하는데 사용할 수 있다. 이 명령어는 Minikube 가상 머신을 종료하고 삭제한다. 어떤 데이터나 상태도 보존되지 않다. -### minikube 업그레이드 -[minikube 업그레이드](https://minikube.sigs.k8s.io/docs/start/macos/)를 본다. +### Minikube 업그레이드 +macOS를 사용하는 경우 기존에 설치된 Minikube를 업그레이드하려면 [Minikube 업그레이드](https://minikube.sigs.k8s.io/docs/start/macos/#upgrading-minikube)를 참조한다. ## 클러스터와 상호 작용하기 @@ -367,7 +369,7 @@ Minikube VM은 host-only IP 주소를 통해 호스트 시스템에 노출되고 `kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'` ## 퍼시스턴트 볼륨 -Minikube는 [퍼시스턴트 볼륨](/docs/concepts/storage/persistent-volumes/)을 `hostPath` 타입으로 지원한다. +Minikube는 [퍼시스턴트 볼륨](/ko/docs/concepts/storage/persistent-volumes/)을 `hostPath` 타입으로 지원한다. 이런 퍼시스턴트 볼륨은 Minikube VM 내에 디렉터리로 매핑됩니다. Minikube VM은 tmpfs에서 부트하는데, 매우 많은 디렉터리가 재부트(`minikube stop`)까지는 유지되지 않다. diff --git a/content/ko/docs/setup/production-environment/container-runtimes.md b/content/ko/docs/setup/production-environment/container-runtimes.md index 4429049bfb0eb..0331440eac81a 100644 --- a/content/ko/docs/setup/production-environment/container-runtimes.md +++ b/content/ko/docs/setup/production-environment/container-runtimes.md @@ -62,7 +62,7 @@ kubelet을 재시작 하는 것은 에러를 해결할 수 없을 것이다. ## Docker 각 머신들에 대해서, Docker를 설치한다. -버전 19.03.4가 추천된다. 그러나 1.13.1, 17.03, 17.06, 17.09, 18.06 그리고 18.09도 동작하는 것으로 알려져 있다. +버전 19.03.8이 추천된다. 그러나 1.13.1, 17.03, 17.06, 17.09, 18.06 그리고 18.09도 동작하는 것으로 알려져 있다. 쿠버네티스 릴리스 노트를 통해서, 최신에 검증된 Docker 버전의 지속적인 파악이 필요하다. 시스템에 Docker를 설치하기 위해서 아래의 커맨드들을 사용한다. @@ -86,9 +86,9 @@ add-apt-repository \ ## Docker CE 설치. apt-get update && apt-get install -y \ - containerd.io=1.2.10-3 \ - docker-ce=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) \ - docker-ce-cli=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) + containerd.io=1.2.13-1 \ + docker-ce=5:19.03.8~3-0~ubuntu-$(lsb_release -cs) \ + docker-ce-cli=5:19.03.8~3-0~ubuntu-$(lsb_release -cs) # 데몬 설정. 
cat > /etc/docker/daemon.json <}} -{{< tab name="Ubuntu 16.04" codelang="bash" >}} +{{< tab name="Debian" codelang="bash" >}} +# Debian 개발 배포본(Unstable/Sid) +echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Unstable/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Unstable/Release.key -O- | sudo apt-key add - -# 선행 조건 설치 -apt-get update -apt-get install -y software-properties-common +# Debian Testing +echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Testing/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Testing/Release.key -O- | sudo apt-key add - + +# Debian 10 +echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_10/Release.key -O- | sudo apt-key add - -add-apt-repository ppa:projectatomic/ppa -apt-get update +# Raspbian 10 +echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Raspbian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Raspbian_10/Release.key -O- | sudo apt-key add - # CRI-O 설치 -apt-get install -y cri-o-1.15 +sudo apt-get install cri-o-1.17 +{{< /tab >}} + +{{< tab name="Ubuntu 18.04, 19.04 and 19.10" codelang="bash" >}} +# 리포지터리 설치 +. /etc/os-release +sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list" +wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add - +sudo apt-get update +# CRI-O 설치 +sudo apt-get install cri-o-1.17 {{< /tab >}} -{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}} +{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}} # 선행 조건 설치 yum-config-manager --add-repo=https://cbs.centos.org/repos/paas7-crio-115-release/x86_64/os/ # CRI-O 설치 yum install --nogpgcheck -y cri-o +{{< tab name="openSUSE Tumbleweed" codelang="bash" >}} +sudo zypper install cri-o {{< /tab >}} {{< /tabs >}} @@ -304,4 +324,4 @@ kubeadm을 사용하는 경우에도 마찬가지로, 수동으로 자세한 정보는 [Frakti 빠른 시작 가이드](https://github.com/kubernetes/frakti#quickstart)를 참고한다. -{{% /capture %}} \ No newline at end of file +{{% /capture %}} diff --git a/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md index c12f9e68a3902..c1ba8d3646679 100644 --- a/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -19,7 +19,7 @@ weight: 75 ## 시작하기 전에 -* [윈도우 서버에서 운영하는 마스터와 워커 노드](../user-guide-windows-nodes)를 포함한 쿠버네티스 클러스터를 생성한다. +* [윈도우 서버에서 운영하는 마스터와 워커 노드](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)를 포함한 쿠버네티스 클러스터를 생성한다. * 쿠버네티스에서 서비스와 워크로드를 생성하고 배포하는 것은 리눅스나 윈도우 컨테이너 모두 비슷한 방식이라는 것이 중요하다. [Kubectl 커맨드](/docs/reference/kubectl/overview/)로 클러스터에 접속하는 것은 동일하다. 
아래 단원의 예시는 윈도우 컨테이너를 경험하기 위해 제공한다. ## 시작하기: 윈도우 컨테이너 배포하기 diff --git a/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md b/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md deleted file mode 100644 index e6720c9b0b5cb..0000000000000 --- a/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md +++ /dev/null @@ -1,354 +0,0 @@ ---- -reviewers: -title: 쿠버네티스에서 윈도우 노드 추가 가이드 -min-kubernetes-server-version: v1.14 -content_template: templates/tutorial -weight: 70 ---- - -{{% capture overview %}} - -쿠버네티스 플랫폼은 이제 리눅스와 윈도우 컨테이너 모두 운영할 수 있다. 윈도우 노드도 클러스터에 등록할 수 있다. 이 페이지에서는 어떻게 하나 또는 그 이상의 윈도우 노드를 클러스터에 등록할 수 있는지 보여준다. -{{% /capture %}} - - -{{% capture prerequisites %}} - -* 윈도우 컨테이너를 호스트하는 윈도우 노드를 구성하려면 [윈도우 서버 2019 라이선스](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing)를 소유해야 한다. 클러스터를 위해서 소속 기관의 라이선스를 사용하거나, Microsoft, 리셀러로 부터 취득할 수 있으며, GCP, AWS, Azure와 같은 주요 클라우드 제공자의 마켓플레이스를 통해 윈도우 서버를 운영하는 가상머신을 프로비저닝하여 취득할 수도 있다. [사용시간이 제한된 시험판](https://www.microsoft.com/en-us/cloud-platform/windows-server-trial)도 활용 가능하다. - -* 컨트롤 플레인에 접근할 수 있는 리눅스 기반의 쿠버네티스 클러스터를 구축한다.(몇 가지 예시는 [kubeadm으로 단일 컨트롤플레인 클러스터 만들기](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), [AKS Engine](/docs/setup/production-environment/turnkey/azure/), [GCE](/docs/setup/production-environment/turnkey/gce/), [AWS](/docs/setup/production-environment/turnkey/aws/)를 포함한다) - -{{% /capture %}} - - -{{% capture objectives %}} - -* 윈도우 노드를 클러스터에 등록하기 -* 리눅스와 윈도우에서 동작하는 파드와 서비스가 상호 간에 통신할 수 있게 네트워크를 구성하기 - -{{% /capture %}} - - -{{% capture lessoncontent %}} - -## 시작하기: 사용자 클러스터에 윈도우 노드 추가하기 - -### IP 주소 체계 설계하기 - -쿠버네티스 클러스터 관리를 위해 실수로 네트워크 충돌을 일으키지 않도록 IP 주소에 대해 신중히 설계해야 한다. 이 가이드는 [쿠버네티스 네트워킹 개념](/docs/concepts/cluster-administration/networking/)에 익숙하다 가정한다. - -클러스터를 배포하려면 다음 주소 공간이 필요하다. - -| 서브넷 / 주소 범위 | 비고 | 기본값 | -| --- | --- | --- | -| 서비스 서브넷 | 라우트 불가한 순수한 가상 서브넷으로 네트워크 토플로지에 관계없이 파드에서 서비스로 단일화된 접근을 제공하기 위해 사용한다. 서비스 서브넷은 노드에서 실행 중인 `kube-proxy`에 의해서 라우팅 가능한 주소 공간으로(또는 반대로) 번역된다. | 10.96.0.0/12 | -| 클러스터 서브넷 | 클러스터 내에 모든 파드에 사용되는 글로벌 서브넷이다. 각 노드에는 파드가 사용하기 위한 /24 보다 작거나 같은 서브넷을 할당한다. 서브넷은 클러스터 내에 모든 파드를 수용할 수 있을 정도로 충분히 큰 값이어야 한다. *최소 서브넷*의 크기를 계산하려면: `(노드의 개수) + (노드의 개수 * 구성하려는 노드 당 최대 파드 개수)`. 예: 노드 당 100개 파드인 5 노드짜리 클러스터 = `(5) + (5 * 100) = 505.` | 10.244.0.0/16 | -| 쿠버네티스 DNS 서비스 IP | DNS 확인 및 클러스터 서비스 검색에 사용되는 서비스인 `kube-dns`의 IP 주소이다. | 10.96.0.10 | - -클러스터에 IP 주소를 얼마나 할당해야 할지 결정하기 위해 '쿠버네티스에서 윈도우 컨테이너: 지원되는 기능: 네트워킹'에서 소개한 네트워킹 선택 사항을 검토하자. - -### 윈도우에서 실행되는 구성 요소 - -쿠버네티스 컨트롤 플레인이 리눅스 노드에서 운영되는 반면, 다음 요소는 윈도우 노드에서 구성되고 운영된다. - -1. kubelet -2. kube-proxy -3. kubectl (선택적) -4. 컨테이너 런타임 - -v1.14 이후의 최신 바이너리를 [https://github.com/kubernetes/kubernetes/releases](https://github.com/kubernetes/kubernetes/releases)에서 받아온다. kubeadm, kubectl, kubelet, kube-proxy의 Windows-amd64 바이너리는 CHANGELOG 링크에서 찾아볼 수 있다. - -### 네트워크 구성 - -리눅스 기반의 쿠버네티스 컨트롤 플레인("마스터") 노드를 가지고 있다면 네트워킹 솔루션을 선택할 준비가 된 것이다. 이 가이드는 단순화를 위해 VXLAN 방식의 플라넬(Flannel)을 사용하여 설명한다. - -#### 리눅스 컨트롤 플레인에서 VXLAN 방식으로 플라넬 구성하기 - -1. 플라넬을 위해 쿠버네티스 마스터를 준비한다. - - 클러스터의 쿠버네티스 마스터에서 사소한 준비를 권장한다. 플라넬을 사용할 때에 iptables 체인으로 IPv4 트래픽을 브릿지할 수 있게 하는 것은 추천한다. 이는 다음 커맨드를 이용하여 수행할 수 있다. - - ```bash - sudo sysctl net.bridge.bridge-nf-call-iptables=1 - ``` - -1. 플라넬 다운로드 받고 구성하기 - - 가장 최신의 플라넬 메니페스트를 다운로드한다. - - ```bash - wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml - ``` - - VXLAN 네트워킹 벡엔드를 가능하게 하기 위해 수정할 곳은 두 곳이다. 
- - 아래 단계를 적용하면 `kube-flannel.yml`의 `net-conf.json`부분을 다음과 같게 된다. - - ```json - net-conf.json: | - { - "Network": "10.244.0.0/16", - "Backend": { - "Type": "vxlan", - "VNI" : 4096, - "Port": 4789 - } - } - ``` - - {{< note >}}리눅스의 플라넬과 윈도우의 플라넬이 상호운용하기 위해서 `VNI`는 반드시 4096이고, `Port`는 4789여야 한다. 다른 VNI는 곧 지원될 예정이다. [VXLAN 문서](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)에서 - 이 필드의 설명 부분을 보자.{{< /note >}} - -1. `kube-flannel.yml`의 `net-conf.json` 부분을 거듭 확인하자. - 1. 클러스터 서브넷(예, "10.244.0.0/16")은 IP 주소 설계에 따라 설정되어야 한다. - * VNI 4096 은 벡엔드에 설정한다. - * Port 4789 는 벡엔드에 설정한다. - 1. `kube-flannel.yml`의 `cni-conf.json` 부분에서 네트워크 이름을 `vxlan0`로 바꾼다. - - `cni-conf.json`는 다음과 같다. - - ```json - cni-conf.json: | - { - "name": "vxlan0", - "plugins": [ - { - "type": "flannel", - "delegate": { - "hairpinMode": true, - "isDefaultGateway": true - } - }, - { - "type": "portmap", - "capabilities": { - "portMappings": true - } - } - ] - } - ``` - -1. 플라넬 매니페스트를 적용하고 확인하기 - - 플라넬 구성을 적용하자. - - ```bash - kubectl apply -f kube-flannel.yml - ``` - - 몇 분 뒤에 플라넬 파드 네트워크가 배포되었다면, 모든 파드에서 운영 중인 것을 확인할 수 있다. - - ```bash - kubectl get pods --all-namespaces - ``` - - 결과는 다음과 같다. - - ``` - NAMESPACE NAME READY STATUS RESTARTS AGE - kube-system etcd-flannel-master 1/1 Running 0 1m - kube-system kube-apiserver-flannel-master 1/1 Running 0 1m - kube-system kube-controller-manager-flannel-master 1/1 Running 0 1m - kube-system kube-dns-86f4d74b45-hcx8x 3/3 Running 0 12m - kube-system kube-flannel-ds-54954 1/1 Running 0 1m - kube-system kube-proxy-Zjlxz 1/1 Running 0 1m - kube-system kube-scheduler-flannel-master 1/1 Running 0 1m - ``` - - 플라넬 데몬셋에 노드 셀렉터가 적용되었음을 확인한다. - - ```bash - kubectl get ds -n kube-system - ``` - - 결과는 다음과 같다. 노드 셀렉터 `beta.kubernetes.io/os=linux`가 적용되었다. - - ``` - NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE - kube-flannel-ds 2 2 2 2 2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux 21d - kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 26d - ``` - - - -### 윈도우 워커 노드 추가하기 - -이번 단원은 맨 땅에서부터 온프레미스 클러스터에 가입하기까지 윈도우 노드 구성을 다룬다. 클러스터가 클라우드상에 있다면, [퍼블릭 클라우드 제공자 단원](#퍼블릭-클라우드-제공자)에 있는 클라우드에 특정한 가이드를 따르도록 된다. - -#### 윈도우 노드 준비하기 - -{{< note >}} -윈도우 단원에서 모든 코드 부분은 윈도우 워커 노드에서 높은 권한(Administrator)으로 파워쉘(PowerShell) 환경에서 구동한다. -{{< /note >}} - -1. 설치 및 참여(join) 스크립트가 포함된 [SIG Windows tools](https://github.com/kubernetes-sigs/sig-windows-tools) 리포지터리를 내려받는다. - ```PowerShell - [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 - Start-BitsTransfer https://github.com/kubernetes-sigs/sig-windows-tools/archive/master.zip - tar -xvf .\master.zip --strip-components 3 sig-windows-tools-master/kubeadm/v1.15.0/* - Remove-Item .\master.zip - ``` - -1. 쿠버네티스 [구성 파일](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/Kubeclustervxlan.json)을 커스터마이즈한다. 
- - ``` - { - "Cri" : { // Contains values for container runtime and base container setup - "Name" : "dockerd", // Container runtime name - "Images" : { - "Pause" : "mcr.microsoft.com/k8s/core/pause:1.2.0", // Infrastructure container image - "Nanoserver" : "mcr.microsoft.com/windows/nanoserver:1809", // Base Nanoserver container image - "ServerCore" : "mcr.microsoft.com/windows/servercore:ltsc2019" // Base ServerCore container image - } - }, - "Cni" : { // Contains values for networking executables - "Name" : "flannel", // Name of network fabric - "Source" : [{ // Contains array of objects containing values for network daemon(s) - "Name" : "flanneld", // Name of network daemon - "Url" : "https://github.com/coreos/flannel/releases/download/v0.11.0/flanneld.exe" // Direct URL pointing to network daemon executable - } - ], - "Plugin" : { // Contains values for CNI network plugin - "Name": "vxlan" // Backend network mechanism to use: ["vxlan" | "bridge"] - }, - "InterfaceName" : "Ethernet" // Designated network interface name on Windows node to use as container network - }, - "Kubernetes" : { // Contains values for Kubernetes node binaries - "Source" : { // Contains values for Kubernetes node binaries - "Release" : "1.15.0", // Version of Kubernetes node binaries - "Url" : "https://dl.k8s.io/v1.15.0/kubernetes-node-windows-amd64.tar.gz" // Direct URL pointing to Kubernetes node binaries tarball - }, - "ControlPlane" : { // Contains values associated with Kubernetes control-plane ("Master") node - "IpAddress" : "kubemasterIP", // IP address of control-plane ("Master") node - "Username" : "localadmin", // Username on control-plane ("Master") node with remote SSH access - "KubeadmToken" : "token", // Kubeadm bootstrap token - "KubeadmCAHash" : "discovery-token-ca-cert-hash" // Kubeadm CA key hash - }, - "KubeProxy" : { // Contains values for Kubernetes network proxy configuration - "Gates" : "WinOverlay=true" // Comma-separated key-value pairs passed to kube-proxy feature gate flag - }, - "Network" : { // Contains values for IP ranges in CIDR notation for Kubernetes networking - "ServiceCidr" : "10.96.0.0/12", // Service IP subnet used by Services in CIDR notation - "ClusterCidr" : "10.244.0.0/16" // Cluster IP subnet used by Pods in CIDR notation - } - }, - "Install" : { // Contains values and configurations for Windows node installation - "Destination" : "C:\\ProgramData\\Kubernetes" // Absolute DOS path where Kubernetes will be installed on the Windows node - } -} - ``` - -{{< note >}} -사용자는 쿠버네티스 컨트롤 플레인("마스터") 노드에서 `kubeadm token create --print-join-command`를 실행해서 `ControlPlane.KubeadmToken`과 `ControlPlane.KubeadmCAHash` 필드를 위한 값을 생성할 수 있다. -{{< /note >}} - -1. 컨테이너와 쿠버네티스를 설치 (시스템 재시작 필요) - -기존에 내려받은 [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) 스크립트를 사용해서 쿠버네티스를 윈도우 서버 컨테이너 호스트에 설치한다. - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -install - ``` - 이 때 `-ConfigFile`는 쿠버네티스 구성 파일의 경로를 가리킨다. - -{{< note >}} -아래 예제에서, 우리는 오버레이 네트워킹 모드를 사용한다. 이는 [KB4489899](https://support.microsoft.com/help/4489899)를 포함한 윈도우 서버 버전 2019와 최소 쿠버네티스 v1.14 이상이 필요하다. 이 요구사항을 만족시키기 어려운 사용자는 구성 파일의 [플러그인](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/Kubeclusterbridge.json#L18)으로 `bridge`를 선택하지 말고 `L2bridge` 네트워킹을 사용해야만 한다. -{{< /note >}} - - ![alt_text](../kubecluster.ps1-install.gif "KubeCluster.ps1 install output") - - -대상으로 하는 윈도우 노드에서, 본 단계는 - -1. 윈도우 서버 컨테이너 역할을 활성화(및 재시작) 한다. -1. 
선택된 컨테이너 런타임을 내려받아 설치한다. -1. 필요한 컨테이너 이미지를 모두 내려받는다. -1. 쿠버네티스 바이너리를 내려받아서 `$PATH` 환경 변수에 추가한다. -1. 쿠버네티스 구성 파일에서 선택한 내용을 기반으로 CNI 플러그인을 내려받는다. -1. (선택적으로) 참여(join) 중에 컨트롤 플레인("마스터") 노드에 접속하기 위한 새로운 SSH 키를 생성한다. - - {{< note >}}또한, SSH 키 생성 단계에서 생성된 공개 SSH 키를 (리눅스) 컨트롤 플레인 노드의 `authorized_keys` 파일에 추가해야 한다. 이는 한 번만 수행하면 된다. 스크립트가 출력물의 마지막 부분에 이를 따라 할 수 있도록 단계를 출력해 준다.{{< /note >}} - -일단 설치가 완료되면, 생성된 모든 구성 파일이나 바이너리는 윈도우 노드가 참여하기 전에 수정될 수 있다. - -#### 윈도우 노드를 쿠버네티스 클러스터에 참여시키기 - -이 섹션에서는 클러스터를 구성하기 위해서 [쿠버네티스가 설치된 윈도우 노드](#윈도우-노드-준비하기)를 기존의 (리눅스) 컨트롤 플레인에 참여시키는 방법을 다룬다. - -앞서 내려받은 [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) 스크립트를 사용해서 윈도우 노드를 클러스터에 참여시킨다. - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -join - ``` - 이 때 `-ConfigFile` 쿠버네티스 구성 파일의 경로를 가리킨다. - -![alt_text](../kubecluster.ps1-join.gif "KubeCluster.ps1 join output") - -{{< note >}} -어떤 이유에서든 부트스트랩 동안이나 참여 과정에서 스크립트가 실패하면, 뒤따르는 참여 시도를 시작하기 전에 신규 PowerShell 세션을 시작해야한다. -{{< /note >}} - -본 단계는 다음의 행위를 수행한다. - -1. 컨트롤 플레인("마스터") 노드에 SSH로 접속해서 [Kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)을 얻어온다. -1. kubelet을 윈도우 서비스로 등록한다. -1. CNI 네트워크 플러그인을 구성한다. -1. 선택된 네트워크 인터페이스 상에서 HNS 네트워크를 생성한다. - {{< note >}} - 이는 vSwitch가 생성되는 동안 몇 초간의 네트워크 순단현상을 야기할 수 있다. - {{< /note >}} -1. (vxlan 플러그인을 선택한 경우) 오버레이 트래픽을 위해서 인바운드(inbound) 방화벽의 UDP 포트 4789를 열어준다. -1. flanneld를 윈도우 서비스로 등록한다. -1. kube-proxy를 윈도우 서비스로 등록한다. - -이제 클러스터에서 다음의 명령을 실행해서 윈도우 노드를 볼 수 있다. - -```bash -kubectl get nodes -``` - -#### 윈도우 노드를 쿠버네티스 클러스터에서 제거하기 -이 섹션에서는 윈도우 노드를 쿠버네티스 클러스터에서 제거하는 방법을 다룬다. - -앞서 내려받은 [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) 스크립트를 사용해서 클러스터에서 윈도우 노드를 제거한다. - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -reset - ``` - 이 때 `-ConfigFile` 쿠버네티스 구성 파일의 경로를 가리킨다. - -![alt_text](../kubecluster.ps1-reset.gif "KubeCluster.ps1 reset output") - -본 단계는 다음의 행위를 대상이되는 윈도우 노드에서 수행한다. - -1. 윈도우 노드를 쿠버네티스 클러스터에서 삭제한다. -1. 구동 중인 모든 컨테이너를 중지시킨다. -1. 모든 컨테이너 네트워킹(HNS) 자원을 삭제한다. -1. 등록된 모든 쿠버네티스 서비스(flanneld, kubelet, kube-proxy)를 해지한다. -1. 쿠버네티스 바이너리(kube-proxy.exe, kubelet.exe, flanneld.exe, kubeadm.exe)를 모두 삭제한다. -1. CNI 네트워크 플러그인 바이너리를 모두 삭제한다. -1. 쿠버네티스 클러스터에 접근하기 위한 [Kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)을 삭제한다. - - -### 퍼블릭 클라우드 제공자 - -#### Azure - -AKS-Engine은 완전하고, 맞춤 설정이 가능한 쿠버네티스 클러스터를 리눅스와 윈도우 노드에 배포할 수 있다. 단계별 안내가 [GitHub에 있는 문서](https://github.com/Azure/aks-engine/blob/master/docs/topics/windows.md)로 제공된다. - -#### GCP - -사용자가 [GitHub](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/windows/README-GCE-Windows-kube-up.md)에 있는 단계별 안내를 따라서 완전한 쿠버네티스 클러스터를 GCE 상에 쉽게 배포할 수 있다. - -#### kubeadm과 클러스터 API로 배포하기 - -Kubeadm은 쿠버네티스 클러스터를 배포하는 사용자에게 산업 표준이 되었다. Kubeadm에서 윈도우 노드 지원은 쿠버네티스 v1.16 이후 부터 알파 기능이다. 또한 윈도우 노드가 올바르게 프로비저닝되도록 클러스터 API에 투자하고 있다. 보다 자세한 내용은, [Windows KEP를 위한 kubeadm](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/20190424-kubeadm-for-windows.md)을 통해 상담하도록 하자. - - -### 다음 단계 - -이제 클러스터 내에 윈도우 컨테이너를 실행하도록 윈도우 워커를 구성했으니, 리눅스 컨테이너를 실행할 리눅스 노드를 1개 이상 추가할 수 있다. 이제 윈도우 컨테이너를 클러스터에 스케줄링할 준비가 됬다. 
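참고로, 아래는 윈도우 노드에만 스케줄되도록 노드 셀렉터를 지정한 간단한 파드 매니페스트의 스케치이다. 레이블 키(`kubernetes.io/os`, 구버전 클러스터에서는 `beta.kubernetes.io/os`)와 컨테이너 이미지는 예시로 가정한 값이므로, 실제 클러스터 버전과 환경에 맞게 조정해야 한다.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-webserver
spec:
  nodeSelector:
    kubernetes.io/os: windows   # 구버전 클러스터라면 beta.kubernetes.io/os: windows 레이블을 사용한다
  containers:
  - name: servercore
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # 예시 이미지. 윈도우 노드의 OS 버전과 일치해야 한다
    command: ["powershell.exe", "-Command", "ping -t localhost"]
```

`kubectl apply -f` 로 적용한 뒤 `kubectl get pods -o wide` 를 실행하면 파드가 윈도우 노드에 배치되었는지 확인할 수 있다.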
- -{{% /capture %}} - diff --git a/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index 4ca2cc1fafb67..659abc69bac4f 100644 --- a/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -149,17 +149,17 @@ users: username: exp ``` -위 `fake-ca-file`, `fake-cert-file`, `fake-key-file`은 인증서 파일들의 실제 경로를 위한 +위 `fake-ca-file`, `fake-cert-file`, `fake-key-file`은 인증서 파일들의 실제 경로 이름을 위한 플레이스홀더(placeholder)이다. 당신의 환경에 맞게 이들을 실제 인증서 경로로 변경해줘야 한다. -만약 당신이 인증서 파일들의 경로 대신에 base64로 인코딩된 데이터를 여기에 사용하려고 한다면 -키에 `-data` 접미사를 추가해야 한다. 예를 들면 `certificate-authority-data`, +만약 당신이 인증서 파일들의 경로 대신에 여기에 포함된 base64로 인코딩된 데이터를 사용하려고 한다면 +이 경우 키에 `-data` 접미사를 추가해야 한다. 예를 들면 `certificate-authority-data`, `client-certificate-data`, `client-key-data` 같이 사용할 수 있다. 컨텍스트는 세 가지(클러스터, 사용자, 네임스페이스) 요소들로 이뤄진다. 예를 들어 -`dev-frontend` 컨텍스트는 `development` 클러스터의 `frontend` 네임스페이스에 접근하는데 -`developer` 사용자 자격증명을 사용하라고 알려준다. +`dev-frontend` 컨텍스트는 "`development` 클러스터의 `frontend` 네임스페이스에 접근하는데 +`developer` 사용자 자격증명을 사용하라고 알려준다." 현재 컨텍스트를 설정한다. @@ -275,7 +275,7 @@ Linux와 Mac에서는 콜론으로 구분되며 Windows에서는 세미콜론으 `KUBECONFIG` 환경 변수를 가지고 있다면, 리스트에 포함된 구성 파일들에 익숙해지길 바란다. -다음 예와 같이 임시로 `KUBECONFIG` 환경 변수에 두 개의 경로들을 덧붙여보자.
      +다음 예와 같이 임시로 `KUBECONFIG` 환경 변수에 두 개의 경로들을 덧붙여보자. ### Linux ```shell @@ -359,11 +359,13 @@ kubectl config view ## 정리 `KUBECONFIG` 환경 변수를 원래 값으로 되돌려 놓자. 예를 들면:
      -Linux: + +### Linux ```shell export KUBECONFIG=$KUBECONFIG_SAVED ``` -Windows PowerShell + +### Windows PowerShell ```shell $Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED ``` @@ -378,4 +380,3 @@ Windows PowerShell {{% /capture %}} - diff --git a/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md index 0f55dd30c25ae..92ef718d2b2f4 100644 --- a/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -26,7 +26,7 @@ card: 대시보드 UI는 기본으로 배포되지 않는다. 배포하려면 다음 커맨드를 동작한다. ``` -kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml ``` ## 대시보드 UI 접근 diff --git a/content/ko/docs/tasks/administer-cluster/cluster-management.md b/content/ko/docs/tasks/administer-cluster/cluster-management.md index 4d1446526fdc9..1fd99c0895251 100644 --- a/content/ko/docs/tasks/administer-cluster/cluster-management.md +++ b/content/ko/docs/tasks/administer-cluster/cluster-management.md @@ -21,7 +21,7 @@ content_template: templates/concept ## 클러스터 업그레이드 -클러스터 업그레이드 상태의 현황은 제공자에 따라 달라지며, 몇몇 릴리스들은 업그레이드에 각별한 주의를 요하기도 한다. 관리자들에게는 클러스터 업그레이드에 앞서 [릴리스 노트](https://git.k8s.io/kubernetes/CHANGELOG.md)와 버전에 맞는 업그레이드 노트 모두를 검토하도록 권장하고 있다. +클러스터 업그레이드 상태의 현황은 제공자에 따라 달라지며, 몇몇 릴리스들은 업그레이드에 각별한 주의를 요하기도 한다. 관리자들에게는 클러스터 업그레이드에 앞서 [릴리스 노트](https://git.k8s.io/kubernetes/CHANGELOG/README.md)와 버전에 맞는 업그레이드 노트 모두를 검토하도록 권장하고 있다. ### Azure Kubernetes Service (AKS) 클러스터 업그레이드 diff --git a/content/ko/docs/tasks/configure-pod-container/_index.md b/content/ko/docs/tasks/configure-pod-container/_index.md new file mode 100644 index 0000000000000..560261ecad440 --- /dev/null +++ b/content/ko/docs/tasks/configure-pod-container/_index.md @@ -0,0 +1,4 @@ +--- +title: "파드와 컨테이너 설정" +weight: 20 +--- diff --git a/content/ko/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/ko/docs/tasks/configure-pod-container/assign-memory-resource.md new file mode 100644 index 0000000000000..21f946f67d9dd --- /dev/null +++ b/content/ko/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -0,0 +1,356 @@ +--- +title: 컨테이너 및 파드 메모리 리소스 할당 +content_template: templates/task +weight: 10 +--- + +{{% capture overview %}} + +이 페이지는 메모리 *요청량* 과 메모리 *상한* 을 컨테이너에 어떻게 지정하는지 보여준다. +컨테이너는 요청량 만큼의 메모리 확보가 보장되나 +상한보다 더 많은 메모리는 사용할 수 없다. + +{{% /capture %}} + + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +클러스터의 각 노드에 최소 300 MiB 메모리가 있어야 한다. + +이 페이지의 몇 가지 단계를 수행하기 위해서는 클러스터 내 +[metrics-server](https://github.com/kubernetes-incubator/metrics-server) +서비스 실행이 필요하다. 이미 실행중인 metrics-server가 있다면 +다음 단계를 건너뛸 수 있다. + +Minikube를 사용 중이라면, 다음 명령어를 실행해 metric-server를 활성화 할 수 있다. + +```shell +minikube addons enable metrics-server +``` + +metric-server가 실행 중인지 확인하거나 다른 제공자의 리소스 메트릭 API (`metrics.k8s.io`)를 확인하기 위해 +다음의 명령어를 실행한다. + +```shell +kubectl get apiservices +``` + +리소스 메트릭 API를 사용할 수 있다면 출력에 +`metrics.k8s.io`에 대한 참조가 포함되어 있다. + +```shell +NAME +v1beta1.metrics.k8s.io +``` + +{{% /capture %}} + +{{% capture steps %}} + +## 네임스페이스 생성 + +이 예제에서 생성할 자원과 클러스터 내 나머지를 분리하기 위해 네임스페이스를 생성한다. + +```shell +kubectl create namespace mem-example +``` + +## 메모리 요청량 및 상한을 지정 + +컨테이너에 메모리 요청량을 지정하기 위해서는 컨테이너의 리소스 매니페스트에 +`resources:requests` 필드를 포함한다. 
리소스 상한을 지정하기 위해서는 +`resources:limits` 필드를 포함한다. + +이 예제에서 하나의 컨테이너를 가진 파드를 생성한다. 생성된 컨테이너는 +100 MiB 메모리 요청량과 200 MiB 메모리 상한을 갖는다. 이 것이 파드 구성 파일이다. + +{{< codenew file="pods/resource/memory-request-limit.yaml" >}} + +구성 파일 내 `args` 섹션은 컨테이너가 시작될 때 아규먼트를 제공한다. +`"--vm-bytes", "150M"` 아규먼트는 컨테이너가 150 MiB 할당을 시도 하도록 한다. + +파드 생성: + +```shell +kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit.yaml --namespace=mem-example +``` + +파드 컨테이너가 실행 중인지 확인: + +```shell +kubectl get pod memory-demo --namespace=mem-example +``` + +파드에 대한 자세한 정보 보기: + +```shell +kubectl get pod memory-demo --output=yaml --namespace=mem-example +``` + +출력은 파드 내 하나의 컨테이너에 100MiB 메모리 요청량과 +200 MiB 메모리 상한이 있는 것을 보여준다. + + +```yaml +... +resources: + limits: + memory: 200Mi + requests: + memory: 100Mi +... +``` + +`kubectl top`을 실행하여 파드 메트릭 가져오기: + +```shell +kubectl top pod memory-demo --namespace=mem-example +``` + +출력은 파드가 약 150MiB 해당하는 약 162,900,000 바이트 메모리를 사용하는 것을 보여준다. +이는 파드의 100 MiB 요청 보다 많으나 파드의 200 MiB 상한보다는 적다. + +``` +NAME CPU(cores) MEMORY(bytes) +memory-demo 162856960 +``` + +파드 삭제: + +```shell +kubectl delete pod memory-demo --namespace=mem-example +``` + +## 컨테이너의 메모리 상한을 초과 + +노드 내 메모리가 충분하다면 컨테이너는 지정한 요청량보다 많은 메모리를 사용 할 수 있다. 그러나 +컨테이너는 지정한 메모리 상한보다 많은 메모리를 사용할 수 없다. 만약 컨테이너가 지정한 메모리 상한보다 +많은 메모리를 할당하면 해당 컨테이너는 종료 대상 후보가 된다. 만약 컨테이너가 지속적으로 +지정된 상한보다 많은 메모리를 사용한다면, 해당 컨테이너는 종료된다. 만약 종료된 컨테이너가 +재실행 가능하다면 다른 런타임 실패와 마찬가지로 kubelet에 의해 재실행된다. + +이 예제에서는 상한보다 많은 메모리를 할당하려는 파드를 생성한다. +이 것은 50 MiB 메모리 요청량과 100 MiB 메모리 상한을 갖는 +하나의 컨테이너를 갖는 파드의 구성 파일이다. + +{{< codenew file="pods/resource/memory-request-limit-2.yaml" >}} + +구성 파일의 'args' 섹션에서 컨테이너가 +100 MiB 상한을 훨씬 초과하는 250 MiB의 메모리를 할당하려는 것을 볼 수 있다. + +파드 생성: + +```shell +kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example +``` + +파드에 대한 자세한 정보 보기: + +```shell +kubectl get pod memory-demo-2 --namespace=mem-example +``` + +이 시점에 컨테이너가 실행되거나 종료되었을 수 있다. 컨테이너가 종료될 때까지 이전의 명령을 반복한다. + +```shell +NAME READY STATUS RESTARTS AGE +memory-demo-2 0/1 OOMKilled 1 24s +``` + +컨테이너 상태의 상세 상태 보기: + +```shell +kubectl get pod memory-demo-2 --output=yaml --namespace=mem-example +``` + +컨테이너가 메모리 부족 (OOM) 으로 종료되었음이 출력된다. + +```shell +lastState: + terminated: + containerID: docker://65183c1877aaec2e8427bc95609cc52677a454b56fcb24340dbd22917c23b10f + exitCode: 137 + finishedAt: 2017-06-20T20:52:19Z + reason: OOMKilled + startedAt: null +``` + +이 예제에서 컨테이너는 재실행 가능하여 kubelet에 의해 재실행된다. +컨테이너가 종료되었다 재실행되는 것을 보기 위해 다음 명령을 몇 번 반복한다. + +```shell +kubectl get pod memory-demo-2 --namespace=mem-example +``` + +출력은 컨테이너의 종료, 재실행, 재종료, 재실행 등을 보여준다. + +``` +kubectl get pod memory-demo-2 --namespace=mem-example +NAME READY STATUS RESTARTS AGE +memory-demo-2 0/1 OOMKilled 1 37s +``` +``` + +kubectl get pod memory-demo-2 --namespace=mem-example +NAME READY STATUS RESTARTS AGE +memory-demo-2 1/1 Running 2 40s +``` + +파드 내역에 대한 상세 정보 보기: + +``` +kubectl describe pod memory-demo-2 --namespace=mem-example +``` + +컨테이너가 반복적으로 시작하고 실패 하는 출력을 보여준다. + +``` +... Normal Created Created container with id 66a3a20aa7980e61be4922780bf9d24d1a1d8b7395c09861225b0eba1b1f8511 +... Warning BackOff Back-off restarting failed container +``` + +클러스터 노드에 대한 자세한 정보 보기: + +``` +kubectl describe nodes +``` + +출력에는 컨테이너가 메모리 부족으로 종료된 기록이 포함된다. 
+ +``` +Warning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child +``` + +파드 삭제: + +```shell +kubectl delete pod memory-demo-2 --namespace=mem-example +``` + +## 노드에 비해 너무 큰 메모리 요청량의 지정 + +메모리 요청량과 상한은 컨테이너와 관련있지만, 파드가 가지는 +메모리 요청량과 상한으로 이해하면 유용하다. 파드의 메모리 요청량은 +파드 내 모든 컨테이너의 메모리 요청량의 합이다. 마찬가지로 +파드의 메모리 상한은 파드 내 모든 컨테이너의 메모리 상한의 합이다. + +파드는 요청량을 기반하여 스케줄링된다. 노드에 파드의 메모리 요청량을 충족하기에 충분한 메모리가 있는 +경우에만 파드가 노드에서 스케줄링된다. + +이 예제에서는 메모리 요청량이 너무 커 클러스터 내 모든 노드의 용량을 초과하는 파드를 생성한다. +다음은 클러스터 내 모든 노드의 용량을 초과할 수 있는 1000 GiB 메모리 요청을 포함하는 +컨테이너를 갖는 파드의 구성 파일이다. + +{{< codenew file="pods/resource/memory-request-limit-3.yaml" >}} + +파드 생성: + +```shell +kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-3.yaml --namespace=mem-example +``` + +파드 상태 보기: + +```shell +kubectl get pod memory-demo-3 --namespace=mem-example +``` + +파드 상태가 PENDING 상태임이 출력된다. 즉 파드는 어떤 노드에서도 실행되도록 스케줄 되지 않고 PENDING가 계속 지속된다. + +``` +kubectl get pod memory-demo-3 --namespace=mem-example +NAME READY STATUS RESTARTS AGE +memory-demo-3 0/1 Pending 0 25s +``` + +이벤트를 포함한 파드 상세 정보 보기: + +```shell +kubectl describe pod memory-demo-3 --namespace=mem-example +``` + +출력은 노드 내 메모리가 부족하여 파드가 스케줄링될 수 없음을 보여준다. + +```shell +Events: + ... Reason Message + ------ ------- + ... FailedScheduling No nodes are available that match all of the following predicates:: Insufficient memory (3). +``` + +## 메모리 단위 + +메모리 리소스는 byte 단위로 측정된다. 다음 접미사 중 하나로 정수 또는 고정 소수점으로 +메모리를 표시할 수 있다. E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki. +예를 들어 다음은 거의 유사한 값을 나타낸다. + +```shell +128974848, 129e6, 129M , 123Mi +``` + +파드 삭제: + +```shell +kubectl delete pod memory-demo-3 --namespace=mem-example +``` + +## 메모리 상한을 지정하지 않으면 + +컨테이너에 메모리 상한을 지정하지 않으면 다음 중 하나가 적용된다. + +* 컨테이너가 사용할 수 있는 메모리 상한은 없다. 컨테이너가 +실행 중인 노드에서 사용 가능한 모든 메모리를 사용하여 OOM Killer가 실행 될 수 있다. 또한 메모리 부족으로 인한 종료 시 메모리 상한이 +없는 컨테이너가 종료될 가능성이 크다. + +* 기본 메모리 상한을 갖는 네임스페이스 내에서 실행중인 컨테이너는 +자동으로 기본 메모리 상한이 할당된다. 클러스터 관리자들은 +[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core)를 +사용해 메모리 상한의 기본 값을 지정 가능하다. + +## 메모리 요청량과 상한 동기부여 + +클러스터에서 실행되는 컨테이너에 메모리 요청량과 상한을 구성하여 +클러스터 내 노드들의 메모리 리소스를 효율적으로 사용할 수 있게 할 수 있다. +파드의 메모리 요청량을 적게 유지하여 파드가 높은 확률로 스케줄링 될 수 있도록 한다. +메모리 상한이 메모리 요청량보다 크면 다음 두 가지가 수행된다. + +* 가용한 메모리가 있는 경우 파드가 이를 사용할 수 있는 버스트(burst) 활동을 할 수 있다. +* 파드가 버스트 중 사용 가능한 메모리 양이 적절히 제한된다. + +## 정리 + +네임스페이스를 지운다. 이 작업을 통해 네임스페이스 내 생성했던 모든 파드들은 삭제된다. 
+ +```shell +kubectl delete namespace mem-example +``` + +{{% /capture %}} + +{{% capture whatsnext %}} + +### 앱 개발자들을 위한 + +* [CPU 리소스를 컨테이너와 파드에 할당](/docs/tasks/configure-pod-container/assign-cpu-resource/) + +* [파드에 서비스 품질 설정](/docs/tasks/configure-pod-container/quality-service-pod/) + +### 클러스터 관리자들을 위한 + +* [네임스페이스에 기본 메모리 요청량 및 상한을 구성](/docs/tasks/administer-cluster/memory-default-namespace/) + +* [네임스페이스에 기본 CPU 요청량 및 상한을 구성](/docs/tasks/administer-cluster/cpu-default-namespace/) + +* [네임스페이스에 최소 및 최대 메모리 제약 조건 구성](/docs/tasks/administer-cluster/memory-constraint-namespace/) + +* [네임스페이스에 최소 및 최대 CPU 제약 조건 구성](/docs/tasks/administer-cluster/cpu-constraint-namespace/) + +* [네임스페이스에 메모리 및 CPU 할당량 구성](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) + +* [네임스페이스에 파드 할당량 구성](/docs/tasks/administer-cluster/quota-pod-namespace/) + +* [API 오브젝트에 할당량 구성 ](/docs/tasks/administer-cluster/quota-api-object/) + +{{% /capture %}} diff --git a/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md b/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md new file mode 100644 index 0000000000000..c0c92c8b0aac6 --- /dev/null +++ b/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md @@ -0,0 +1,120 @@ +--- +title: 노드 어피니티를 사용해 노드에 파드 할당 +min-kubernetes-server-version: v1.10 +content_template: templates/task +weight: 120 +--- + +{{% capture overview %}} +이 문서는 쿠버네티스 클러스터의 특정 노드에 노드 어피니티를 사용해 쿠버네티스 파드를 할당하는 +방법을 설명한다. +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + +## 노드에 레이블 추가 + +1. 클러스터의 노드를 레이블과 함께 나열하자. + + ```shell + kubectl get nodes --show-labels + ``` + 결과는 아래와 같다. + + ```shell + NAME STATUS ROLES AGE VERSION LABELS + worker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0 + worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1 + worker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2 + ``` +1. 노드 한 개를 선택하고, 레이블을 추가하자. + + ```shell + kubectl label nodes disktype=ssd + ``` + `` 는 선택한 노드의 이름이다. + +1. 선택한 노드가 `disktype=ssd` 레이블을 갖고 있는지 확인하자. + + ```shell + kubectl get nodes --show-labels + ``` + + 결과는 아래와 같다. + + ``` + NAME STATUS ROLES AGE VERSION LABELS + worker0 Ready 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0 + worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1 + worker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2 + ``` + + 위의 결과에서, `worker0` 노드에 `disktype=ssd` 레이블이 있는 것을 + 확인할 수 있다. + +## 필수적인 노드 어피니티를 사용해 파드 스케줄하기 + +이 매니페스트는 `disktype: ssd` 라는 `requiredDuringSchedulingIgnoredDuringExecution` 노드 어피니티를 가진 파드를 설명한다. +파드가 `disktype=ssd` 레이블이 있는 노드에만 스케줄될 것이라는 것을 의미한다. + +{{< codenew file="pods/pod-nginx-required-affinity.yaml" >}} + +1. 매니페스트를 적용하여 선택한 노드에 스케줄된 파드를 + 생성한다. + + ```shell + kubectl apply -f https://k8s.io/examples/pods/pod-nginx-required-affinity.yaml + ``` + +1. 파드가 선택한 노드에서 실행 중인지 확인하자. + + ```shell + kubectl get pods --output=wide + ``` + + 결과는 아래와 같다. + + ``` + NAME READY STATUS RESTARTS AGE IP NODE + nginx 1/1 Running 0 13s 10.200.0.4 worker0 + ``` + +## 선호하는 노드 어피니티를 사용해 파드 스케줄하기 + +이 매니페스트는 `disktype: ssd` 라는 `preferredDuringSchedulingIgnoredDuringExecution` 노드 어피니티를 가진 파드를 설명한다. +파드가 `disktype=ssd` 레이블이 있는 노드를 선호한다는 것을 의미한다. + +{{< codenew file="pods/pod-nginx-preferred-affinity.yaml" >}} + +1. 매니페스트를 적용하여 선택한 노드에 스케줄된 파드를 + 생성한다. 
+ + ```shell + kubectl apply -f https://k8s.io/examples/pods/pod-nginx-preferred-affinity.yaml + ``` + +1. 파드가 선택한 노드에서 실행 중인지 확인하자. + + ```shell + kubectl get pods --output=wide + ``` + + 결과는 아래와 같다. + + ``` + NAME READY STATUS RESTARTS AGE IP NODE + nginx 1/1 Running 0 13s 10.200.0.4 worker0 + ``` + +{{% /capture %}} + +{{% capture whatsnext %}} +[노드 어피니티](/ko/docs/concepts/configuration/assign-pod-node/#node-affinity)에 +대해 더 알아보기. +{{% /capture %}} diff --git a/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes.md b/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes.md new file mode 100644 index 0000000000000..8ce67986bc3ea --- /dev/null +++ b/content/ko/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -0,0 +1,104 @@ +--- +title: 노드에 파드 할당 +content_template: templates/task +weight: 120 +--- + +{{% capture overview %}} +이 문서는 쿠버네티스 클러스터의 특정 노드에 쿠버네티스 파드를 할당하는 +방법을 설명한다. +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + +## 노드에 레이블 추가 + +1. 클러스터의 {{< glossary_tooltip term_id="node" text="노드" >}}를 레이블과 함께 나열하자. + + ```shell + kubectl get nodes --show-labels + ``` + + 결과는 아래와 같다. + + ```shell + NAME STATUS ROLES AGE VERSION LABELS + worker0 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker0 + worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1 + worker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2 + ``` +1. 노드 한 개를 선택하고, 레이블을 추가하자. + + ```shell + kubectl label nodes disktype=ssd + ``` + + ``는 선택한 노드의 이름이다. + +1. 선택한 노드가 `disktype=ssd` 레이블을 갖고 있는지 확인하자. + + ```shell + kubectl get nodes --show-labels + ``` + + 결과는 아래와 같다. + + ```shell + NAME STATUS ROLES AGE VERSION LABELS + worker0 Ready 1d v1.13.0 ...,disktype=ssd,kubernetes.io/hostname=worker0 + worker1 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker1 + worker2 Ready 1d v1.13.0 ...,kubernetes.io/hostname=worker2 + ``` + + 위의 결과에서, `worker0` 노드에 `disktype=ssd` 레이블이 있는 것을 + 확인할 수 있다. + +## 선택한 노드에 스케줄되도록 파드 생성하기 + +이 파드 구성 파일은 `disktype: ssd`라는 선택하는 노드 셀렉터를 가진 파드를 +설명한다. +즉, `disktype=ssd` 레이블이 있는 노드에 파드가 스케줄될 것이라는 +것을 의미한다. + +{{< codenew file="pods/pod-nginx.yaml" >}} + +1. 구성 파일을 사용해서 선택한 노드로 스케줄되도록 파드를 + 생성하자. + + ```shell + kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml + ``` + +1. 파드가 선택한 노드에서 실행 중인지 확인하자. + + ```shell + kubectl get pods --output=wide + ``` + + 결과는 아래와 같다. + + ```shell + NAME READY STATUS RESTARTS AGE IP NODE + nginx 1/1 Running 0 13s 10.200.0.4 worker0 + ``` + +## 특정 노드에 스케줄되도록 파드 생성하기 + +`nodeName` 설정을 통해 특정 노드로 파드를 배포할 수 있다. + +{{< codenew file="pods/pod-nginx-specific-node.yaml" >}} + +설정 파일을 사용해 `foo-node` 노드에 파드를 스케줄되도록 만들어 보자. + +{{% /capture %}} + +{{% capture whatsnext %}} +* [레이블과 셀렉터](/ko/docs/concepts/overview/working-with-objects/labels/)에 대해 배우기. +* [노드](/ko/docs/concepts/architecture/nodes/)에 대해 배우기. +{{% /capture %}} diff --git a/content/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md b/content/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md new file mode 100644 index 0000000000000..82c40239c067b --- /dev/null +++ b/content/ko/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md @@ -0,0 +1,121 @@ +--- +content_template: templates/concept +title: 엘라스틱서치(Elasticsearch) 및 키바나(Kibana)를 사용한 로깅 +--- + +{{% capture overview %}} + +Google 컴퓨트 엔진(Compute Engine, GCE) 플랫폼에서, 기본 로깅 지원은 +[스택드라이버(Stackdriver) 로깅](https://cloud.google.com/logging/)을 대상으로 한다. 
이는 +[스택드라이버 로깅으로 로깅하기](/docs/user-guide/logging/stackdriver)에 자세히 설명되어 있다. + +이 문서에서는 GCE에서 운영할 때 스택드라이버 로깅의 대안으로, +[엘라스틱서치](https://www.elastic.co/products/elasticsearch)에 로그를 수집하고 +[키바나](https://www.elastic.co/products/kibana)를 사용하여 볼 수 있도록 +클러스터를 설정하는 방법에 대해 설명한다. + +{{< note >}} +Google 쿠버네티스 엔진(Kubernetes Engine)에서 호스팅되는 쿠버네티스 클러스터에는 엘라스틱서치 및 키바나를 자동으로 배포할 수 없다. 수동으로 배포해야 한다. +{{< /note >}} + +{{% /capture %}} + +{{% capture body %}} + +클러스터 로깅에 엘라스틱서치, 키바나를 사용하려면 kube-up.sh를 사용하여 +클러스터를 생성할 때 아래와 같이 다음의 환경 변수를 +설정해야 한다. + +```shell +KUBE_LOGGING_DESTINATION=elasticsearch +``` + +또한 `KUBE_ENABLE_NODE_LOGGING=true`(GCE 플랫폼의 기본값)인지 확인해야 한다. + +이제, 클러스터를 만들 때, 각 노드에서 실행되는 Fluentd 로그 수집 데몬이 +엘라스틱서치를 대상으로 한다는 메시지가 나타난다. + +```shell +cluster/kube-up.sh +``` +``` +... +Project: kubernetes-satnam +Zone: us-central1-b +... calling kube-up +Project: kubernetes-satnam +Zone: us-central1-b ++++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel ++++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d) ++++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0) +Looking for already existing resources +Starting master and configuring firewalls +Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd]. +NAME ZONE SIZE_GB TYPE STATUS +kubernetes-master-pd us-central1-b 20 pd-ssd READY +Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip]. ++++ Logging using Fluentd to elasticsearch +``` + +노드별 Fluentd 파드, 엘라스틱서치 파드 및 키바나 파드는 +클러스터가 활성화된 직후 kube-system 네임스페이스에서 모두 실행되어야 +한다. + +```shell +kubectl get pods --namespace=kube-system +``` +``` +NAME READY STATUS RESTARTS AGE +elasticsearch-logging-v1-78nog 1/1 Running 0 2h +elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h +fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h +fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h +fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h +fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h +kibana-logging-v1-bhpo8 1/1 Running 0 2h +kube-dns-v3-7r1l9 3/3 Running 0 2h +monitoring-heapster-v4-yl332 1/1 Running 1 2h +monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h +``` + +`fluentd-elasticsearch` 파드는 각 노드에서 로그를 수집하여 +`elasticsearch-logging` 파드로 전송한다. 이 로그는 `elasticsearch-logging` 이라는 +[서비스](/ko/docs/concepts/services-networking/service/)의 일부이다. 이 +엘라스틱서치 파드는 로그를 저장하고 REST API를 통해 노출한다. +`kibana-logging` 파드는 엘라스틱서치에 저장된 로그를 읽기 위한 웹 UI를 +제공하며, `kibana-logging` 이라는 서비스의 일부이다. + +엘라스틱서치 및 키바나 서비스는 모두 `kube-system` 네임스페이스에 +있으며 공개적으로 접근 가능한 IP 주소를 통해 직접 노출되지 않는다. 이를 위해, +[클러스터에서 실행 중인 서비스 접근](/ko/docs/tasks/access-application-cluster/access-cluster/#클러스터에서-실행되는-서비스로-액세스)에 대한 지침을 참고한다. + +브라우저에서 `elasticsearch-logging` 서비스에 접근하려고 하면, +다음과 같은 상태 페이지가 표시된다. + +![엘라스틱서치 상태](/images/docs/es-browser.png) + +원할 경우, 이제 엘라스틱서치 쿼리를 브라우저에 직접 입력할 수 +있다. 수행 방법에 대한 자세한 내용은 [엘라스틱서치의 문서](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html)를 +참조한다. + +또는, 키바나를 사용하여 클러스터의 로그를 볼 수도 있다(다시 +[클러스터에서 실행되는 서비스에 접근하기 위한 지침](/ko/docs/tasks/access-application-cluster/access-cluster/#클러스터에서-실행되는-서비스로-액세스)을 참고). +키바나 URL을 처음 방문하면 수집된 로그 보기를 +구성하도록 요청하는 페이지가 표시된다. 시계열 값에 +대한 옵션을 선택하고 `@timestamp` 를 선택한다. 다음 페이지에서 +`Discover` 탭을 선택하면 수집된 로그를 볼 수 있다. +로그를 정기적으로 새로 고치려면 새로 고침 간격을 5초로 +설정할 수 있다. + +키바나 뷰어에서 수집된 로그의 일반적인 보기는 다음과 같다. 
+ +![키바나 로그](/images/docs/kibana-logs.png) + +{{% /capture %}} + +{{% capture whatsnext %}} + +키바나는 로그를 탐색하기 위한 모든 종류의 강력한 옵션을 제공한다! 이를 파헤치는 방법에 대한 +아이디어는 [키바나의 문서](https://www.elastic.co/guide/en/kibana/current/discover.html)를 확인한다. + +{{% /capture %}} diff --git a/content/ko/docs/tasks/manage-gpus/scheduling-gpus.md b/content/ko/docs/tasks/manage-gpus/scheduling-gpus.md new file mode 100644 index 0000000000000..bd8a3fe1aa91f --- /dev/null +++ b/content/ko/docs/tasks/manage-gpus/scheduling-gpus.md @@ -0,0 +1,219 @@ +--- + + +content_template: templates/concept +title: GPU 스케줄링 +--- + +{{% capture overview %}} + +{{< feature-state state="beta" for_k8s_version="1.10" >}} + +쿠버네티스는 AMD 및 NVIDIA GPU(그래픽 프로세싱 유닛)를 노드들에 걸쳐 관리하기 위한 **실험적인** +지원을 포함한다. + +이 페이지는 다른 쿠버네티스 버전 간에 걸쳐 사용자가 GPU들을 소비할 수 있는 방법과 +현재의 제약 사항을 설명한다. + +{{% /capture %}} + + +{{% capture body %}} + +## 디바이스 플러그인 사용하기 + +쿠버네티스는 {{< glossary_tooltip text="디바이스 플러그인" term_id="device-plugin" >}}을 구현하여 +파드가 GPU와 같이 특별한 하드웨어 기능에 접근할 수 있게 한다. + +관리자는 해당하는 하드웨어 벤더의 GPU 드라이버를 노드에 +설치해야 하며, GPU 벤더가 제공하는 디바이스 플러그인을 +실행해야 한다. + +* [AMD](#amd-gpu-디바이스-플러그인-배치하기) +* [NVIDIA](#nvidia-gpu-디바이스-플러그인-배치하기) + +위의 조건이 만족되면, 쿠버네티스는 `amd.com/gpu` 또는 +`nvidia.com/gpu` 를 스케줄 가능한 리소스로써 노출시킨다. + +사용자는 이 GPU들을 `cpu` 나 `memory` 를 요청하는 방식과 동일하게 +`.com/gpu` 를 요청함으로써 컨테이너를 통해 소비할 수 있다. +그러나 GPU를 사용할 때는 리소스 요구 사항을 명시하는 방식에 약간의 +제약이 있다. + +- GPU는 `limits` 섹션에서만 명시되는 것을 가정한다. 그 의미는 다음과 같다. + * 쿠버네티스는 limits를 requests의 기본 값으로 사용하게 되므로 + 사용자는 GPU `limits` 를 명시할 때 `requests` 명시하지 않아도 된다. + * 사용자는 `limits` 과 `requests` 를 모두 명시할 수 있지만, 두 값은 + 동일해야 한다. + * 사용자는 `limits` 명시 없이는 GPU `requests` 를 명시할 수 없다. +- 컨테이너들(그리고 파드들)은 GPU를 공유하지 않는다. GPU에 대한 초과 할당(overcommitting)은 제공되지 않는다. +- 각 컨테이너는 하나 이상의 GPU를 요청할 수 있다. GPU의 일부(fraction)를 요청하는 것은 + 불가능하다. + +다음은 한 예제를 보여준다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: cuda-vector-add +spec: + restartPolicy: OnFailure + containers: + - name: cuda-vector-add + # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile + image: "k8s.gcr.io/cuda-vector-add:v0.1" + resources: + limits: + nvidia.com/gpu: 1 # GPU 1개 요청하기 +``` + +### AMD GPU 디바이스 플러그인 배치하기 + +[공식 AMD GPU 디바이스 플러그인](https://github.com/RadeonOpenCompute/k8s-device-plugin)에는 +다음의 요구 사항이 있다. + +- 쿠버네티스 노드들에는 AMD GPU 리눅스 드라이버가 미리 설치되어 있어야 한다. + +클러스터가 실행 중이고 위의 요구 사항이 만족된 후, AMD 디바이스 플러그인을 배치하기 위해서는 +아래 명령어를 실행한다. +```shell +kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/v1.10/k8s-ds-amdgpu-dp.yaml +``` + +[RadeonOpenCompute/k8s-device-plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)에 이슈를 로깅하여 +해당 서드 파티 디바이스 플러그인에 대한 이슈를 리포트할 수 있다. + +### NVIDIA GPU 디바이스 플러그인 배치하기 + +현재는 NVIDIA GPU에 대한 두 개의 디바이스 플러그인 구현체가 있다. + +#### 공식 NVIDIA GPU 디바이스 플러그인 + +[공식 NVIDIA GPU 디바이스 플러그인](https://github.com/NVIDIA/k8s-device-plugin)은 +다음의 요구 사항을 가진다. + +- 쿠버네티스 노드에는 NVIDIA 드라이버가 미리 설치되어 있어야 한다. +- 쿠버네티스 노드에는 [nvidia-docker 2.0](https://github.com/NVIDIA/nvidia-docker)이 미리 설치되어 있어야 한다. +- Kubelet은 자신의 컨테이너 런타임으로 도커를 사용해야 한다. +- 도커는 runc 대신 `nvidia-container-runtime` 이 [기본 런타임](https://github.com/NVIDIA/k8s-device-plugin#preparing-your-gpu-nodes)으로 + 설정되어야 한다. +- NVIDIA 드라이버의 버전은 조건 ~= 361.93 을 만족해야 한다. + +클러스터가 실행 중이고 위의 요구 사항이 만족된 후, NVIDIA 디바이스 플러그인을 배치하기 위해서는 +아래 명령어를 실행한다. 
+ +```shell +kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml +``` + +[NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin)에 이슈를 로깅하여 +해당 서드 파티 디바이스 플러그인에 대한 이슈를 리포트할 수 있다. + +#### GCE에서 사용되는 NVIDIA GPU 디바이스 플러그인 + +[GCE에서 사용되는 NVIDIA GPU 디바이스 플러그인](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)은 +nvidia-docker의 사용이 필수가 아니며 컨테이너 런타임 인터페이스(CRI)에 +호환되는 다른 컨테이너 런타임을 사용할 수 있다. 해당 사항은 +[컨테이너에 최적화된 OS](https://cloud.google.com/container-optimized-os/)에서 테스트되었고, +우분투 1.9 이후 버전에 대한 실험적인 코드를 가지고 있다. + +사용자는 다음 커맨드를 사용하여 NVIDIA 드라이버와 디바이스 플러그인을 설치할 수 있다. + +```shell +# 컨테이너에 최적회된 OS에 NVIDIA 드라이버 설치: +kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/stable/daemonset.yaml + +# 우분투에 NVIDIA 드라이버 설치(실험적): +kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/stable/nvidia-driver-installer/ubuntu/daemonset.yaml + +# 디바이스 플러그인 설치: +kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.14/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml +``` + +[GoogleCloudPlatform/container-engine-accelerators](https://github.com/GoogleCloudPlatform/container-engine-accelerators)에 이슈를 로깅하여 +해당 서드 파티 디바이스 플러그인에 대한 이슈를 리포트할 수 있다. + +Google은 GKE에서 NVIDIA GPU 사용에 대한 자체 [설명서](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus)를 게재하고 있다. + +## 다른 타입의 GPU들을 포함하는 클러스터 + +만약 클러스터의 노드들이 서로 다른 타입의 GPU를 가지고 있다면, 사용자는 +파드를 적합한 노드에 스케줄 하기 위해서 +[노드 레이블과 노드 셀렉터](/docs/tasks/configure-pod-container/assign-pods-nodes/)를 사용할 수 있다. + +예를 들면, + +```shell +# 노드가 가진 가속기 타입에 따라 레이블을 단다. +kubectl label nodes accelerator=nvidia-tesla-k80 +kubectl label nodes accelerator=nvidia-tesla-p100 +``` + +## 노드 레이블링 자동화 {#node-labeller} + +만약 AMD GPU 디바이스를 사용하고 있다면, +[노드 레이블러](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller)를 배치할 수 있다. +노드 레이블러는 GPU 디바이스의 속성에 따라서 노드에 자동으로 레이블을 달아 주는 +{{< glossary_tooltip text="컨트롤러" term_id="controller" >}}이다. + +현재 이 컨트롤러는 다음의 속성에 대해 레이블을 추가할 수 있다. + +* 디바이스 ID (-device-id) +* VRAM 크기 (-vram) +* SIMD 개수 (-simd-count) +* 계산 유닛 개수 (-cu-count) +* 펌웨어 및 기능 버전 (-firmware) +* GPU 계열, 두 개 문자 형태의 축약어 (-family) + * SI - Southern Islands + * CI - Sea Islands + * KV - Kaveri + * VI - Volcanic Islands + * CZ - Carrizo + * AI - Arctic Islands + * RV - Raven + +```shell +kubectl describe node cluster-node-23 +``` + +``` + Name: cluster-node-23 + Roles: + Labels: beta.amd.com/gpu.cu-count.64=1 + beta.amd.com/gpu.device-id.6860=1 + beta.amd.com/gpu.family.AI=1 + beta.amd.com/gpu.simd-count.256=1 + beta.amd.com/gpu.vram.16G=1 + beta.kubernetes.io/arch=amd64 + beta.kubernetes.io/os=linux + kubernetes.io/hostname=cluster-node-23 + Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock + node.alpha.kubernetes.io/ttl: 0 + … +``` + +노드 레이블러가 사용된 경우, GPU 타입을 파드 스펙에 명시할 수 있다. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: cuda-vector-add +spec: + restartPolicy: OnFailure + containers: + - name: cuda-vector-add + # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile + image: "k8s.gcr.io/cuda-vector-add:v0.1" + resources: + limits: + nvidia.com/gpu: 1 + nodeSelector: + accelerator: nvidia-tesla-p100 # 또는 nvidia-tesla-k80 등. +``` + +이것은 파드가 사용자가 지정한 GPU 타입을 가진 노드에 스케줄 되도록 +만든다. 
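파드를 배치하기 전에, 노드가 실제로 GPU 리소스와 가속기 레이블을 노출하고 있는지 먼저 확인해 볼 수 있다. 아래는 이를 확인하는 예시 명령의 스케치이다.

```shell
# 노드에 붙은 가속기(accelerator) 레이블을 확인한다
kubectl get nodes -L accelerator

# 노드가 스케줄 가능한 GPU 리소스를 노출하는지 확인한다
kubectl describe nodes | grep -E "nvidia.com/gpu|amd.com/gpu"
```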
+ +{{% /capture %}} diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/ko/docs/tasks/manage-kubernetes-objects/declarative-config.md index dde6650e20b1f..013854884ad5c 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -117,7 +117,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -134,7 +134,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 # ... name: nginx ports: @@ -197,7 +197,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -214,7 +214,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 # ... name: nginx ports: @@ -253,7 +253,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -271,7 +271,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 # ... name: nginx ports: @@ -279,7 +279,7 @@ spec: # ... ``` -`nginx:1.7.9`에서 `nginx:1.11.9`로 이미지를 변경하기 위해 `simple_deployment.yaml` +`nginx:1.14.2`에서 `nginx:1.16.1`로 이미지를 변경하기 위해 `simple_deployment.yaml` 구성 파일을 업데이트 하고, `minReadySeconds` 필드를 삭제한다. {{< codenew file="application/update_deployment.yaml" >}} @@ -301,7 +301,7 @@ kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yam * `replicas` 필드는 `kubectl scale`에 의해 설정된 값 2를 유지한다. 이는 구성 파일에서 생략되었기 때문에 가능하다. -* `image` 필드는 `nginx:1.7.9`에서 `nginx:1.11.9`로 업데이트되었다. +* `image` 필드는 `nginx:1.14.2`에서 `nginx:1.16.1`로 업데이트되었다. * `last-applied-configuration` 어노테이션은 새로운 이미지로 업데이트되었다. * `minReadySeconds` 필드는 지워졌다. * `last-applied-configuration` 어노테이션은 더 이상 `minReadySeconds` 필드를 포함하지 않는다. @@ -318,7 +318,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.11.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.16.1","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -336,7 +336,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.11.9 # Set by `kubectl apply` + - image: nginx:1.16.1 # Set by `kubectl apply` # ... 
name: nginx ports: @@ -458,7 +458,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -476,7 +476,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 # ... name: nginx ports: @@ -516,7 +516,7 @@ metadata: {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, "spec":{"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, - "spec":{"containers":[{"image":"nginx:1.11.9","name":"nginx", + "spec":{"containers":[{"image":"nginx:1.16.1","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: @@ -534,7 +534,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.11.9 # Set by `kubectl apply` + - image: nginx:1.16.1 # Set by `kubectl apply` # ... name: nginx ports: @@ -777,7 +777,7 @@ spec: app: nginx spec: containers: - - image: nginx:1.7.9 + - image: nginx:1.14.2 imagePullPolicy: IfNotPresent # defaulted by apiserver name: nginx ports: @@ -817,7 +817,7 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 @@ -832,7 +832,7 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 @@ -850,7 +850,7 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 @@ -868,7 +868,7 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 ``` diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/ko/docs/tasks/manage-kubernetes-objects/imperative-command.md index 695ec57d095de..7f2d17ca5f7ce 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/imperative-command.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/imperative-command.md @@ -139,10 +139,10 @@ TODO(pwittrock): 구현이 이루어지면 주석을 해제한다. 다음은 관련 예제이다. ```sh -kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f - +kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f - ``` -1. `kubectl create service -o yaml --dry-run` 커맨드는 서비스에 대한 구성을 생성하지만, 이를 쿠버네티스 API 서버에 전송하는 대신 YAML 형식으로 stdout에 출력한다. +1. `kubectl create service -o yaml --dry-run=client` 커맨드는 서비스에 대한 구성을 생성하지만, 이를 쿠버네티스 API 서버에 전송하는 대신 YAML 형식으로 stdout에 출력한다. 1. `kubectl set selector --local -f - -o yaml` 커맨드는 stdin으로부터 구성을 읽어, YAML 형식으로 stdout에 업데이트된 구성을 기록한다. 1. `kubectl create -f -` 커맨드는 stdin을 통해 제공된 구성을 사용하여 오브젝트를 생성한다. @@ -152,7 +152,7 @@ kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run | k 다음은 관련 예제이다. 
```sh -kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run > /tmp/srv.yaml +kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client > /tmp/srv.yaml kubectl create --edit -f /tmp/srv.yaml ``` diff --git a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md index 6cd56ae881477..87ca9269086bc 100644 --- a/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/ko/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -791,6 +791,12 @@ kubectl get -k ./ kubectl describe -k ./ ``` +다음 명령을 실행해서 디플로이먼트 오브젝트 `dev-my-nginx` 를 매니페스트가 적용된 경우의 클러스터 상태와 비교한다. + +```shell +kubectl diff -k ./ +``` + 디플로이먼트 오브젝트 `dev-my-nginx`를 삭제하려면 다음 명령어를 실행한다. ```shell diff --git a/content/ko/docs/tasks/network/_index.md b/content/ko/docs/tasks/network/_index.md new file mode 100644 index 0000000000000..26317241ec603 --- /dev/null +++ b/content/ko/docs/tasks/network/_index.md @@ -0,0 +1,4 @@ +--- +title: "네트워크" +weight: 160 +--- diff --git a/content/ko/docs/tasks/network/validate-dual-stack.md b/content/ko/docs/tasks/network/validate-dual-stack.md new file mode 100644 index 0000000000000..c86948142ef61 --- /dev/null +++ b/content/ko/docs/tasks/network/validate-dual-stack.md @@ -0,0 +1,158 @@ +--- +min-kubernetes-server-version: v1.16 +title: IPv4/IPv6 이중 스택 검증 +content_template: templates/task +--- + +{{% capture overview %}} +이 문서는 IPv4/IPv6 이중 스택이 활성화된 쿠버네티스 클러스터들을 어떻게 검증하는지 설명한다. +{{% /capture %}} + +{{% capture prerequisites %}} + +* 이중 스택 네트워킹을 위한 제공자 지원 (클라우드 제공자 또는 기타 제공자들은 라우팅 가능한 IPv4/IPv6 네트워크 인터페이스를 제공하는 쿠버네티스 노드들을 제공해야 한다.) +* 이중 스택을 지원하는 [네트워크 플러그인](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (예. Kubenet 또는 Calico) +* IPVS 모드로 구동되는 Kube-proxy +* [이중 스택 활성화](/ko/docs/concepts/services-networking/dual-stack/) 클러스터 + +{{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + +## 어드레싱 검증 + +### 노드 어드레싱 검증 + +각각의 이중 스택 노드는 단일 IPv4 블록 및 단일 IPv6 블록을 할당받아야 한다. IPv4/IPv6 파드 주소 범위를 다음 커맨드를 실행하여 검증한다. 샘플 노드 이름을 클러스터 내 검증된 이중 스택 노드로 대체한다. 본 예제에서, 노드 이름은 `k8s-linuxpool1-34450317-0` 이다. + +```shell +kubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .spec.podCIDRs}}{{printf "%s\n" .}}{{end}}' +``` +``` +10.244.1.0/24 +a00:100::/24 +``` +단일 IPv4 블록과 단일 IPv6 블록이 할당되어야 한다. + +노드가 IPv4 및 IPv6 인터페이스를 가지고 있는지 검증한다. (노드 이름을 클러스터의 검증된 노드로 대체한다. 본 예제에서 노드 이름은 k8s-linuxpool1-34450317-0) 이다. +```shell +kubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf "%s: %s \n" .type .address}}{{end}}' +``` +``` +Hostname: k8s-linuxpool1-34450317-0 +InternalIP: 10.240.0.5 +InternalIP: 2001:1234:5678:9abc::5 +``` + +### 파드 어드레싱 검증 + +파드가 IPv4 및 IPv6 주소를 할당받았는지 검증한다. (파드 이름을 클러스터에서 검증된 파드로 대체한다. 본 예제에서 파드 이름은 pod01 이다.) +```shell +kubectl get pods pod01 -o go-template --template='{{range .status.podIPs}}{{printf "%s \n" .ip}}{{end}}' +``` +``` +10.244.1.4 +a00:100::4 +``` + +`status.podIPs` fieldPath를 통한 다운워드(downward) API로 파드 IP들을 검증할 수도 있다. 다음 스니펫은 컨테이너 내 `MY_POD_IPS` 라는 환경 변수를 통해 파드 IP들을 어떻게 노출시킬 수 있는지 보여준다. + +``` + env: + - name: MY_POD_IPS + valueFrom: + fieldRef: + fieldPath: status.podIPs +``` + +다음 커맨드는 컨테이너 내 `MY_POD_IPS` 환경 변수의 값을 출력한다. 해당 값은 파드의 IPv4 및 IPv6 주소를 나타내는 쉼표로 구분된 목록이다. +```shell +kubectl exec -it pod01 -- set | grep MY_POD_IPS +``` +``` +MY_POD_IPS=10.244.1.4,a00:100::4 +``` + +파드의 IP 주소는 또한 컨테이너 내 `/etc/hosts` 에 적힐 것이다. 
다음 커맨드는 이중 스택 파드의 `/etc/hosts` 에 cat을 실행시킨다. 출력 값을 통해 파드의 IPv4 및 IPv6 주소 모두 검증할 수 있다. + +```shell +kubectl exec -it pod01 -- cat /etc/hosts +``` +``` +# Kubernetes-managed hosts file. +127.0.0.1 localhost +::1 localhost ip6-localhost ip6-loopback +fe00::0 ip6-localnet +fe00::0 ip6-mcastprefix +fe00::1 ip6-allnodes +fe00::2 ip6-allrouters +10.244.1.4 pod01 +a00:100::4 pod01 +``` + +## 서비스 검증 + +`ipFamily` 필드 세트 없이 다음 서비스를 생성한다. 필드가 구성되지 않았으면 서비스는 kube-controller-manager의 `--service-cluster-ip-range` 플래그를 통해 설정된 범위 내 첫 IP를 할당받는다. + +{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} + +해당 서비스의 YAML을 보면, 서비스의 `ipFamily` 필드가 kube-controller-manager의 `--service-cluster-ip-range` 플래그를 통해 첫 번째 설정된 범위의 주소 패밀리를 반영하도록 설정되어 있음을 확인할 수 있다. + +```shell +kubectl get svc my-service -o yaml +``` + +```yaml +apiVersion: v1 +kind: Service +metadata: + creationTimestamp: "2019-09-03T20:45:13Z" + labels: + app: MyApp + name: my-service + namespace: default + resourceVersion: "485836" + selfLink: /api/v1/namespaces/default/services/my-service + uid: b6fa83ef-fe7e-47a3-96a1-ac212fa5b030 +spec: + clusterIP: 10.0.29.179 + ipFamily: IPv4 + ports: + - port: 80 + protocol: TCP + targetPort: 9376 + selector: + app: MyApp + sessionAffinity: None + type: ClusterIP +status: + loadBalancer: {} +``` + +`ipFamily` 필드를 `IPv6`로 설정하여 다음의 서비스를 생성한다. + +{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}} + +서비스가 IPv6 주소 블록에서 클러스터 IP 주소를 할당받는 것을 검증한다. 그리고 나서 IP 및 포트로 서비스 접근이 가능한지 검증할 수 있다. +``` + kubectl get svc -l app=MyApp +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-service ClusterIP fe80:20d::d06b 80/TCP 9s +``` + +### 이중 스택 로드 밸런싱 서비스 생성 + +만약 클라우드 제공자가 IPv6 기반 외부 로드 밸런서 구성을 지원한다면 `ipFamily` 필드를 `IPv6`로, `type` 필드를 `LoadBalancer` 로 설정하여 다음의 서비스를 생성한다. + +{{< codenew file="service/networking/dual-stack-ipv6-lb-svc.yaml" >}} + +서비스가 IPv6 주소 블록에서 `CLUSTER-IP` 주소 및 `EXTERNAL-IP` 주소를 할당받는지 검증한다. 그리고 나서 IP 및 포트로 서비스 접근이 가능한지 검증할 수 있다. +``` + kubectl get svc -l app=MyApp +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-service ClusterIP fe80:20d::d06b 2001:db8:f100:4002::9d37:c0d7 80:31868/TCP 30s +``` + +{{% /capture %}} diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 6b421f5aed9e7..b847a0d691db2 100644 --- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -113,7 +113,7 @@ kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/ Hit enter for command prompt -while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done +while true; do wget -q -O- http://php-apache; done ``` 실행 후, 약 1분 정도 후에 CPU 부하가 올라가는 것을 볼 수 있다. 
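참고로, 부하가 HPA에 반영되는 과정을 지켜보려면 별도의 터미널에서 아래와 같이 오토스케일러를 관찰할 수 있다. HPA 이름 `php-apache` 는 이 연습에서 앞서 생성한 오브젝트를 그대로 사용한다고 가정한 예시이다.

```shell
# 별도 터미널에서 HPA가 목표 CPU 사용률에 맞춰 레플리카 수를 조정하는 것을 관찰한다
kubectl get hpa php-apache --watch
```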
@@ -197,7 +197,6 @@ apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: php-apache - namespace: default spec: scaleTargetRef: apiVersion: apps/v1 @@ -294,7 +293,6 @@ apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: php-apache - namespace: default spec: scaleTargetRef: apiVersion: apps/v1 diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md index 136ff556e917d..c276257bfad91 100644 --- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -199,13 +199,12 @@ Horizontal Pod Autoscaler는 모든 API 리소스와 마찬가지로 `kubectl` ## 롤링 업데이트 중 오토스케일링 -현재 쿠버네티스에서는 레플리케이션 컨트롤러를 직접 관리하거나, -기본 레플리카 셋를 관리하는 디플로이먼트 오브젝트를 사용하여 [롤링 업데이트](/docs/tasks/run-application/rolling-update-replication-controller/)를 수행 할 수 있다. +현재 쿠버네티스에서는 기본 레플리카 셋를 관리하는 디플로이먼트 오브젝트를 사용하여 롤링 업데이트를 수행할 수 있다. Horizontal Pod Autoscaler는 후자의 방법을 지원한다. Horizontal Pod Autoscaler는 디플로이먼트 오브젝트에 바인딩되고, 디플로이먼트 오브젝트를 위한 크기를 설정하며, 디플로이먼트는 기본 레플리카 셋의 크기를 결정한다. Horizontal Pod Autoscaler는 레플리케이션 컨트롤러를 직접 조작하는 롤링 업데이트에서 작동하지 않는다. -즉, Horizontal Pod Autoscaler를 레플리케이션 컨트롤러에 바인딩하고 롤링 업데이트를 수행할 수ㄴ 없다. (예 : `kubectl rolling-update`) +즉, Horizontal Pod Autoscaler를 레플리케이션 컨트롤러에 바인딩하고 롤링 업데이트를 수행할 수 없다. (예 : `kubectl rolling-update`) 작동하지 않는 이유는 롤링 업데이트에서 새 레플리케이션 컨트롤러를 만들 때, Horizontal Pod Autoscaler가 새 레플리케이션 컨트롤러에 바인딩되지 않기 때문이다. @@ -291,7 +290,7 @@ API에 접속하려면 클러스터 관리자는 다음을 확인해야 한다. ## 구성가능한 스케일링 동작 지원 -[v1.17](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md) +[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md) 부터 `v2beta2` API는 HPA `behavior` 필드를 통해 스케일링 동작을 구성할 수 있다. 동작은 `behavior` 필드 아래의 `scaleUp` 또는 `scaleDown` diff --git a/content/ko/docs/tasks/tools/install-minikube.md b/content/ko/docs/tasks/tools/install-minikube.md index 82a899bba0866..41ff7092ec668 100644 --- a/content/ko/docs/tasks/tools/install-minikube.md +++ b/content/ko/docs/tasks/tools/install-minikube.md @@ -26,7 +26,7 @@ grep -E --color 'vmx|svm' /proc/cpuinfo {{% tab name="맥OS" %}} 맥OS에서 가상화 지원 여부를 확인하려면, 아래 명령어를 터미널에서 실행한다. ``` -sysctl -a | grep -E --color 'machdep.cpu.features|VMX' +sysctl -a | grep -E --color 'machdep.cpu.features|VMX' ``` 만약 출력 중에 (색상으로 강조된) `VMX`를 볼 수 있다면, VT-x 기능이 머신에서 활성화된 것이다. {{% /tab %}} @@ -74,7 +74,7 @@ kubectl이 설치되었는지 확인한다. kubectl은 [kubectl 설치하고 설 • [VirtualBox](https://www.virtualbox.org/wiki/Downloads) -Minikube는 쿠버네티스 컴포넌트를 VM이 아닌 호스트에서도 동작하도록 `--vm-driver=none` 옵션도 지원한다. +Minikube는 쿠버네티스 컴포넌트를 VM이 아닌 호스트에서도 동작하도록 `--driver=none` 옵션도 지원한다. 이 드라이버를 사용하려면 [도커](https://www.docker.com/products/docker-desktop) 와 Linux 환경이 필요하지만, 하이퍼바이저는 필요하지 않다. 데비안(Debian) 또는 파생된 배포판에서 `none` 드라이버를 사용하는 경우, @@ -83,7 +83,7 @@ Minikube에서는 동작하지 않는 스냅 패키지 대신 도커용 `.deb` {{< caution >}} `none` VM 드라이버는 보안과 데이터 손실 이슈를 일으킬 수 있다. -`--vm-driver=none` 을 사용하기 전에 [이 문서](https://minikube.sigs.k8s.io/docs/reference/drivers/none/)를 참조해서 더 자세한 내용을 본다. +`--driver=none` 을 사용하기 전에 [이 문서](https://minikube.sigs.k8s.io/docs/reference/drivers/none/)를 참조해서 더 자세한 내용을 본다. {{< /caution >}} Minikube는 도커 드라이브와 비슷한 `vm-driver=podman` 도 지원한다. 슈퍼사용자 권한(root 사용자)으로 실행되는 Podman은 컨테이너가 시스템에서 사용 가능한 모든 기능에 완전히 접근할 수 있는 가장 좋은 방법이다. @@ -214,12 +214,12 @@ Minikube 설치를 마친 후, 현재 CLI 세션을 닫고 재시작한다. 
Mini {{< note >}} -`minikube start` 시 `--vm-driver` 를 설정하려면, 아래에 `` 로 소문자로 언급된 곳에 설치된 하이퍼바이저의 이름을 입력한다. `--vm-driver` 값의 전체 목록은 [VM driver 문서에서 지정하기](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)에서 확인할 수 있다. +`minikube start` 시 `--driver` 를 설정하려면, 아래에 `` 로 소문자로 언급된 곳에 설치된 하이퍼바이저의 이름을 입력한다. `--driver` 값의 전체 목록은 [VM driver 문서에서 지정하기](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver)에서 확인할 수 있다. {{< /note >}} ```shell -minikube start --vm-driver= +minikube start --driver= ``` `minikube start` 가 완료되면, 아래 명령을 실행해서 클러스터의 상태를 확인한다. diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md index 149a2d014996f..7334c6b0c707c 100644 --- a/content/ko/docs/tutorials/hello-minikube.md +++ b/content/ko/docs/tutorials/hello-minikube.md @@ -78,7 +78,7 @@ Katacode는 무료로 브라우저에서 쿠버네티스 환경을 제공한다. 파드는 제공된 Docker 이미지를 기반으로 한 컨테이너를 실행한다. ```shell - kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node + kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 ``` 2. 디플로이먼트 보기 diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 33a8f8dccb13d..9f5711e5c7ade 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -31,7 +31,7 @@

      쿠버네티스 디플로이먼트

      일단 쿠버네티스 클러스터를 구동시키면, 그 위에 컨테이너화된 애플리케이션을 배포할 수 있다. 그러기 위해서, 쿠버네티스 디플로이먼트 설정을 만들어야 한다. 디플로이먼트는 쿠버네티스가 애플리케이션의 인스턴스를 어떻게 생성하고 업데이트해야 하는지를 지시한다. 디플로이먼트가 만들어지면, - 쿠버네티스 마스터가 해당 애플리케이션 인스턴스를 클러스터의 개별 노드에 스케줄한다. + 쿠버네티스 마스터가 해당 디플로이먼트에 포함된 애플리케이션 인스턴스가 클러스터의 개별 노드에서 실행되도록 스케줄한다.

      애플리케이션 인스턴스가 생성되면, 쿠버네티스 디플로이먼트 컨트롤러는 지속적으로 이들 인스턴스를 diff --git a/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md b/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md index 59fdbbc7e364a..f5d3cf7b5cc92 100644 --- a/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/ko/docs/tutorials/stateful-application/basic-stateful-set.md @@ -18,7 +18,7 @@ weight: 10 * [파드](/docs/user-guide/pods/single-container/) * [클러스터 DNS(Cluster DNS)](/ko/docs/concepts/services-networking/dns-pod-service/) * [헤드리스 서비스(Headless Services)](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스) -* [퍼시스턴트볼륨(PersistentVolumes)](/docs/concepts/storage/persistent-volumes/) +* [퍼시스턴트볼륨(PersistentVolumes)](/ko/docs/concepts/storage/persistent-volumes/) * [퍼시턴트볼륨 프로비저닝](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/) * [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/) * [kubectl CLI](/docs/user-guide/kubectl/) @@ -257,7 +257,7 @@ NAME STATUS VOLUME CAPACITY ACCE www-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO 48s www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 48s ``` -스테이트풀셋 컨트롤러는 2개의 [퍼시스턴트볼륨](/docs/concepts/storage/persistent-volumes/)에 +스테이트풀셋 컨트롤러는 2개의 [퍼시스턴트볼륨](/ko/docs/concepts/storage/persistent-volumes/)에 묶인 2개의 퍼시스턴트볼륨클레임을 생성했다. 본 튜토리얼에서 사용되는 클러스터는 퍼시스턴트볼륨을 동적으로 프로비저닝하도록 설정되었으므로 생성된 퍼시스턴트볼륨도 자동으로 묶인다. diff --git a/content/ko/docs/tutorials/stateful-application/cassandra.md b/content/ko/docs/tutorials/stateful-application/cassandra.md index 72c090988c4b0..6ba3feeab5bf6 100644 --- a/content/ko/docs/tutorials/stateful-application/cassandra.md +++ b/content/ko/docs/tutorials/stateful-application/cassandra.md @@ -6,34 +6,31 @@ weight: 30 --- {{% capture overview %}} -이 튜토리얼은 네이티브 클라우드 [카산드라](http://cassandra.apache.org/)를 쿠버네티스에서 배포하는 방법을 소개한다. 이 예제에서 커스텀 카산드라 *시드 제공자(SeedProvider)* 는 카산드라가 클러스터에 조인한 새 카산드라 노드를 발견할 수 있게 한다. +이 튜토리얼은 쿠버네티스에서 [아파치 카산드라](http://cassandra.apache.org/)를 실행하는 방법을 소개한다. 데이터베이스인 카산드라는 데이터 내구성을 제공하기 위해 퍼시스턴트 스토리지가 필요하다(애플리케이션 _상태_). 이 예제에서 사용자 지정 카산드라 시드 공급자는 카산드라가 클러스터에 가입할 때 카산드라가 인스턴스를 검색할 수 있도록 한다. -*스테이트풀셋* 은 상태있는 애플리케이션을 클러스터 환경에서 쉽게 배포할 수 있게 한다. 이 튜토리얼에서 이용할 기능의 자세한 정보는 [*스테이트풀셋*](/ko/docs/concepts/workloads/controllers/statefulset/) 문서를 참조하자. +*스테이트풀셋* 은 상태있는 애플리케이션을 쿠버네티스 클러스터에 쉽게 배포할 수 있게 한다. 이 튜토리얼에서 이용할 기능의 자세한 정보는 [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/)을 참조한다. -**도커에서 카산드라** - -이 튜토리얼의 *파드* 는 구글의 [컨테이너 레지스트리](https://cloud.google.com/container-registry/docs/)에 -[`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) 이미지를 이용한다. -이 도커 이미지는 [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base)에 -기반하였고 OpenJDK 8을 포함한다. - -이 이미지는 아파치 데비안 리포의 표준 카산드라 설치본을 포함한다. -환경변수를 이용하여 `cassandra.yaml`에 삽입된 값을 바꿀 수 있다. - -| 환경 변수 | 기본값 | -| ------------- |:-------------: | -| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` | -| `CASSANDRA_NUM_TOKENS` | `32` | -| `CASSANDRA_RPC_ADDRESS` | `0.0.0.0` | +{{< note >}} +카산드라와 쿠버네티스는 클러스터 맴버라는 의미로 _노드_ 라는 용어를 사용한다. 이 +튜토리얼에서 스테이트풀셋에 속하는 파드는 카산드라 노드이며 카산드라 +클로스터의 맴버(_링_ 이라 함)이다. 해당 파드가 쿠버네티스 클러스터에서 실행될 때, +쿠버네티스 컨트롤 플레인은 해당 파드를 쿠버네티스 +{{< glossary_tooltip text="노드" term_id="node" >}}에 스케줄 한다. + +카산드라 노드가 시작되면 _시드 목록_ 을 사용해서 링에 있는 다른 노드 검색을 위한 +위한 부트스트랩을 한다. +이 튜토리얼에는 데이터베이스가 쿠버네티스 클러스터 내부에 나타날 때 새로운 카산드라 +파드를 검색할 수 있는 사용자 지정 카산드라 시드 공급자를 배포한다. 
+{{< /note >}} {{% /capture %}} {{% capture objectives %}} -* 카산드라 헤드리스 [*서비스*](/ko/docs/concepts/services-networking/service/)를 생성하고 검증한다. -* [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/)을 이용하여 카산드라 링을 생성한다. -* [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/)을 검증한다. -* [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/)을 수정한다. -* [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/)과 포함된 [파드](/ko/docs/concepts/workloads/pods/pod/)를 삭제한다. +* 카산드라 헤드리스 {{< glossary_tooltip text="Service" term_id="service" >}}를 생성하고 검증한다. +* {{< glossary_tooltip term_id="StatefulSet" >}}을 이용하여 카산드라 링을 생성한다. +* 스테이트풀셋을 검증한다. +* 스테이트풀셋을 수정한다. +* 스테이트풀셋과 포함된 {{< glossary_tooltip text="파드" term_id="pod" >}}를 삭제한다. {{% /capture %}} {{% capture prerequisites %}} @@ -53,7 +50,7 @@ weight: 30 ### 추가적인 Minikube 설정 요령 {{< caution >}} -[Minikube](/docs/getting-started-guides/minikube/)는 1024MB 메모리와 1개 CPU가 기본 설정이다. 이 튜토리얼에서 Minikube를 기본 리소스 설정으로 실행하면 리소스 부족 오류가 발생한다. 이런 오류를 피하려면 Minikube를 다음 설정으로 실행하자. +[Minikube](/docs/getting-started-guides/minikube/)는 1024MiB 메모리와 1개 CPU가 기본 설정이다. 이 튜토리얼에서 Minikube를 기본 리소스 설정으로 실행하면 리소스 부족 오류가 발생한다. 이런 오류를 피하려면 Minikube를 다음 설정으로 실행하자. ```shell minikube start --memory 5120 --cpus=4 @@ -63,22 +60,22 @@ minikube start --memory 5120 --cpus=4 {{% /capture %}} {{% capture lessoncontent %}} -## 카산드라 헤드리스 서비스 생성하기 +## 카산드라를 위한 헤드리스 서비스 생성하기 {#creating-a-cassandra-headless-service} -쿠버네티스 [서비스](/ko/docs/concepts/services-networking/service/)는 동일 작업을 수행하는 [파드](/ko/docs/concepts/workloads/pods/pod/)의 집합을 기술한다. +쿠버네티스 에서 {{< glossary_tooltip text="서비스" term_id="service" >}}는 동일 작업을 수행하는 {{< glossary_tooltip text="파드" term_id="pod" >}}의 집합을 기술한다. -다음의 `서비스`는 쿠버네티스 클러스터에서 카산드라 파드와 클라이언트 간에 DNS 찾아보기 용도로 사용한다. +다음의 서비스는 클러스터에서 카산드라 파드와 클라이언트 간에 DNS 찾아보기 용도로 사용한다. {{< codenew file="application/cassandra/cassandra-service.yaml" >}} -1. 다운로드 받은 매니페스트 파일 디렉터리에 터미널 윈도우를 열자. -1. `cassandra-service.yaml` 파일에서 카산드라 스테이트풀셋 노드를 모두 추적하는 서비스를 생성한다. +`cassandra-service.yaml` 파일에서 카산드라 스테이트풀셋 노드를 모두 추적하는 서비스를 생성한다. + +```shell +kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml +``` - ```shell - kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml - ``` -### 검증하기 (선택) +### 검증하기(선택) {#validating} 카산드라 서비스 살펴보기 @@ -105,12 +102,21 @@ cassandra ClusterIP None 9042/TCP 45s {{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}} -1. 필요하면 스테이트풀셋 갱신 -1. `cassandra-statefulset.yaml` 파일로 카산드라 스테이트풀셋 생성 +`cassandra-statefulset.yaml` 파일로 카산드라 스테이트풀셋 생성 + +```shell +# cassandra-statefulset.yaml을 수정하지 않은 경우에 이것을 사용한다. +kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml +``` + +클러스터에 맞게 `cassandra-statefulset.yaml` 를 수정해야 하는 경우 다음을 다운로드 한 다음 +수정된 버전을 저장한 폴더에서 해당 매니페스트를 적용한다. +https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml +```shell +# cassandra-statefulset.yaml을 로컬에서 수정한 경우에 사용한다. +kubectl apply -f cassandra-statefulset.yaml +``` - ```shell - kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml - ``` ## 카산드라 스테이트풀셋 검증하기 @@ -120,7 +126,7 @@ cassandra ClusterIP None 9042/TCP 45s kubectl get statefulset cassandra ``` - 응답은 다음과 같다. + 응답은 다음과 유사하다. ``` NAME DESIRED CURRENT AGE @@ -135,7 +141,7 @@ cassandra ClusterIP None 9042/TCP 45s kubectl get pods -l="app=cassandra" ``` - 응답은 다음과 같다. + 응답은 다음과 유사하다. 
```shell NAME READY STATUS RESTARTS AGE @@ -143,7 +149,8 @@ cassandra ClusterIP None 9042/TCP 45s cassandra-1 0/1 ContainerCreating 0 8s ``` - 모든 3개 파드가 배포되기까지 몇 분이 소요될 수 있다. 배포 후에는 같은 명령은 다음같이 응답한다. + 모든 3개 파드가 배포되기까지 몇 분이 소요될 수 있다. 배포 후, 동일 명령은 다음과 유사하게 + 응답한다. ``` NAME READY STATUS RESTARTS AGE @@ -152,7 +159,8 @@ cassandra ClusterIP None 9042/TCP 45s cassandra-2 1/1 Running 0 8m ``` -3. 링의 상태를 보여주는 카산드라 [nodetool](https://wiki.apache.org/cassandra/NodeTool)을 실행하자. +3. 첫 번째 파드 내부에 링의 상태를 보여주는 카산드라 + [nodetool](https://cwiki.apache.org/confluence/display/CASSANDRA2/NodeTool)을 실행하자. ```shell kubectl exec -it cassandra-0 -- nodetool status @@ -181,14 +189,14 @@ cassandra ClusterIP None 9042/TCP 45s kubectl edit statefulset cassandra ``` - 이 명령은 터미널에서 편집기를 연다. 변경해야할 행은 `replicas` 필드이다. 다음 예제는 `StatefulSet` 파일에서 발췌했다. + 이 명령은 터미널에서 편집기를 연다. 변경해야할 행은 `replicas` 필드이다. 다음 예제는 스테이트풀셋 파일에서 발췌했다. ```yaml # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # - apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 + apiVersion: apps/v1 kind: StatefulSet metadata: creationTimestamp: 2016-08-13T18:40:58Z @@ -205,7 +213,7 @@ cassandra ClusterIP None 9042/TCP 45s 1. 레플리카 개수를 4로 바꾸고, 매니페스트를 저장한다. - The `StatefulSet` now contains 4 Pods. + 스테이트풀셋은 4개의 파드를 실행하기 위해 스케일 한다. 1. 검증하기 위해 카산드라 스테이트풀셋을 살펴보자 @@ -213,7 +221,7 @@ cassandra ClusterIP None 9042/TCP 45s kubectl get statefulset cassandra ``` - 결과는 다음과 같다. + 결과는 다음과 유사하다. ``` NAME DESIRED CURRENT AGE @@ -229,22 +237,39 @@ cassandra ClusterIP None 9042/TCP 45s 스토리지 클래스와 리클레임 정책에 따라 *퍼시스턴스볼륨클레임* 을 삭제하면 그와 연관된 볼륨도 삭제될 수 있다. 볼륨 요청이 삭제되어도 데이터를 접근할 수 있다고 절대로 가정하지 말자. {{< /warning >}} -1. 다음 명령어(한 줄로 연결된)를 실행하여 카산드라 `스테이트풀셋`을 모두 제거하자. +1. 다음 명령어(한 줄로 연결된)를 실행하여 카산드라 스테이트풀셋을 모두 제거하자. ```shell - grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \ + grace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \ && kubectl delete statefulset -l app=cassandra \ - && echo "Sleeping $grace" \ + && echo "Sleeping ${grace} seconds" 1>&2 \ && sleep $grace \ - && kubectl delete pvc -l app=cassandra + && kubectl delete persistentvolumeclaim -l app=cassandra ``` -1. 다음 명령어를 실행하여 카산드라 서비스를 제거하자. +1. 다음 명령어를 실행하여 카산드라에 대해 설정한 서비스를 제거하자. ```shell kubectl delete service -l app=cassandra ``` +## 카산드라 컨테이너 환경 변수 + +이 튜토리얼의 *파드* 는 구글의 [컨테이너 레지스트리](https://cloud.google.com/container-registry/docs/)에 +[`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) 이미지를 이용한다. +이 도커 이미지는 [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base)에 +기반하였고 OpenJDK 8을 포함한다. + +이 이미지는 아파치 데비안 리포의 표준 카산드라 설치본을 포함한다. +환경변수를 이용하여 `cassandra.yaml`에 삽입된 값을 바꿀 수 있다. + +| 환경 변수 | 기본값 | +| ------------- |:-------------: | +| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` | +| `CASSANDRA_NUM_TOKENS` | `32` | +| `CASSANDRA_RPC_ADDRESS` | `0.0.0.0` | + + {{% /capture %}} {{% capture whatsnext %}} @@ -254,4 +279,3 @@ cassandra ClusterIP None 9042/TCP 45s * 커스텀 [시드 제공자 설정](https://git.k8s.io/examples/cassandra/java/README.md)를 살펴본다. 
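참고로, 위의 `카산드라 컨테이너 환경 변수` 절의 표에 나온 변수들이 스테이트풀셋의 컨테이너 사양에서 어떻게 지정되는지 보여주는 최소한의 발췌 예시이다. 컨테이너 이름과 값은 본 튜토리얼의 매니페스트를 가정한 것이므로 그대로 사용하기 전에 확인한다.

```yaml
# 스테이트풀셋 파드 템플릿에서 환경 변수를 재정의하는 발췌 예시이다.
# 값은 예시이며, 전체 매니페스트는 cassandra-statefulset.yaml을 참고한다.
containers:
  - name: cassandra
    image: gcr.io/google-samples/cassandra:v13
    env:
      - name: CASSANDRA_CLUSTER_NAME
        value: "K8Demo"
      - name: CASSANDRA_NUM_TOKENS
        value: "32"
      - name: CASSANDRA_RPC_ADDRESS
        value: "0.0.0.0"
      - name: CASSANDRA_SEEDS          # 링 부트스트랩에 사용되는 시드 목록
        value: "cassandra-0.cassandra.default.svc.cluster.local"
```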
{{% /capture %}} - diff --git a/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index ae4f74e90c1e2..6176a3457d31b 100644 --- a/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -12,7 +12,7 @@ card: {{% capture overview %}} 이 튜토리얼은 WordPress 사이트와 MySQL 데이터베이스를 Minikube를 이용하여 어떻게 배포하는지 보여준다. 애플리케이션 둘 다 퍼시스턴트 볼륨과 퍼시스턴트볼륨클레임을 데이터를 저장하기 위해 사용한다. -[퍼시스턴트볼륨](/docs/concepts/storage/persistent-volumes/)(PV)는 관리자가 수동으로 프로비저닝한 클러스터나 쿠버네티스 [스토리지클래스](/docs/concepts/storage/storage-classes)를 이용해 동적으로 프로비저닝된 저장소의 일부이다. [퍼시스턴트볼륨클레임](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)(PVC)은 PV로 충족할 수 있는 사용자에 의한 스토리지 요청이다. 퍼시스턴트볼륨은 파드 라이프사이클과 독립적이며 재시작, 재스케줄링이나 파드를 삭제할 때에도 데이터를 보존한다. +[퍼시스턴트볼륨](/ko/docs/concepts/storage/persistent-volumes/)(PV)는 관리자가 수동으로 프로비저닝한 클러스터나 쿠버네티스 [스토리지클래스](/docs/concepts/storage/storage-classes)를 이용해 동적으로 프로비저닝된 저장소의 일부이다. [퍼시스턴트볼륨클레임](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트볼륨클레임)(PVC)은 PV로 충족할 수 있는 사용자에 의한 스토리지 요청이다. 퍼시스턴트볼륨은 파드 라이프사이클과 독립적이며 재시작, 재스케줄링이나 파드를 삭제할 때에도 데이터를 보존한다. {{< warning >}} 이 배포는 프로덕션 사용 예로는 적절하지 않은데 이는 단일 인스턴스의 WordPress와 MySQL을 이용했기 때문이다. 프로덕션이라면 [WordPress Helm Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress)로 배포하기를 고려해보자. diff --git a/content/ko/examples/admin/logging/fluentd-sidecar-config.yaml b/content/ko/examples/admin/logging/fluentd-sidecar-config.yaml new file mode 100644 index 0000000000000..eea1849b033fa --- /dev/null +++ b/content/ko/examples/admin/logging/fluentd-sidecar-config.yaml @@ -0,0 +1,25 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: fluentd-config +data: + fluentd.conf: | + + type tail + format none + path /var/log/1.log + pos_file /var/log/1.log.pos + tag count.format1 + + + + type tail + format none + path /var/log/2.log + pos_file /var/log/2.log.pos + tag count.format2 + + + + type google_cloud + diff --git a/content/ko/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml b/content/ko/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml new file mode 100644 index 0000000000000..b37b616e6f7c7 --- /dev/null +++ b/content/ko/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml @@ -0,0 +1,39 @@ +apiVersion: v1 +kind: Pod +metadata: + name: counter +spec: + containers: + - name: count + image: busybox + args: + - /bin/sh + - -c + - > + i=0; + while true; + do + echo "$i: $(date)" >> /var/log/1.log; + echo "$(date) INFO $i" >> /var/log/2.log; + i=$((i+1)); + sleep 1; + done + volumeMounts: + - name: varlog + mountPath: /var/log + - name: count-agent + image: k8s.gcr.io/fluentd-gcp:1.30 + env: + - name: FLUENTD_ARGS + value: -c /etc/fluentd-config/fluentd.conf + volumeMounts: + - name: varlog + mountPath: /var/log + - name: config-volume + mountPath: /etc/fluentd-config + volumes: + - name: varlog + emptyDir: {} + - name: config-volume + configMap: + name: fluentd-config diff --git a/content/ko/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml b/content/ko/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml new file mode 100644 index 0000000000000..87bd198cfdab7 --- /dev/null +++ b/content/ko/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml @@ -0,0 +1,38 @@ +apiVersion: v1 +kind: Pod +metadata: + name: counter +spec: + containers: + - name: count + 
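    # 이 컨테이너는 /var/log 아래에 1.log와 2.log 두 개의 로그 파일을 기록한다.
    # 아래의 사이드카 컨테이너(count-log-1, count-log-2)가 각 파일을 자신의 표준 출력으로 내보낸다.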
image: busybox + args: + - /bin/sh + - -c + - > + i=0; + while true; + do + echo "$i: $(date)" >> /var/log/1.log; + echo "$(date) INFO $i" >> /var/log/2.log; + i=$((i+1)); + sleep 1; + done + volumeMounts: + - name: varlog + mountPath: /var/log + - name: count-log-1 + image: busybox + args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log'] + volumeMounts: + - name: varlog + mountPath: /var/log + - name: count-log-2 + image: busybox + args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log'] + volumeMounts: + - name: varlog + mountPath: /var/log + volumes: + - name: varlog + emptyDir: {} diff --git a/content/ko/examples/admin/logging/two-files-counter-pod.yaml b/content/ko/examples/admin/logging/two-files-counter-pod.yaml new file mode 100644 index 0000000000000..6ebeb717a1892 --- /dev/null +++ b/content/ko/examples/admin/logging/two-files-counter-pod.yaml @@ -0,0 +1,26 @@ +apiVersion: v1 +kind: Pod +metadata: + name: counter +spec: + containers: + - name: count + image: busybox + args: + - /bin/sh + - -c + - > + i=0; + while true; + do + echo "$i: $(date)" >> /var/log/1.log; + echo "$(date) INFO $i" >> /var/log/2.log; + i=$((i+1)); + sleep 1; + done + volumeMounts: + - name: varlog + mountPath: /var/log + volumes: + - name: varlog + emptyDir: {} diff --git a/content/ko/examples/admin/resource/limit-mem-cpu-container.yaml b/content/ko/examples/admin/resource/limit-mem-cpu-container.yaml new file mode 100644 index 0000000000000..3c2b30f29ccef --- /dev/null +++ b/content/ko/examples/admin/resource/limit-mem-cpu-container.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: LimitRange +metadata: + name: limit-mem-cpu-per-container +spec: + limits: + - max: + cpu: "800m" + memory: "1Gi" + min: + cpu: "100m" + memory: "99Mi" + default: + cpu: "700m" + memory: "900Mi" + defaultRequest: + cpu: "110m" + memory: "111Mi" + type: Container diff --git a/content/ko/examples/admin/resource/limit-mem-cpu-pod.yaml b/content/ko/examples/admin/resource/limit-mem-cpu-pod.yaml new file mode 100644 index 0000000000000..0ce0f69ac8130 --- /dev/null +++ b/content/ko/examples/admin/resource/limit-mem-cpu-pod.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: LimitRange +metadata: + name: limit-mem-cpu-per-pod +spec: + limits: + - max: + cpu: "2" + memory: "2Gi" + type: Pod diff --git a/content/ko/examples/admin/resource/limit-memory-ratio-pod.yaml b/content/ko/examples/admin/resource/limit-memory-ratio-pod.yaml new file mode 100644 index 0000000000000..859fc20ecec38 --- /dev/null +++ b/content/ko/examples/admin/resource/limit-memory-ratio-pod.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: LimitRange +metadata: + name: limit-memory-ratio-pod +spec: + limits: + - maxLimitRequestRatio: + memory: 2 + type: Pod diff --git a/content/ko/examples/admin/resource/limit-range-pod-1.yaml b/content/ko/examples/admin/resource/limit-range-pod-1.yaml new file mode 100644 index 0000000000000..0457792af94c4 --- /dev/null +++ b/content/ko/examples/admin/resource/limit-range-pod-1.yaml @@ -0,0 +1,37 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox1 +spec: + containers: + - name: busybox-cnt01 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"] + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "200Mi" + cpu: "500m" + - name: busybox-cnt02 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"] + resources: + requests: + memory: "100Mi" + cpu: "100m" + - name: busybox-cnt03 + image: busybox + command: ["/bin/sh"] + args: 
["-c", "while true; do echo hello from cnt03; sleep 10;done"] + resources: + limits: + memory: "200Mi" + cpu: "500m" + - name: busybox-cnt04 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"] diff --git a/content/ko/examples/admin/resource/limit-range-pod-2.yaml b/content/ko/examples/admin/resource/limit-range-pod-2.yaml new file mode 100644 index 0000000000000..efac440269c6f --- /dev/null +++ b/content/ko/examples/admin/resource/limit-range-pod-2.yaml @@ -0,0 +1,37 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox2 +spec: + containers: + - name: busybox-cnt01 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"] + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "200Mi" + cpu: "500m" + - name: busybox-cnt02 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"] + resources: + requests: + memory: "100Mi" + cpu: "100m" + - name: busybox-cnt03 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"] + resources: + limits: + memory: "200Mi" + cpu: "500m" + - name: busybox-cnt04 + image: busybox + command: ["/bin/sh"] + args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"] diff --git a/content/ko/examples/admin/resource/limit-range-pod-3.yaml b/content/ko/examples/admin/resource/limit-range-pod-3.yaml new file mode 100644 index 0000000000000..8afdb6379cf61 --- /dev/null +++ b/content/ko/examples/admin/resource/limit-range-pod-3.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Pod +metadata: + name: busybox3 +spec: + containers: + - name: busybox-cnt01 + image: busybox + resources: + limits: + memory: "300Mi" + requests: + memory: "100Mi" diff --git a/content/ko/examples/admin/resource/pvc-limit-greater.yaml b/content/ko/examples/admin/resource/pvc-limit-greater.yaml new file mode 100644 index 0000000000000..2d92bf92b3121 --- /dev/null +++ b/content/ko/examples/admin/resource/pvc-limit-greater.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: pvc-limit-greater +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 5Gi diff --git a/content/ko/examples/admin/resource/pvc-limit-lower.yaml b/content/ko/examples/admin/resource/pvc-limit-lower.yaml new file mode 100644 index 0000000000000..ef819b6292049 --- /dev/null +++ b/content/ko/examples/admin/resource/pvc-limit-lower.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: pvc-limit-lower +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 500Mi diff --git a/content/ko/examples/admin/resource/storagelimits.yaml b/content/ko/examples/admin/resource/storagelimits.yaml new file mode 100644 index 0000000000000..7f597e4dfe9b1 --- /dev/null +++ b/content/ko/examples/admin/resource/storagelimits.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: LimitRange +metadata: + name: storagelimits +spec: + limits: + - type: PersistentVolumeClaim + max: + storage: 2Gi + min: + storage: 1Gi diff --git a/content/ko/examples/application/deployment.yaml b/content/ko/examples/application/deployment.yaml index 68ab8289b5a0f..2cd599218d01e 100644 --- a/content/ko/examples/application/deployment.yaml +++ b/content/ko/examples/application/deployment.yaml @@ -1,4 +1,4 @@ -apiVersion: apps/v1 # apps/v1beta2를 사용하는 1.9.0보다 더 이전의 버전용 +apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: 
Deployment metadata: name: nginx-deployment @@ -6,7 +6,7 @@ spec: selector: matchLabels: app: nginx - replicas: 2 # 템플릿에 매칭되는 파드 2개를 구동하는 디플로이먼트임 + replicas: 2 # tells deployment to run 2 pods matching the template template: metadata: labels: @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/examples/application/hpa/php-apache.yaml b/content/ko/examples/application/hpa/php-apache.yaml index c73ae7d631b58..f3f1ef5d4f912 100644 --- a/content/ko/examples/application/hpa/php-apache.yaml +++ b/content/ko/examples/application/hpa/php-apache.yaml @@ -2,7 +2,6 @@ apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: php-apache - namespace: default spec: scaleTargetRef: apiVersion: apps/v1 diff --git a/content/ko/examples/application/nginx-app.yaml b/content/ko/examples/application/nginx-app.yaml new file mode 100644 index 0000000000000..d00682e1fcbba --- /dev/null +++ b/content/ko/examples/application/nginx-app.yaml @@ -0,0 +1,34 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-nginx-svc + labels: + app: nginx +spec: + type: LoadBalancer + ports: + - port: 80 + selector: + app: nginx +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-nginx + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 diff --git a/content/ko/examples/application/simple_deployment.yaml b/content/ko/examples/application/simple_deployment.yaml index 10fa1ddf29999..d9c74af8c577b 100644 --- a/content/ko/examples/application/simple_deployment.yaml +++ b/content/ko/examples/application/simple_deployment.yaml @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/examples/application/update_deployment.yaml b/content/ko/examples/application/update_deployment.yaml index d53aa3e6d2fc8..2d7603acb956c 100644 --- a/content/ko/examples/application/update_deployment.yaml +++ b/content/ko/examples/application/update_deployment.yaml @@ -13,6 +13,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.11.9 # update the image + image: nginx:1.16.1 # update the image ports: - containerPort: 80 diff --git a/content/ko/examples/controllers/daemonset.yaml b/content/ko/examples/controllers/daemonset.yaml index 1bfa082833c72..f291b750c158b 100644 --- a/content/ko/examples/controllers/daemonset.yaml +++ b/content/ko/examples/controllers/daemonset.yaml @@ -15,6 +15,8 @@ spec: name: fluentd-elasticsearch spec: tolerations: + # this toleration is to have the daemonset runnable on master nodes + # remove it if your masters can't run pods - key: node-role.kubernetes.io/master effect: NoSchedule containers: diff --git a/content/ko/examples/controllers/nginx-deployment.yaml b/content/ko/examples/controllers/nginx-deployment.yaml index f7f95deebbb23..685c17aa68e1d 100644 --- a/content/ko/examples/controllers/nginx-deployment.yaml +++ b/content/ko/examples/controllers/nginx-deployment.yaml @@ -16,6 +16,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/examples/debug/counter-pod.yaml b/content/ko/examples/debug/counter-pod.yaml new file mode 100644 index 0000000000000..f997886386258 --- /dev/null +++ b/content/ko/examples/debug/counter-pod.yaml @@ -0,0 +1,10 @@ +apiVersion: 
v1 +kind: Pod +metadata: + name: counter +spec: + containers: + - name: count + image: busybox + args: [/bin/sh, -c, + 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] diff --git a/content/ko/examples/pods/pod-nginx-preferred-affinity.yaml b/content/ko/examples/pods/pod-nginx-preferred-affinity.yaml new file mode 100644 index 0000000000000..183ba9f014225 --- /dev/null +++ b/content/ko/examples/pods/pod-nginx-preferred-affinity.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + affinity: + nodeAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + preference: + matchExpressions: + - key: disktype + operator: In + values: + - ssd + containers: + - name: nginx + image: nginx + imagePullPolicy: IfNotPresent diff --git a/content/ko/examples/pods/pod-nginx-required-affinity.yaml b/content/ko/examples/pods/pod-nginx-required-affinity.yaml new file mode 100644 index 0000000000000..a3805eaa8d9c9 --- /dev/null +++ b/content/ko/examples/pods/pod-nginx-required-affinity.yaml @@ -0,0 +1,18 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: disktype + operator: In + values: + - ssd + containers: + - name: nginx + image: nginx + imagePullPolicy: IfNotPresent diff --git a/content/ko/examples/pods/pod-nginx-specific-node.yaml b/content/ko/examples/pods/pod-nginx-specific-node.yaml new file mode 100644 index 0000000000000..27ead0118a783 --- /dev/null +++ b/content/ko/examples/pods/pod-nginx-specific-node.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + nodeName: foo-node # 특정 노드에 파드 스케줄 + containers: + - name: nginx + image: nginx + imagePullPolicy: IfNotPresent diff --git a/content/ko/examples/pods/pod-with-toleration.yaml b/content/ko/examples/pods/pod-with-toleration.yaml new file mode 100644 index 0000000000000..79f2756a8ce5f --- /dev/null +++ b/content/ko/examples/pods/pod-with-toleration.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx + labels: + env: test +spec: + containers: + - name: nginx + image: nginx + imagePullPolicy: IfNotPresent + tolerations: + - key: "example-key" + operator: "Exists" + effect: "NoSchedule" diff --git a/content/ko/examples/pods/resource/memory-request-limit-2.yaml b/content/ko/examples/pods/resource/memory-request-limit-2.yaml new file mode 100644 index 0000000000000..99032c4fc2adc --- /dev/null +++ b/content/ko/examples/pods/resource/memory-request-limit-2.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Pod +metadata: + name: memory-demo-2 + namespace: mem-example +spec: + containers: + - name: memory-demo-2-ctr + image: polinux/stress + resources: + requests: + memory: "50Mi" + limits: + memory: "100Mi" + command: ["stress"] + args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"] diff --git a/content/ko/examples/pods/resource/memory-request-limit-3.yaml b/content/ko/examples/pods/resource/memory-request-limit-3.yaml new file mode 100644 index 0000000000000..9f089c4a7a2be --- /dev/null +++ b/content/ko/examples/pods/resource/memory-request-limit-3.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Pod +metadata: + name: memory-demo-3 + namespace: mem-example +spec: + containers: + - name: memory-demo-3-ctr + image: polinux/stress + resources: + limits: + memory: "1000Gi" + requests: + memory: "1000Gi" + command: ["stress"] + args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"] diff --git 
a/content/ko/examples/pods/resource/memory-request-limit.yaml b/content/ko/examples/pods/resource/memory-request-limit.yaml new file mode 100644 index 0000000000000..985b1308d9a00 --- /dev/null +++ b/content/ko/examples/pods/resource/memory-request-limit.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Pod +metadata: + name: memory-demo + namespace: mem-example +spec: + containers: + - name: memory-demo-ctr + image: polinux/stress + resources: + limits: + memory: "200Mi" + requests: + memory: "100Mi" + command: ["stress"] + args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"] diff --git a/content/ko/examples/policy/example-psp.yaml b/content/ko/examples/policy/example-psp.yaml new file mode 100644 index 0000000000000..7c7a19343f0c3 --- /dev/null +++ b/content/ko/examples/policy/example-psp.yaml @@ -0,0 +1,17 @@ +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: example +spec: + privileged: false # 특권을 가진 파드는 허용금지! + # 나머지는 일부 필수 필드를 채운다. + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + runAsUser: + rule: RunAsAny + fsGroup: + rule: RunAsAny + volumes: + - '*' diff --git a/content/ko/examples/policy/privileged-psp.yaml b/content/ko/examples/policy/privileged-psp.yaml new file mode 100644 index 0000000000000..915c8d37b5460 --- /dev/null +++ b/content/ko/examples/policy/privileged-psp.yaml @@ -0,0 +1,27 @@ +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: privileged + annotations: + seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' +spec: + privileged: true + allowPrivilegeEscalation: true + allowedCapabilities: + - '*' + volumes: + - '*' + hostNetwork: true + hostPorts: + - min: 0 + max: 65535 + hostIPC: true + hostPID: true + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'RunAsAny' + fsGroup: + rule: 'RunAsAny' diff --git a/content/ko/examples/policy/restricted-psp.yaml b/content/ko/examples/policy/restricted-psp.yaml new file mode 100644 index 0000000000000..cbaf2758c0095 --- /dev/null +++ b/content/ko/examples/policy/restricted-psp.yaml @@ -0,0 +1,48 @@ +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: restricted + annotations: + seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default' + apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default' + seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default' + apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' +spec: + privileged: false + # 루트로의 에스컬레이션을 방지하는데 필요하다. + allowPrivilegeEscalation: false + # 이것은 루트가 아닌 사용자 + 권한 에스컬레이션을 허용하지 않는 것으로 중복이지만, + # 심층 방어를 위해 이를 제공한다. + requiredDropCapabilities: + - ALL + # 기본 볼륨 유형을 허용한다. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + # 클러스터 관리자가 설정한 퍼시스턴트볼륨을 사용하는 것이 안전하다고 가정한다. + - 'persistentVolumeClaim' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + # 루트 권한없이 컨테이너를 실행해야 한다. + rule: 'MustRunAsNonRoot' + seLinux: + # 이 정책은 노드가 SELinux가 아닌 AppArmor를 사용한다고 가정한다. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # 루트 그룹을 추가하지 않는다. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # 루트 그룹을 추가하지 않는다. 
+ - min: 1 + max: 65535 + readOnlyRootFilesystem: false diff --git a/content/ko/examples/service/networking/dual-stack-ipv6-lb-svc.yaml b/content/ko/examples/service/networking/dual-stack-ipv6-lb-svc.yaml new file mode 100644 index 0000000000000..b45f03fda6c93 --- /dev/null +++ b/content/ko/examples/service/networking/dual-stack-ipv6-lb-svc.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service + labels: + app: MyApp +spec: + ipFamily: IPv6 + type: LoadBalancer + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 diff --git a/content/ko/includes/task-tutorial-prereqs.md b/content/ko/includes/task-tutorial-prereqs.md index 70d21967734fd..b05dc3ac8c134 100644 --- a/content/ko/includes/task-tutorial-prereqs.md +++ b/content/ko/includes/task-tutorial-prereqs.md @@ -1,8 +1,8 @@ 쿠버네티스 클러스터가 필요하고, kubectl 커맨드-라인 툴이 클러스터와 -통신할 수 있도록 설정되어 있어야 합니다. +통신할 수 있도록 설정되어 있어야 한다. 만약, 아직 클러스터를 가지고 있지 않다면, -[Minikube](/docs/setup/learning-environment/minikube/)를 사용해서 만들거나, -다음의 쿠버네티스 플레이그라운드 중 하나를 사용할 수 있습니다: +[Minikube](/ko/docs/setup/learning-environment/minikube/)를 사용해서 생성하거나, +다음의 쿠버네티스 플레이그라운드 중 하나를 사용할 수 있다. * [Katacoda](https://www.katacoda.com/courses/kubernetes/playground) * [Play with Kubernetes](http://labs.play-with-k8s.com/) diff --git a/content/ko/partners/_index.html b/content/ko/partners/_index.html new file mode 100644 index 0000000000000..2ac7e6945c839 --- /dev/null +++ b/content/ko/partners/_index.html @@ -0,0 +1,91 @@ +--- +title: 파트너 +bigheader: 쿠버네티스 파트너 +abstract: 쿠버네티스 생태계의 성장 +class: gridPage +cid: partners +--- + +

      +
      +
      쿠버네티스는 파트너와 협력하여 다양하게 보완하는 플랫폼을 지원하는 강력하고 활기찬 코드베이스를 만들어갑니다.
      +
      +
      +
      +
      + 공인 쿠버네티스 서비스 공급자(Kubernetes Certified Service Providers, KCSP) +
      +
      기업들이 쿠버네티스를 성공적으로 채택하도록 도와주는 풍부한 경험을 가진 노련한 서비스 공급자입니다. +


      + +

      KCSP에 관심이 있으신가요? +
      +
      +
      +
      +
      + 공인 쿠버네티스 배포, 호스트된 플랫폼 그리고 설치 프로그램 +
      소프트웨어 적합성은 모든 벤더의 쿠버네티스 버전이 필요한 API를 지원하도록 보장합니다. +


      + +

      공인 쿠버네티스에 관심이 있으신가요? +
      +
      +
      +
      +
      쿠버네티스 교육 파트너(Kubernetes Training Partners, KTP)
      +
      클라우드 네이티브 기술 교육 경험이 풍부하고 노련한 교육 공급자입니다. +



      + +

      KTP에 관심이 있으신가요? +
      +
      +
      + + + +
      + + +
      + +
      +
      + + + + diff --git a/content/ko/training/_index.html b/content/ko/training/_index.html index 4e2d914420c76..ce8e17a5ca022 100644 --- a/content/ko/training/_index.html +++ b/content/ko/training/_index.html @@ -7,14 +7,22 @@ class: training --- -
      -
      -
      -

      클라우드 네이티브 커리어를 구축하세요

      -

      쿠버네티스는 클라우드 네이티브 무브먼트의 핵심입니다. 리눅스 재단이 제공하는 교육과 인증 프로그램을 통해 커리어에 투자하고, 쿠버네티스를 배우며, 클라우드 네이티브 프로젝트를 성공적으로 수행하세요.

      -
      +
      +
      +
      +
      + +
      +
      + +
      +
      +

      클라우드 네이티브 커리어를 구축하세요

      +

      쿠버네티스는 클라우드 네이티브 무브먼트의 핵심입니다. 리눅스 재단이 제공하는 교육과 인증 프로그램을 통해 커리어에 투자하고, 쿠버네티스를 배우며, 클라우드 네이티브 프로젝트를 성공적으로 수행하세요.

      +
      +
      -
      +
      @@ -60,7 +68,7 @@

      리눅스 재단과 함께 배우기

      리눅스 재단은 쿠버네티스 애플리케이션 개발과 운영 라이프사이클의 모든 측면에 대해 강사 주도와 자기 주도 학습 과정을 제공합니다.

      -

      +

      강좌 보기
      @@ -71,25 +79,27 @@

      리눅스 재단과 함께 배우기

      쿠버네티스 공인 자격 획득하기

      -
      -
      -
      - 공인 쿠버네티스 애플리케이션 개발자(Certified Kubernetes Application Developer, CKAD) -
      -

      공인 쿠버네티스 애플리케이션 개발자 시험은 사용자가 쿠버네티스의 클라우드 네이티브 애플리케이션을 디자인, 구축, 구성과 노출을 할 수 있음을 인증합니다.

      -
      - 인증으로 이동하기 -
      -
      -
      -
      -
      - 공인 쿠버네티스 관리자(Certified Kubernetes Administrator, CKA) -
      -

      공인 쿠버네티스 관리자 프로그램은 CKA가 쿠버네티스 관리자 직무을 수행할 수 있는 기술, 지식과 역량을 갖추고 있음을 보장합니다.

      -
      - 인증으로 이동하기 -
      +
      +
      +
      +
      + 공인 쿠버네티스 애플리케이션 개발자(Certified Kubernetes Application Developer, CKAD) +
      +

공인 쿠버네티스 애플리케이션 개발자 시험은 사용자가 쿠버네티스를 위한 클라우드 네이티브 애플리케이션을 디자인, 구축, 구성 및 노출할 수 있음을 인증합니다.

      +
      + 인증으로 이동하기 +
      +
      +
      +
      +
      + 공인 쿠버네티스 관리자(Certified Kubernetes Administrator, CKA) +
      +

공인 쿠버네티스 관리자(CKA) 프로그램은 CKA가 쿠버네티스 관리자 직무를 수행할 수 있는 기술, 지식과 역량을 갖추고 있음을 보장합니다.

      +
      + 인증으로 이동하기 +
      +
      diff --git a/content/pl/_index.html b/content/pl/_index.html index 4e55c429bd47d..f021fc51e9b5c 100644 --- a/content/pl/_index.html +++ b/content/pl/_index.html @@ -45,7 +45,7 @@

      The Challenges of Migrating 150+ Microservices to Kubernetes




      - Weź udział w KubeCon w Amsterdamie (lipiec/sierpień) + Weź udział w KubeCon w Amsterdamie 13-16.08.2020


      diff --git a/content/pl/docs/concepts/overview/components.md b/content/pl/docs/concepts/overview/components.md index 966f67b0041e1..bc79d8382da41 100644 --- a/content/pl/docs/concepts/overview/components.md +++ b/content/pl/docs/concepts/overview/components.md @@ -44,25 +44,28 @@ Komponenty warstwy sterowania mogą być uruchomione na dowolnej maszynie w klas Kontrolerami są: -* Node Controller: Odpowiada za rozpoznawanie i reagowanie na sytuacje, kiedy węzeł staje się z jakiegoś powodu niedostępny. -* Replication Controller: Odpowiada za utrzymanie prawidłowej liczby podów dla każdego obiektu typu *ReplicationController* w systemie. -* Endpoints Controller: Dostarcza informacji do obiektów typu *Endpoints* (tzn. łączy ze sobą Serwisy i Pody). -* Service Account & Token Controllers: Tworzy domyślne konta i tokeny dostępu API dla nowych przestrzeni nazw (*namespaces*). +* Node controller: Odpowiada za rozpoznawanie i reagowanie na sytuacje, kiedy węzeł staje się z jakiegoś powodu niedostępny. +* Replication controller: Odpowiada za utrzymanie prawidłowej liczby podów dla każdego obiektu typu *ReplicationController* w systemie. +* Endpoints controller: Dostarcza informacji do obiektów typu *Endpoints* (tzn. łączy ze sobą Serwisy i Pody). +* Service Account & Token controllers: Tworzy domyślne konta i tokeny dostępu API dla nowych przestrzeni nazw (*namespaces*). ### cloud-controller-manager -[cloud-controller-manager](/docs/tasks/administer-cluster/running-cloud-controller/) uruchamia kontroler, który komunikuje się z usługami dostawcy chmury, na których zbudowany jest klaster. Oprogramowanie cloud-controller-manager, wprowadzone w Kubernetes 1.6 ma status rozwojowy beta. +{{< glossary_definition term_id="cloud-controller-manager" length="short" >}} -cloud-controller-manager wykonuje tylko pętle sterowania konkretnych dostawców usług chmurowych. Wykonywanie tych pętli sterowania musi być wyłączone w kube-controller-manager. Wyłączenie następuje poprzez ustawienie opcji `--cloud-provider` jako `external` przy starcie kube-controller-manager. +cloud-controller-manager uruchamia jedynie kontrolery właściwe dla konkretnego dostawcy usług chmurowych. +Jeśli uruchamiasz Kubernetesa we własnym centrum komputerowym lub w środowisku szkoleniowym na swoim +komputerze, klaster nie będzie miał cloud controller managera. -cloud-controller-manager umożliwia rozwój oprogramowania dostawców usług chmurowych niezależnie od samego oprogramowania Kubernetes. W poprzednich wersjach, główny kod Kubernetes był zależny od kodu dostarczonego przez zewnętrznych dostawców różnych usług chmurowych. W przyszłych wydaniach, oprogramowanie związane z dostawcami chmurowymi będzie utrzymywane przez nich samych i podłączane do cloud-controller-managera w trakcie uruchamiana Kubernetes. +Podobnie jak w przypadku kube-controller-manager, cloud-controller-manager łączy w jednym pliku binarnym +kilka niezależnych pętli sterowania. Można go skalować horyzontalnie +(uruchomić więcej niż jedną instancję), aby poprawić wydajność lub zwiększyć odporność na awarie. 
-Następujące kontrolery zależą od dostawców usług chmurowych: +Następujące kontrolery mogą zależeć od dostawców usług chmurowych: - * Node Controller: Aby sprawdzić u dostawcy usługi chmurowej, czy węzeł został skasowany po tym, jak przestał odpowiadać - * Route Controller: Aby ustawić trasy *(routes)* w niższych warstwach infrastruktury chmurowej - * Service Controller: Aby tworzyć, aktualizować i kasować *cloud load balancers* - * Volume Controller: Aby tworzyć, podłączać i montować woluminy oraz zarządzać nimi przez dostawcę usług chmurowych +* Node controller: Aby sprawdzić u dostawcy usługi chmurowej, czy węzeł został skasowany po tym, jak przestał odpowiadać +* Route controller: Aby ustawić trasy *(routes)* w niższych warstwach infrastruktury chmurowej +* Service controller: Aby tworzyć, aktualizować i kasować *cloud load balancers* ## Składniki węzłów @@ -110,6 +113,6 @@ Mechanizm [logowania na poziomie klastra](/docs/concepts/cluster-administration/ {{% capture whatsnext %}} * Więcej o [Węzłach](/docs/concepts/architecture/nodes/) * Więcej o [Kontrolerach](/docs/concepts/architecture/controller/) -* Więcej o [kube-scheduler](/docs/concepts/scheduling/kube-scheduler/) +* Więcej o [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/) * Oficjalna [dokumentacja](https://etcd.io/docs/) etcd {{% /capture %}} diff --git a/content/pl/docs/home/_index.md b/content/pl/docs/home/_index.md index f6382cdd845de..cae9bb4dc523c 100644 --- a/content/pl/docs/home/_index.md +++ b/content/pl/docs/home/_index.md @@ -3,7 +3,7 @@ title: Kubernetes — Dokumentacja noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "Strona główna" main_menu: true weight: 10 diff --git a/content/pl/docs/reference/_index.md b/content/pl/docs/reference/_index.md index 41a213e5e30ac..a5ca04492f743 100644 --- a/content/pl/docs/reference/_index.md +++ b/content/pl/docs/reference/_index.md @@ -36,13 +36,15 @@ biblioteki to: * [JSONPath](/docs/reference/kubectl/jsonpath/) - Podręcznik składni [wyrażeń JSONPath](http://goessner.net/articles/JsonPath/) dla kubectl. * [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) - Narzędzie tekstowe do łatwego budowania klastra Kubernetes spełniającego niezbędne wymogi bezpieczeństwa. -## Dokumentacja konfiguracji +## Dokumentacja komponentów * [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - Główny agent działający na każdym węźle. Kubelet pobiera zestaw definicji PodSpecs i gwarantuje, że opisane przez nie kontenery poprawnie działają. * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API, które sprawdza poprawność i konfiguruje obiekty API, takie jak pody, serwisy czy kontrolery replikacji. * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Proces wykonujący główne pętle sterowania Kubernetes. * [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Przekazuje bezpośrednio dane przepływające w transmisji TCP/UDP lub dystrybuuje ruch TCP/UDP zgodnie ze schematem *round-robin* pomiędzy usługi back-endu. * [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler odpowiada za dostępność, wydajność i zasoby. 
+* [kube-scheduler Policies](/docs/reference/scheduling/policies) +* [kube-scheduler Profiles](/docs/reference/scheduling/profiles) ## Dokumentacja projektowa diff --git a/content/pl/docs/reference/glossary/cloud-controller-manager.md b/content/pl/docs/reference/glossary/cloud-controller-manager.md new file mode 100755 index 0000000000000..5d09fa4695d73 --- /dev/null +++ b/content/pl/docs/reference/glossary/cloud-controller-manager.md @@ -0,0 +1,23 @@ +--- +title: Cloud Controller Manager +id: cloud-controller-manager +date: 2018-04-12 +full_link: /docs/concepts/architecture/cloud-controller/ +short_description: > + Element warstwy sterowania, który integruje Kubernetesa z zewnętrznymi usługami chmurowymi. +aka: +tags: +- core-object +- architecture +- operation +--- +Element składowy {{< glossary_tooltip text="warstwy sterowania" term_id="control-plane" >}} Kubernetesa, +który zarządza usługami realizowanymi po stronie chmur obliczeniowych. Cloud controller manager umożliwia +połączenie Twojego klastra z API operatora usług chmurowych i rozdziela składniki operujące na platformie +chmurowej od tych, które dotyczą wyłącznie samego klastra. + + + +Dzięki rozdzieleniu logiki zarządzającej pomiędzy klaster Kubernetesa i leżącą poniżej infrastrukturę chmurową, +cloud-controller-manager umożliwia operatorom usług chmurowych na dostarczanie nowych funkcjonalności +niezależnie od cyklu wydawniczego głównego projektu Kubernetes. diff --git a/content/pl/docs/reference/glossary/container-runtime.md b/content/pl/docs/reference/glossary/container-runtime.md index 170d5002218af..f86c1c1d2c170 100644 --- a/content/pl/docs/reference/glossary/container-runtime.md +++ b/content/pl/docs/reference/glossary/container-runtime.md @@ -2,7 +2,7 @@ title: Container Runtime id: container-runtime date: 2019-06-05 -full_link: /docs/reference/generated/container-runtime +full_link: /docs/setup/production-environment/container-runtimes short_description: > *Container runtime* to oprogramowanie zajmujące się uruchamianiem kontenerów. diff --git a/content/pl/docs/reference/tools.md b/content/pl/docs/reference/tools.md index c7246957f5d01..de200d77d26e5 100644 --- a/content/pl/docs/reference/tools.md +++ b/content/pl/docs/reference/tools.md @@ -16,10 +16,6 @@ Kubernetes zawiera różne wbudowane narzędzia służące do pracy z systemem: [`kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) to narzędzie tekstowe do łatwej instalacji klastra Kubernetes w bezpiecznej konfiguracji, uruchamianego na infrastrukturze serwerów fizycznych, serwerów w chmurze bądź na maszynach wirtualnych (aktualnie w fazie rozwojowej alfa). -## Kubefed - -[`kubefed`](/docs/tasks/federation/set-up-cluster-federation-kubefed/) to narzędzie tekstowe do zarządzania klastrami w federacji. - ## Minikube [`minikube`](/docs/tasks/tools/install-minikube/) to narzędzie do łatwego uruchamiania lokalnego klastra Kubernetes na twojej stacji roboczej na potrzeby rozwoju oprogramowania lub prowadzenia testów. 
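Krótki szkic lokalnego przepływu pracy z użyciem opisanego powyżej narzędzia minikube (szkic zakłada, że minikube i kubectl są już zainstalowane):

```shell
minikube start            # uruchamia lokalny, jednowęzłowy klaster
kubectl cluster-info      # sprawdza, czy kubectl widzi klaster
kubectl get nodes         # węzeł minikube powinien mieć status Ready
minikube stop             # zatrzymuje klaster po zakończeniu pracy
```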
diff --git a/content/pl/docs/setup/_index.md b/content/pl/docs/setup/_index.md index 8875cd6afd8a4..bb08ded7ff363 100644 --- a/content/pl/docs/setup/_index.md +++ b/content/pl/docs/setup/_index.md @@ -36,18 +36,14 @@ Aby uruchomić klaster Kubernetes do nauki na lokalnym komputerze, skorzystaj z |Społeczność |Ekosystem | | ------------ | -------- | -| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) | -| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| -| | [Minishift](https://docs.okd.io/latest/minishift/)| +| [Minikube](/docs/setup/learning-environment/minikube/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| +| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Minishift](https://docs.okd.io/latest/minishift/)| | | [MicroK8s](https://microk8s.io/)| -| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | -| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)| -| | [k3s](https://k3s.io)| ## Środowisko produkcyjne {#srodowisko-produkcyjne} Wybierając rozwiązanie dla środowiska produkcyjnego musisz zdecydować, którymi poziomami zarządzania klastrem (_abstrakcjami_) chcesz zajmować się sam, a które będą realizowane po stronie zewnętrznego operatora. -Aby zapoznać się z listą dostawców posiadających [certyfikację Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes), odwiedź stronę "[Partnerzy](https://kubernetes.io/partners/#conformance)". +Na stronie [Partnerzy Kubernetes](https://kubernetes.io/partners/#conformance) znajdziesz listę dostawców posiadających [certyfikację Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes). {{% /capture %}} diff --git a/content/pl/docs/tutorials/hello-minikube.md b/content/pl/docs/tutorials/hello-minikube.md index 1a7da3cd3c99d..bee3dd0b658cc 100644 --- a/content/pl/docs/tutorials/hello-minikube.md +++ b/content/pl/docs/tutorials/hello-minikube.md @@ -77,7 +77,7 @@ Użycie Deploymentu to rekomendowana metoda zarządzania tworzeniem i skalowanie wykorzystując podany obraz Dockera. ```shell - kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node + kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 ``` 2. Sprawdź stan Deploymentu: diff --git a/content/pl/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/pl/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html index ff6cfe22ca136..019b6837fe15f 100644 --- a/content/pl/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html +++ b/content/pl/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -19,7 +19,7 @@
      - Do pracy z terminalem użyj wersji na desktop/tablet + Ekran jest za wąski do pracy z terminalem. Użyj wersji na desktop/tablet.
      diff --git a/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 37a0c9e25c053..5b90420c5a712 100644 --- a/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -29,8 +29,8 @@

      Cele

      Instalacje w Kubernetes

      Mając działający klaster Kubernetes, można na nim zacząć instalować aplikacje. - W tym celu należy skonfigurować Deployment. Deployment informuje Kubernetes, - jak tworzyć i aktualizować instancje Twojej aplikacji. Po stworzeniu Deploymentu, węzeł master Kubernetes + W tym celu należy skonfigurować Deployment. Deployment informuje Kubernetesa, + jak tworzyć i aktualizować instancje Twojej aplikacji. Po stworzeniu Deploymentu, węzeł master Kubernetesa zleca uruchomienie tej aplikacji na indywidualnych węzłach klastra.

      @@ -75,7 +75,7 @@

      Instalacja pierwszej aplikacji w Kubernetes

Do tworzenia Deploymentu i zarządzania nim służy polecenie linii komend, Kubectl. Kubectl używa Kubernetes API do komunikacji z klastrem. W tym module nauczysz się najczęściej używanych poleceń Kubectl niezbędnych do stworzenia Deploymentu, który uruchomi Twoje aplikacje na klastrze Kubernetes.
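Przykładowy szkic opisanego wyżej przepływu pracy (nazwa Deploymentu i obraz kontenera są tylko ilustracyjne):

```shell
# tworzy Deployment na podstawie podanego obrazu kontenera
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1

# wyświetla listę Deploymentów, aby potwierdzić, że został utworzony
kubectl get deployments
```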

      -

      Tworząc Deployment musisz określić obraz kontenera oraz liczbę replik, które mają być uruchomione. Te ustawienia możesz zmieniać później, aktualizując Deployment. Moduły 5 oraz 6 omawiają skalowanie i aktualizowanie Deploymentów.

      +

      Tworząc Deployment musisz określić obraz kontenera oraz liczbę replik, które mają być uruchomione. Te ustawienia możesz zmieniać później, aktualizując Deployment. Moduły 5 oraz 6 omawiają skalowanie i aktualizowanie Deploymentów.

      diff --git a/content/pl/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/pl/docs/tutorials/kubernetes-basics/explore/explore-intro.html index 9e66db80d6b1e..f436b35fc2376 100644 --- a/content/pl/docs/tutorials/kubernetes-basics/explore/explore-intro.html +++ b/content/pl/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -29,7 +29,7 @@

      Cele

      Pody Kubernetes

      -

      Po stworzeniu Deploymentu w Module 2, Kubernetes stworzył Pod, który "przechowuje" instancję Twojej aplikacji. Pod jest obiektem abstrakcyjnym Kubernetes, który reprezentuje grupę jednego bądź wielu kontenerów (takich jak Docker lub rkt) wraz ze wspólnymi zasobami dla tych kontenerów. Zasobami mogą być:

      +

      Po stworzeniu Deploymentu w Module 2, Kubernetes stworzył Pod, który "przechowuje" instancję Twojej aplikacji. Pod jest obiektem abstrakcyjnym Kubernetes, który reprezentuje grupę jednego bądź wielu kontenerów (takich jak Docker lub rkt) wraz ze wspólnymi zasobami dla tych kontenerów. Zasobami mogą być:

      • Współdzielona przestrzeń dyskowa, np. Volumes
      • Zasoby sieciowe, takie jak unikatowy adres IP klastra
      • @@ -108,7 +108,7 @@

        Schemat węzła

        Rozwiązywanie problemów przy pomocy kubectl

        -

        W module 2 używałeś narzędzia Kubectl. W module 3 będziemy go nadal używać, aby wydobyć informacje na temat zainstalowanych aplikacji i środowiska, w jakim działają. Najczęstsze operacje przeprowadzane są przy pomocy następujących poleceń kubectl:

        +

        W module 2 używałeś narzędzia Kubectl. W module 3 będziemy go nadal używać, aby wydobyć informacje na temat zainstalowanych aplikacji i środowiska, w jakim działają. Najczęstsze operacje przeprowadzane są przy pomocy następujących poleceń kubectl:

        • kubectl get - wyświetl informacje o zasobach
        • kubectl describe - pokaż szczegółowe informacje na temat konkretnego zasobu
        • diff --git a/content/pl/training/_index.html b/content/pl/training/_index.html index 37c521187f5f6..cc6579781d3dc 100644 --- a/content/pl/training/_index.html +++ b/content/pl/training/_index.html @@ -7,14 +7,22 @@ class: training --- -
          -
          -
          -

          Kariera Cloud Native

          -

          Kubernetes stanowi serce całego ruchu cloud native. Korzystając ze szkoleń i certyfikacji oferowanych przez Linux Foundation i naszych partnerów zainwestujesz w swoją karierę, nauczysz się korzystać z Kubernetesa i sprawisz, że Twoje projekty cloud native osiągną sukces.

          -
          +
          +
          +
          +
          + +
          +
          + +
          +
          +

          Kariera Cloud Native

          +

          Kubernetes stanowi serce całego ruchu cloud native. Korzystając ze szkoleń i certyfikacji oferowanych przez Linux Foundation i naszych partnerów zainwestujesz w swoją karierę, nauczysz się korzystać z Kubernetesa i sprawisz, że Twoje projekty cloud native osiągną sukces.

          +
          +
          -
          +
          @@ -25,7 +33,7 @@

          Darmowe kursy na edX

          - Wprowadzenie do Kubernetesa
           
          + Wprowadzenie do Kubernetesa
           

          Chcesz nauczyć się Kubernetesa? Oto solidne podstawy do poznania tego potężnego systemu zarządzania aplikacjami w kontenerach.


          @@ -60,8 +68,8 @@

          Nauka z Linux Foundation

Linux Foundation oferuje szkolenia prowadzone przez instruktora oraz szkolenia samodzielne obejmujące wszystkie aspekty rozwijania i zarządzania aplikacjami na Kubernetesie.

          -

          - Zobacz ofertę szkoleń +

          + Sprawdź ofertę szkoleń
          @@ -71,25 +79,27 @@

          Nauka z Linux Foundation

          Uzyskaj certyfikat Kubernetes

          -
          -
          -
          - Certified Kubernetes Application Developer (CKAD) -
          -

          Egzamin na certyfikowanego dewelopera aplikacji (Certified Kubernetes Application Developer) potwierdza umiejętności projektowania, budowania, konfigurowania i udostępniania "rdzennych" aplikacji dla Kubernetesa.

          -
          - Przejdź do certyfikacji -
          -
          -
          -
          -
          - Certified Kubernetes Administrator (CKA) -
          -

          Program certyfikowanego administratora Kubernetes (Certified Kubernetes Administrator) potwierdza umiejętności, wiedzę i kompetencje do podejmowania się zadań administracji Kubernetesem.

          -
          - Przejdź do certyfikacji -
          +
          +
          +
          +
          + Certified Kubernetes Application Developer (CKAD) +
          +

          Egzamin na certyfikowanego dewelopera aplikacji (Certified Kubernetes Application Developer) potwierdza umiejętności projektowania, budowania, konfigurowania i udostępniania "rdzennych" aplikacji dla Kubernetesa.

          +
          + Przejdź do certyfikacji +
          +
          +
          +
          +
          + Certified Kubernetes Administrator (CKA) +
          +

          Program certyfikowanego administratora Kubernetes (Certified Kubernetes Administrator) potwierdza umiejętności, wiedzę i kompetencje do podejmowania się zadań administracji Kubernetesem.

          +
          + Przejdź do certyfikacji +
          +
          diff --git a/content/pt/docs/concepts/architecture/controller.md b/content/pt/docs/concepts/architecture/controller.md new file mode 100644 index 0000000000000..4ab6e98cae751 --- /dev/null +++ b/content/pt/docs/concepts/architecture/controller.md @@ -0,0 +1,156 @@ +--- +title: Controladores +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +Em robótica e automação um _control loop_, ou em português _ciclo de controle_, é +um ciclo não terminado que regula o estado de um sistema. + +Um exemplo de ciclo de controle é um termostato de uma sala. + +Quando você define a temperatura, isso indica ao termostato +sobre o seu *estado desejado*. A temperatura ambiente real é o +*estado atual*. O termostato atua de forma a trazer o estado atual +mais perto do estado desejado, ligando ou desligando o equipamento. + +{{< glossary_definition term_id="controller" length="short">}} + +{{% /capture %}} + + +{{% capture body %}} + +## Padrão Controlador (Controller pattern) + +Um controlador rastreia pelo menos um tipo de recurso Kubernetes. +Estes [objetos](/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects) +têm um campo *spec* que representa o *estado desejado*. +O(s) controlador(es) para aquele recurso são responsáveis por trazer o *estado atual* +mais perto do *estado desejado*. + +O controlador pode executar uma ação ele próprio, ou, +o que é mais comum, no Kubernetes, o controlador envia uma mensagem para o +{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} (servidor de API) que tem +efeitos colaterais úteis. Você vai ver exemplos disto abaixo. + +### Controlador via API server + +O controlador {{< glossary_tooltip term_id="job" >}} é um exemplo de um +controlador Kubernetes embutido. Controladores embutidos gerem estados através da +interação com o *cluster API server*. + +*Job* é um recurso do Kubernetes que é executado em um +*{{< glossary_tooltip term_id="pod" >}}*, ou talvez vários *Pods*, com o objetivo de +executar uma tarefa e depois parar. + +(Uma vez [agendado](/docs/concepts/scheduling/), objetos *Pod* passam a fazer parte +do *estado desejado* para um kubelet. + +Quando o controlador *Job* observa uma nova tarefa ele garante que, +algures no seu *cluster*, os kubelets num conjunto de nós (*Nodes*) estão correndo o número +correto de *Pods* para completar o trabalho. +O controlador *Job* não corre *Pods* ou *containers* ele próprio. +Em vez disso, o controlador *Job* informa o *API server* para criar ou remover *Pods*. +Outros componentes do plano de controle +({{< glossary_tooltip text="control plane" term_id="control-plane" >}}) +atuam na nova informação (existem novos *Pods* para serem agendados e executados), +e eventualmente o trabalho é feito. + +Após ter criado um novo *Job*, o *estado desejado* é que esse Job seja completado. +O controlador *Job* faz com que o *estado atual* para esse *Job* esteja mais perto do seu +*estado desejado*: criando *Pods* que fazem o trabalho desejado para esse *Job* para que +o *Job* fique mais perto de ser completado. + +Controladores também atualizam os objetos que os configuram. +Por exemplo: assim que o trabalho de um *Job* está completo, +o controlador *Job* atualiza esse objeto *Job* para o marcar como `Finished` (terminado). + +(Isto é um pouco como alguns termostatos desligam uma luz para +indicar que a temperatura da sala está agora na temperatura que foi introduzida). 
+ +### Controle direto + +Em contraste com *Job*, alguns controladores necessitam de efetuar +mudanças fora do *cluster*. + +Por exemplo, se usar um ciclo de controle para garantir que existem +*{{< glossary_tooltip text="Nodes" term_id="node" >}}* suficientes +no seu *cluster*, então esse controlador necessita de algo exterior ao +*cluster* atual para configurar novos *Nodes* quando necessário. + +Controladores que interagem com estados externos encontram o seu estado desejado +a partir do *API server*, e então comunicam diretamente com o sistema externo para +trazer o *estado atual* mais próximo do desejado. + +(Existe um controlador que escala horizontalmente nós no seu *cluster*. +Veja [Escalamento automático do cluster](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling)) + +## Estado desejado versus atual {#desired-vs-current} + +Kubernetes tem uma visão *cloud-native* de sistemas e é capaz de manipular +mudanças constantes. + +O seu *cluster* pode mudar em qualquer momento à medida que as ações acontecem e +os ciclos de controle corrigem falhas automaticamente. Isto significa que, +potencialmente, o seu *cluster* nunca atinge um estado estável. + +Enquanto os controladores no seu *cluster* estiverem rodando e forem capazes de +fazer alterações úteis, não importa se o estado é estável ou se é instável. + +## Design + +Como um princípio do seu desenho, o Kubernetes usa muitos controladores onde cada +um gerencia um aspecto particular do estado do *cluster*. Comumente, um particular +ciclo de controle (controlador) usa uma espécie de recurso como o seu *estado desejado*, +e tem uma espécie diferente de recurso que o mesmo gere para garantir que esse *estado desejado* +é cumprido. + +É útil que haja controladores simples em vez de um conjunto monolítico de ciclos de controle +que estão interligados. Controladores podem falhar, então o Kubernetes foi desenhado para +permitir isso. + +Por exemplo: um controlador de *Jobs* rastreia objetos *Job* (para +descobrir novos trabalhos) e objetos *Pod* (para correr o *Jobs*, e então +ver quando o trabalho termina). Neste caso outra coisa cria os *Jobs*, +enquanto o controlador *Job* cria *Pods*. + +{{< note >}} +Podem existir vários controladores que criam ou atualizam a mesma espécie (kind) de objeto. +Atrás das cortinas, os controladores do Kubernetes garantem que eles apenas tomam +atenção aos recursos ligados aos seus recursos controladores. + +Por exemplo, você pode ter *Deployments* e *Jobs*; ambos criam *Pods*. +O controlador de *Job* não apaga os *Pods* que o seu *Deployment* criou, +porque existe informação ({{< glossary_tooltip term_id="label" text="labels" >}}) +que os controladores podem usar para diferenciar esses *Pods*. +{{< /note >}} + +## Formas de rodar controladores {#running-controllers} + +O Kubernetes vem com um conjunto de controladores embutidos que correm +dentro do {{< glossary_tooltip term_id="kube-controller-manager" >}}. +Estes controladores embutidos providenciam comportamentos centrais importantes. + +O controlador *Deployment* e o controlador *Job* são exemplos de controladores +que veem como parte do próprio Kubernetes (controladores "embutidos"). +O Kubernetes deixa você correr o plano de controle resiliente, para que se qualquer +um dos controladores embutidos falhar, outra parte do plano de controle assume +o trabalho. + +Pode encontrar controladores fora do plano de controle, para extender o Kubernetes. +Ou, se quiser, pode escrever um novo controlador você mesmo. 
+Pode correr o seu próprio controlador como um conjunto de *Pods*, +ou externo ao Kubernetes. O que encaixa melhor vai depender no que esse +controlador faz em particular. + +{{% /capture %}} + +{{% capture whatsnext %}} +* Leia mais sobre o [plano de controle do Kubernetes](/docs/concepts/#kubernetes-control-plane) +* Descubra alguns dos [objetos Kubernetes](/docs/concepts/#kubernetes-objects) básicos. +* Aprenda mais sobre [API do Kubernetes](/docs/concepts/overview/kubernetes-api/) +* Se pretender escrever o seu próprio controlador, veja [Padrões de Extensão](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) +{{% /capture %}} diff --git a/content/pt/docs/concepts/extend-kubernetes/_index.md b/content/pt/docs/concepts/extend-kubernetes/_index.md new file mode 100644 index 0000000000000..db8257b6259ac --- /dev/null +++ b/content/pt/docs/concepts/extend-kubernetes/_index.md @@ -0,0 +1,4 @@ +--- +title: Extendendo o Kubernetes +weight: 110 +--- diff --git a/content/pt/docs/concepts/extend-kubernetes/api-extension/_index.md b/content/pt/docs/concepts/extend-kubernetes/api-extension/_index.md new file mode 100644 index 0000000000000..a0aa0bf978226 --- /dev/null +++ b/content/pt/docs/concepts/extend-kubernetes/api-extension/_index.md @@ -0,0 +1,4 @@ +--- +title: Extendendo a API do Kubernetes +weight: 20 +--- diff --git a/content/pt/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/pt/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md new file mode 100644 index 0000000000000..281daaee5584f --- /dev/null +++ b/content/pt/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md @@ -0,0 +1,65 @@ +--- +title: Extendendo a API do Kubernetes com a camada de agregação +reviewers: +- lavalamp +- cheftako +- chenopis +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +A camada de agregação permite ao Kubernetes ser estendido com APIs adicionais, +para além do que é oferecido pelas APIs centrais do Kubernetes. +As APIs adicionais podem ser soluções prontas tal como o +[catálogo de serviços](/docs/concepts/extend-kubernetes/service-catalog/), +ou APIs que você mesmo desenvolva. + +A camada de agregação é diferente dos [Recursos Personalizados](/docs/concepts/extend-kubernetes/api-extension/custom-resources/), +que são uma forma de fazer o {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} +reconhecer novas espécies de objetos. + +{{% /capture %}} + +{{% capture body %}} + +## Camada de agregação + +A camada de agregação executa em processo com o kube-apiserver. +Até que um recurso de estensão seja registado, a camada de agregação +não fará nada. Para registar uma API, terá de adicionar um objeto *APIService* +que irá "reclamar" o caminho URL na API do Kubernetes. Nesta altura, a camada +de agregação procurará qualquer coisa enviada para esse caminho da API +(e.g. `/apis/myextension.mycompany.io/v1/…`) para o *APIService* registado. + +A maneira mais comum de implementar o *APIService* é executar uma +*estensão do servidor API* em *Pods* que executam no seu cluster. +Se estiver a usar o servidor de estensão da API para gerir recursos +no seu cluster, o servidor de estensão da API (também escrito como "extension-apiserver") +é tipicamente emparelhado com um ou mais {{< glossary_tooltip text="controladores" term_id="controller" >}}. +A biblioteca apiserver-builder providencia um esqueleto para ambos +os servidores de estensão da API e controladores associados. 
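Um esboço mínimo, apenas ilustrativo, do objeto *APIService* descrito acima (o grupo `myextension.mycompany.io` vem do exemplo anterior; o nome e o namespace do *Service* são suposições deste esboço):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  # convenção de nome: <versão>.<grupo>
  name: v1.myextension.mycompany.io
spec:
  group: myextension.mycompany.io
  version: v1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    # Service que encaminha para os Pods do servidor de extensão da API
    name: my-extension-apiserver
    namespace: my-extension
    port: 443
  # num ambiente real, defina caBundle em vez de ignorar a verificação TLS
  insecureSkipTLSVerify: true
```

Depois de aplicado, os pedidos para `/apis/myextension.mycompany.io/v1/…` passam a ser encaminhados pela camada de agregação para esse *Service*.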
+
+### Latência da resposta
+
+Servidores de extensão de APIs devem ter baixa latência de rede de e para o kube-apiserver.
+Pedidos de descoberta (*discovery*) necessitam de fazer a ida e volta ao kube-apiserver em 5
+segundos ou menos.
+
+Se o seu servidor de extensão da API não puder cumprir o requisito de latência,
+considere fazer alterações que permitam atingi-lo. Pode também definir o
+[portal de funcionalidade](/docs/reference/command-line-tools-reference/feature-gates/) `EnableAggregatedDiscoveryTimeout=false` no kube-apiserver para desativar
+a restrição de tempo. Este portal de funcionalidade, já descontinuado, será removido
+num lançamento futuro.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Para pôr o agregador a funcionar no seu ambiente, [configure a camada de agregação](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/).
+* De seguida, [configure um api-server de extensão](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) para funcionar com a camada de agregação.
+* Aprenda também como pode [estender a API do Kubernetes através do uso de Definições de Recursos Personalizados](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/).
+* Leia a especificação do [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io)
+
+{{% /capture %}} diff --git a/content/pt/docs/concepts/extend-kubernetes/operator.md b/content/pt/docs/concepts/extend-kubernetes/operator.md new file mode 100644 index 0000000000000..9b7198d2fe71a --- /dev/null +++ b/content/pt/docs/concepts/extend-kubernetes/operator.md @@ -0,0 +1,137 @@ +--- +title: Padrão Operador +content_template: templates/concept +weight: 30 +---
+
+{{% capture overview %}}
+
+Operadores são extensões de software para o Kubernetes que
+fazem uso de [*recursos personalizados*](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+para gerir aplicações e os seus componentes. Operadores seguem os
+princípios do Kubernetes, notavelmente o [ciclo de controle](/docs/concepts/#kubernetes-control-plane).
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Motivação
+
+O padrão Operador tem como objetivo capturar o principal objetivo de um operador
+humano que gere um serviço ou um conjunto de serviços. Operadores humanos
+responsáveis por aplicações e serviços específicos têm um conhecimento
+profundo da forma como o sistema deve se comportar, como é instalado
+e como deve reagir na ocorrência de problemas.
+
+As pessoas que executam cargas de trabalho no Kubernetes habitualmente gostam
+de usar automação para cuidar de tarefas repetitivas. O padrão Operador captura
+a forma como pode escrever código para automatizar uma tarefa para além do que
+o Kubernetes fornece.
+
+## Operadores no Kubernetes
+
+O Kubernetes é desenhado para automação. *Out of the box*, você tem bastante
+automação embutida no núcleo do Kubernetes. Pode usar
+o Kubernetes para automatizar instalações e executar cargas de trabalho,
+e pode ainda automatizar a forma como o Kubernetes faz isso.
+
+O conceito de {{< glossary_tooltip text="controlador" term_id="controller" >}} no
+Kubernetes permite a extensão do comportamento sem modificar o código do próprio
+Kubernetes.
+Operadores são clientes da API do Kubernetes que atuam como controladores para +um dado [*Custom Resource*](/docs/concepts/api-extension/custom-resources/) + +## Exemplo de um Operador {#exemplo} + +Algumas das coisas que um operador pode ser usado para automatizar incluem: + +* instalar uma aplicação a pedido +* obter e restaurar backups do estado dessa aplicação +* manipular atualizações do código da aplicação juntamente com alterações + como esquemas de base de dados ou definições de configuração extra +* publicar um *Service* para aplicações que não suportam a APIs do Kubernetes + para as descobrir +* simular una falha em todo ou parte do cluster de forma a testar a resiliência +* escolher um lider para uma aplicação distribuída sem um processo + de eleição de membro interno + +Como deve um Operador parecer em mais detalhe? Aqui está um exemplo em mais +detalhe: + +1. Um recurso personalizado (*custom resource*) chamado SampleDB, que você pode + configurar para dentro do *cluster*. +2. Um *Deployment* que garante que um *Pod* está a executar que contém a + parte controlador do operador. +3. Uma imagem do *container* do código do operador. +4. Código do controlador que consulta o plano de controle para descobrir quais + recursos *SampleDB* estão configurados. +5. O núcleo do Operador é o código para informar ao servidor da API (*API server*) como fazer + a realidade coincidir com os recursos configurados. + * Se você adicionar um novo *SampleDB*, o operador configurará *PersistentVolumeClaims* + para fornecer armazenamento de base de dados durável, um *StatefulSet* para executar *SampleDB* e + um *Job* para lidar com a configuração inicial. + * Se você apagá-lo, o Operador tira um *snapshot* e então garante que + o *StatefulSet* e *Volumes* também são removidos. +6. O operador também gere backups regulares da base de dados. Para cada recurso *SampleDB*, + o operador determina quando deve criar um *Pod* que possa se conectar + à base de dados e faça backups. Esses *Pods* dependeriam de um *ConfigMap* + e / ou um *Secret* que possui detalhes e credenciais de conexão com à base de dados. +7. Como o Operador tem como objetivo fornecer automação robusta para o recurso + que gere, haveria código de suporte adicional. Para este exemplo, + O código verifica se a base de dados está a executar uma versão antiga e, se estiver, + cria objetos *Job* que o atualizam para si. + +## Instalar Operadores + +A forma mais comum de instalar um Operador é a de adicionar a +definição personalizada de recurso (*Custom Resource Definition*) e +o seu Controlador associado ao seu cluster. +O Controlador vai normalmente executar fora do +{{< glossary_tooltip text="plano de controle" term_id="control-plane" >}}, +como você faria com qualquer aplicação containerizada. +Por exemplo, você pode executar o controlador no seu cluster como um *Deployment*. + +## Usando um Operador + +Uma vez que você tenha um Operador instalado, usaria-o adicionando, modificando +ou apagando a espécie de recurso que o Operador usa. Seguindo o exemplo acima, +você configuraria um *Deployment* para o próprio Operador, e depois: + +```shell +kubectl get SampleDB # encontra a base de dados configurada + +kubectl edit SampleDB/example-database # mudar manualmente algumas definições +``` + +…e é isso! O Operador vai tomar conta de aplicar +as mudanças assim como manter o serviço existente em boa forma. 
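+
+Como ilustração, um objeto `SampleDB` como o do exemplo acima poderia parecer-se com
+o seguinte; o grupo de API `samples.example.com` e os campos em `spec` são hipotéticos,
+usados apenas para mostrar a ideia:
+
+```yaml
+apiVersion: samples.example.com/v1alpha1
+kind: SampleDB
+metadata:
+  name: example-database
+spec:
+  # campos ilustrativos que o controlador do Operador interpretaria
+  storageSize: 10Gi
+  backupSchedule: "0 3 * * *"
+```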
+
+## Escrevendo o seu próprio Operador {#escrevendo-operador}
+
+Se não existir no ecossistema um Operador que implemente
+o comportamento que pretende, pode codificar o seu próprio.
+Em [Qual é o próximo](#qual-é-o-próximo) você vai encontrar
+alguns *links* para bibliotecas e ferramentas que pode usar
+para escrever o seu próprio Operador *cloud native*.
+
+Pode também implementar um Operador (isto é, um Controlador) usando qualquer linguagem / *runtime*
+que possa atuar como um [cliente da API do Kubernetes](/docs/reference/using-api/client-libraries/).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Aprenda mais sobre [Recursos Personalizados](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+* Encontre operadores prontos em [OperatorHub.io](https://operatorhub.io/) para o seu caso de uso
+* Use ferramentas existentes para escrever os seus Operadores:
+  * usando [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
+  * usando [kubebuilder](https://book.kubebuilder.io/)
+  * usando [Metacontroller](https://metacontroller.app/) juntamente com WebHooks que
+    implementa você mesmo
+  * usando o [Operator Framework](https://github.com/operator-framework/getting-started)
+* [Publique](https://operatorhub.io/) o seu operador para que outras pessoas o possam usar
+* Leia o [artigo original da CoreOS](https://coreos.com/blog/introducing-operators.html) que introduz o padrão Operador
+* Leia um [artigo](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) da Google Cloud sobre as melhores práticas para construir Operadores
+
+{{% /capture %}} diff --git a/content/pt/docs/concepts/workloads/controllers/_index.md b/content/pt/docs/concepts/workloads/controllers/_index.md new file mode 100755 index 0000000000000..376ef67943222 --- /dev/null +++ b/content/pt/docs/concepts/workloads/controllers/_index.md @@ -0,0 +1,4 @@ +--- +title: "Controladores" +weight: 20 +--- diff --git a/content/pt/docs/concepts/workloads/controllers/cron-jobs.md b/content/pt/docs/concepts/workloads/controllers/cron-jobs.md new file mode 100644 index 0000000000000..a3241f73b0a62 --- /dev/null +++ b/content/pt/docs/concepts/workloads/controllers/cron-jobs.md @@ -0,0 +1,54 @@ +--- +reviewers: +  - erictune +  - soltysh +  - janetkuo +title: CronJob +content_template: templates/concept +weight: 80 +---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.8" state="beta" >}}
+
+Um _Cron Job_ cria [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) em um cronograma baseado em tempo.
+
+Um objeto CronJob é como uma linha de um arquivo _crontab_ (tabela cron): executa um job periodicamente em um determinado horário, escrito no formato [Cron](https://en.wikipedia.org/wiki/Cron).
+
+{{< note >}}
+Todos os horários definidos no campo `schedule` de um **CronJob** são indicados em UTC.
+{{< /note >}}
+
+Ao criar o manifesto para um recurso CronJob, verifique se o nome que você fornece é um [nome de subdomínio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido.
+O nome não deve ter mais que 52 caracteres. Isso ocorre porque o controlador do CronJob anexará automaticamente 11 caracteres ao nome da tarefa fornecido, e há uma restrição de que o comprimento máximo de um nome de tarefa não pode ultrapassar 63 caracteres.
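+
+Por exemplo, o manifesto abaixo é um esboço mínimo de um CronJob chamado `hello`
+que executa um *Job* a cada minuto; o nome, a imagem e o comando usados são apenas
+ilustrativos:
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: hello
+spec:
+  schedule: "*/1 * * * *"   # a cada minuto (horário em UTC)
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+          - name: hello
+            image: busybox
+            args:
+            - /bin/sh
+            - -c
+            - date; echo "Olá do CronJob"
+          restartPolicy: OnFailure
+```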
+
+Para obter instruções sobre como criar e trabalhar com tarefas cron, e para obter um exemplo de arquivo de especificação para uma tarefa cron, consulte [Executando tarefas automatizadas com tarefas cron](/docs/tasks/job/automated-tasks-with-cron-jobs).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Limitações do Cron Job
+
+Um CronJob cria um objeto Job _aproximadamente_ uma vez a cada execução do seu agendamento. Dizemos "aproximadamente" porque há certas circunstâncias em que dois Jobs podem ser criados, ou nenhum Job pode ser criado. Tentamos tornar essas situações raras, mas não as impedimos completamente. Portanto, os Jobs devem ser _idempotentes_.
+
+Se `startingDeadlineSeconds` estiver definido como um valor grande ou não estiver definido (o padrão), e se `concurrencyPolicy` estiver definido como `Allow` (Permitir), os trabalhos sempre serão executados pelo menos uma vez.
+
+Para cada CronJob, o {{< glossary_tooltip term_id="controller" >}} do CronJob verifica quantos agendamentos foram perdidos desde o último horário agendado até agora. Se houver mais de 100 agendamentos perdidos, ele não iniciará o trabalho e registrará o erro
+
+```
+Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.
+```
+
+É importante observar que, se o campo `startingDeadlineSeconds` estiver definido (não `nil`), o controlador contará quantas tarefas perdidas ocorreram desde o valor de `startingDeadlineSeconds` até agora, e não desde o último horário programado até agora. Por exemplo, se `startingDeadlineSeconds` for `200`, o controlador contará quantas tarefas perdidas ocorreram nos últimos 200 segundos.
+
+Um CronJob é contado como perdido se não tiver sido criado no horário agendado. Por exemplo, se `concurrencyPolicy` estiver definido como `Forbid` e houver uma tentativa de agendar um CronJob enquanto um agendamento anterior ainda estiver em execução, ele será contabilizado como perdido.
+
+Por exemplo, suponha que um CronJob esteja definido para agendar um novo trabalho a cada minuto, começando em `08:30:00`, e seu campo `startingDeadlineSeconds` não esteja definido. Se o controlador do CronJob estiver inativo de `08:29:00` a `10:21:00`, o trabalho não será iniciado, pois o número de agendamentos perdidos é maior que 100.
+
+Para ilustrar ainda mais esse conceito, suponha que um CronJob esteja definido para agendar um novo trabalho a cada minuto, começando em `08:30:00`, e seu `startingDeadlineSeconds` esteja definido em 200 segundos. Se o controlador do CronJob estiver inativo no mesmo período do exemplo anterior (`08:29:00` a `10:21:00`), o trabalho ainda será iniciado às 10:22:00. Isso acontece porque o controlador agora verifica quantos agendamentos perdidos ocorreram nos últimos 200 segundos (ou seja, 3 agendamentos perdidos), em vez de contar desde o último horário agendado até agora.
+
+O CronJob é responsável apenas pela criação de Jobs que correspondem à sua programação, e o Job, por sua vez, é responsável pelo gerenciamento dos Pods que ele representa.
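+
+A título de ilustração, os campos `concurrencyPolicy` e `startingDeadlineSeconds`
+discutidos acima ficam diretamente em `.spec` do CronJob. O esboço parcial abaixo
+usa valores apenas como exemplo:
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: hello
+spec:
+  schedule: "*/1 * * * *"
+  concurrencyPolicy: Forbid       # Allow (padrão), Forbid ou Replace
+  startingDeadlineSeconds: 200    # prazo, em segundos, para iniciar um Job perdido
+  # jobTemplate: ... (omitido neste esboço)
+```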
+ +{{% /capture %}} diff --git a/content/pt/docs/reference/glossary/controller.md b/content/pt/docs/reference/glossary/controller.md new file mode 100755 index 0000000000000..759961cd8f535 --- /dev/null +++ b/content/pt/docs/reference/glossary/controller.md @@ -0,0 +1,30 @@ +--- +title: Controlador +id: controller +date: 2020-03-23 +full_link: /docs/concepts/architecture/controller/ +short_description: > + Um ciclo de controle que observa o estado partilhado do cluster através do API Server e efetua + mudanças tentando mover o estado atual em direção ao estado desejado. + +aka: +tags: +- architecture +- fundamental +--- +No Kubernetes, controladores são ciclos de controle que observam o estado do seu +{{< glossary_tooltip term_id="cluster" text="cluster">}}, e então fazer ou requisitar +mudanças onde necessário. +Cada controlador tenta mover o estado atual do cluster mais perto do estado desejado. + + + +Controladores observam o estado partilhado do cluster através do +{{< glossary_tooltip text="apiserver" term_id="kube-apiserver" >}} (parte do +{{< glossary_tooltip term_id="control-plane" >}}). + +Alguns controladores também correm dentro do plano de controle, fornecendo ciclos +de controle que são centrais às operações do Kubernetes. Por exemplo: o controlador +de *deployments*, o controlador de *daemonsets*, o controlador de *namespaces*, e o +controlador de volumes persistentes (*persistent volumes*) (e outros) todos correm +dentro do {{< glossary_tooltip term_id="kube-controller-manager" >}}. diff --git a/content/pt/docs/reference/tools.md b/content/pt/docs/reference/tools.md new file mode 100644 index 0000000000000..c068d503fbafb --- /dev/null +++ b/content/pt/docs/reference/tools.md @@ -0,0 +1,54 @@ +--- +title: Ferramentas +content_template: templates/concept +--- + +{{% capture overview %}} +O Kubernetes contém várias ferramentas internas para ajudá-lo a trabalhar com o sistema Kubernetes. +{{% /capture %}} + +{{% capture body %}} +## Kubectl + +[`kubectl`](/docs/tasks/tools/install-kubectl/) é a ferramenta de linha de comando para o Kubernetes. Ela controla o gerenciador de cluster do Kubernetes. + + +## Kubeadm + +[`kubeadm`](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) é a ferramenta de linha de comando para provisionar facilmente um cluster Kubernetes seguro sobre servidores físicos ou na nuvem ou em máquinas virtuais (atualmente em alfa). + + +## Minikube + +[`minikube`](/docs/tasks/tools/install-minikube/) é uma ferramenta que facilita a execução local de um cluster Kubernetes de nó único em sua estação de trabalho para fins de desenvolvimento e teste. + + +## Dashboard + +[`Dashboard`](/docs/tasks/access-application-cluster/web-ui-dashboard/), a interface Web do Kubernetes, permite implantar aplicativos em contêiner em um cluster do Kubernetes, solucionar problemas e gerenciar o cluster e seus próprios recursos. + + +## Helm + +[`Kubernetes Helm`](https://github.com/kubernetes/helm) é uma ferramenta para gerenciar pacotes de recursos pré-configurados do Kubernetes, também conhecidos como Kubernetes charts. 
+ +Use o Helm para: + +* Encontrar e usar softwares populares empacotados como Kubernetes charts +* Compartilhar seus próprios aplicativos como Kubernetes charts +* Criar builds reproduzíveis de seus aplicativos Kubernetes +* Gerenciar de forma inteligente os arquivos de manifesto do Kubernetes +* Gerenciar versões dos pacotes Helm + + +## Kompose + +[`Kompose`](https://github.com/kubernetes-incubator/kompose) é uma ferramenta para ajudar os usuários do Docker Compose a migrar para o Kubernetes. + +Use o Kompose para: + +* Converter um arquivo Docker Compose em objetos Kubernetes +* Ir do desenvolvimento local do Docker ao gerenciamento de seu aplicativo via Kubernetes +* Converter arquivos `yaml` do Docker Compose v1 ou v2 ou [Bundles de Aplicativos Distribuídos](https://docs.docker.com/compose/bundles/) + +{{% /capture %}} \ No newline at end of file diff --git a/content/pt/docs/templates/feature-state-alpha.txt b/content/pt/docs/templates/feature-state-alpha.txt new file mode 100644 index 0000000000000..891096f1233f3 --- /dev/null +++ b/content/pt/docs/templates/feature-state-alpha.txt @@ -0,0 +1,7 @@ +Atualmente, esse recurso está no estado *alpha*, o que significa: + +* Os nomes das versões contêm alfa (ex. v1alpha1). +* Pode estar com bugs. A ativação do recurso pode expor bugs. Desabilitado por padrão. +* O suporte ao recurso pode ser retirado a qualquer momento sem aviso prévio. +* A API pode mudar de maneiras incompatíveis em uma versão de software posterior sem aviso prévio. +* Recomendado para uso apenas em clusters de teste de curta duração, devido ao aumento do risco de erros e falta de suporte a longo prazo. diff --git a/content/pt/docs/templates/feature-state-beta.txt b/content/pt/docs/templates/feature-state-beta.txt new file mode 100644 index 0000000000000..0a0970f83bef9 --- /dev/null +++ b/content/pt/docs/templates/feature-state-beta.txt @@ -0,0 +1,8 @@ +Atualmente, esse recurso está no estado *beta*, o que significa: + +* Os nomes das versões contêm beta (ex, v2beta3). +* O código está bem testado. A ativação do recurso é considerada segura. Ativado por padrão. +* O suporte para o recurso geral não será descartado, embora os detalhes possam mudar. +* O esquema e/ou semântica dos objetos podem mudar de maneiras incompatíveis em uma versão beta ou estável subsequente. Quando isso acontecer, forneceremos instruções para migrar para a próxima versão. Isso pode exigir a exclusão, edição e recriação de objetos da API. O processo de edição pode exigir alguma reflexão. Isso pode exigir tempo de inatividade para aplicativos que dependem do recurso. +* Recomendado apenas para usos não comerciais, devido ao potencial de alterações incompatíveis nas versões subsequentes. Se você tiver vários clusters que podem ser atualizados independentemente, poderá relaxar essa restrição. +* **Por favor, experimente nossos recursos beta e dê um feedback sobre eles! Depois que eles saem da versão beta, pode não ser prático para nós fazer mais alterações.** diff --git a/content/pt/docs/templates/feature-state-deprecated.txt b/content/pt/docs/templates/feature-state-deprecated.txt new file mode 100644 index 0000000000000..e75f713b7308a --- /dev/null +++ b/content/pt/docs/templates/feature-state-deprecated.txt @@ -0,0 +1 @@ +Este recurso está *obsoleto*. Para obter mais informações sobre esse estado, consulte a [Política de descontinuação do Kubernetes](/docs/reference/deprecation-policy/). 
diff --git a/content/pt/docs/templates/feature-state-stable.txt b/content/pt/docs/templates/feature-state-stable.txt new file mode 100644 index 0000000000000..a9ef64609c826 --- /dev/null +++ b/content/pt/docs/templates/feature-state-stable.txt @@ -0,0 +1,4 @@ +Esse recurso é *estável*, o que significa: + +* O nome da versão é vX, em que X é um número inteiro. +* Versões estáveis dos recursos aparecerão no software lançado para muitas versões subsequentes. diff --git a/content/pt/docs/templates/index.md b/content/pt/docs/templates/index.md new file mode 100644 index 0000000000000..9d7bccd143f5f --- /dev/null +++ b/content/pt/docs/templates/index.md @@ -0,0 +1,13 @@ +--- +headless: true + +resources: +- src: "*alpha*" + title: "alpha" +- src: "*beta*" + title: "beta" +- src: "*deprecated*" + title: "deprecated" +- src: "*stable*" + title: "stable" +--- diff --git a/content/ru/docs/concepts/overview/kubernetes-api.md b/content/ru/docs/concepts/overview/kubernetes-api.md new file mode 100644 index 0000000000000..c3812c8469c95 --- /dev/null +++ b/content/ru/docs/concepts/overview/kubernetes-api.md @@ -0,0 +1,117 @@ +--- +title: API Kubernetes +content_template: templates/concept +weight: 30 +card: + name: concepts + weight: 30 +--- + +{{% capture overview %}} + +Общие соглашения API описаны на [странице соглашений API](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). + +Конечные точки API, типы ресурсов и примеры описаны в [справочнике API](/docs/reference). + +Удаленный доступ к API обсуждается в [Controlling API Access doc](/docs/reference/access-authn-authz/controlling-access/). + +API Kubernetes также служит основой декларативной схемы конфигурации системы. С помощью инструмента командной строки [kubectl](/ru/docs/reference/kubectl/overview/) можно создавать, обновлять, удалять и получать API-объекты. + +Kubernetes также сохраняет сериализованное состояние (в настоящее время в хранилище [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) каждого API-ресурса. + +Kubernetes как таковой состоит из множества компонентов, которые взаимодействуют друг с другом через собственные API. + +{{% /capture %}} + +{{% capture body %}} + +## Изменения в API + +Исходя из нашего опыта, любая успешная система должна улучшаться и изменяться по мере появления новых сценариев использования или изменения существующих. Поэтому мы надеемся, что и API Kubernetes будет постоянно меняться и расширяться. Однако в течение продолжительного периода времени мы будем поддерживать хорошую обратную совместимость с существующими клиентами. В целом, новые ресурсы API и поля ресурсов будут добавляться часто. Удаление ресурсов или полей регулируются [соответствующим процессом](/docs/reference/using-api/deprecation-policy/). + +Определение совместимого изменения и методы изменения API подробно описаны в [документе об изменениях API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md). + +## Определения OpenAPI и Swagger + +Все детали API документируется с использованием [OpenAPI](https://www.openapis.org/). + +Начиная с Kubernetes 1.10, API-сервер Kubernetes основывается на спецификации OpenAPI через конечную точку `/openapi/v2`. 
+Нужный формат устанавливается через HTTP-заголовоки: + +Заголовок | Возможные значения +------ | --------------- +Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (по умолчанию заголовок Content-Type установлен в `application/json` с `*/*`, допустимо также пропускать этот заголовок) +Accept-Encoding | `gzip` (можно не передавать этот заголовок) + +До версии 1.14 конечные точки с форматом (`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`) предоставляли спецификацию OpenAPI в разных форматах. Эти конечные точки были объявлены устаревшими и удалены в Kubernetes 1.14. + +**Примеры получения спецификации OpenAPI**: + +До 1.10 | С версии Kubernetes 1.10 +----------- | ----------------------------- +GET /swagger.json | GET /openapi/v2 **Accept**: application/json +GET /swagger-2.0.0.pb-v1 | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf +GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf **Accept-Encoding**: gzip + +В Kubernetes реализован альтернативный формат сериализации API, основанный на Protobuf, который в первую очередь предназначен для взаимодействия внутри кластера. Описание этого формата можно найти в [проектом решении](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md), а IDL-файлы по каждой схемы — в пакетах Go, определяющих API-объекты. + +До версии 1.14 apiserver Kubernetes также представлял API, который можно использовать для получения спецификации [Swagger v1.2](http://swagger.io/) для API Kubernetes по пути `/swaggerapi`. Эта конечная точка устарела и была удалена в Kubernetes 1.14 + +## Версионирование API + +Чтобы упростить удаления полей или изменение ресурсов, Kubernetes поддерживает несколько версий API, каждая из которых доступна по собственному пути, например, `/api/v1` или `/apis/extensions/v1beta1`. + +Мы выбрали версионирование API, а не конкретных ресурсов или полей, чтобы API отражал четкое и согласованное представление о системных ресурсах и их поведении, а также, чтобы разграничивать API, которые уже не поддерживаются и/или находятся в экспериментальной стадии. Схемы сериализации JSON и Protobuf следуют одним и тем же правилам по внесению изменений в схему, поэтому описание ниже охватывают оба эти формата. + +Обратите внимание, что версиоирование API и программное обеспечение косвенно связаны друг с другом. [Предложение по версионированию API и новых выпусков](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) описывает, как связаны между собой версии API с версиями программного обеспечения. + +Разные версии API имеют характеризуются разной уровнем стабильностью и поддержкой. Критерии каждого уровня более подробно описаны в [документации изменений API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). Ниже приводится краткое изложение: + +- Альфа-версии: + - Названия версий включают надпись `alpha` (например, `v1alpha1`). + - Могут содержать баги. Включение такой функциональности может привести к ошибкам. По умолчанию она отключена. + - Поддержка функциональности может быть прекращена в любое время без какого-либо оповещения об этом. + - API может быть несовместим с более поздними версиями без упоминания об этом. 
+  - Рекомендуется для использования только в тестовых кластерах с коротким жизненным циклом из-за высокого риска наличия багов и отсутствия долгосрочной поддержки.
+- Бета-версии:
+  - Названия версий включают надпись `beta` (например, `v2beta3`).
+  - Код хорошо протестирован. Активация этой функциональности считается безопасной, поэтому она включена по умолчанию.
+  - Поддержка функциональности в целом не будет прекращена, хотя кое-что может измениться.
+  - Схема и/или семантика объектов может стать несовместимой с более поздними бета-версиями или стабильными выпусками. Когда это случится, мы дадим инструкции по миграции на следующую версию. Это обновление может включать удаление, редактирование и повторное создание API-объектов. Этот процесс может потребовать тщательного анализа. Кроме того, это может привести к простою приложений, которые используют данную функциональность.
+  - Рекомендуется только для некритичного производственного использования из-за риска возникновения несовместимых изменений в последующих версиях. Если у вас есть несколько кластеров, которые можно обновлять независимо, вы можете снять это ограничение.
+  - **Пожалуйста, попробуйте в действии бета-версии функциональности и поделитесь своими впечатлениями! После того, как функциональность выйдет из бета-версии, нам может быть нецелесообразно что-то дальше изменять.**
+- Стабильные версии:
+  - Имя версии имеет вид `vX`, где `X` — целое число.
+  - Стабильная функциональность будет присутствовать в выпускаемых версиях программного обеспечения на протяжении многих последующих версий.
+
+## API-группы
+
+Чтобы упростить расширение API Kubernetes, реализованы [*группы API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
+Группа API указывается в пути REST и в поле `apiVersion` сериализованного объекта.
+
+В настоящее время используется несколько API-групп:
+
+1. Группа *core*, которая часто упоминается как *устаревшая* (*legacy group*), доступна по пути `/api/v1` и использует `apiVersion: v1`.
+
+1. Именованные группы находятся в пути REST `/apis/$GROUP_NAME/$VERSION` и используют `apiVersion: $GROUP_NAME/$VERSION` (например, `apiVersion: batch/v1`). Полный список поддерживаемых групп API можно увидеть в [справочнике API Kubernetes](/docs/reference/).
+
+Есть два поддерживаемых пути к расширению API с помощью [пользовательских ресурсов](/docs/concepts/api-extension/custom-resources/):
+
+1. [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/) — для пользователей, которым нужен очень простой CRUD (см. набросок ниже).
+2. Пользователи, которым нужна полная семантика API Kubernetes, могут реализовать собственный apiserver и использовать [агрегатор](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/), чтобы интеграция выглядела для клиентов бесшовной.
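+
+Для иллюстрации первого варианта ниже приведён минимальный набросок CustomResourceDefinition;
+группа `stable.example.com`, тип `CronTab` и поля в схеме — условные, они используются только как пример:
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  # имя должно иметь вид <plural>.<group>
+  name: crontabs.stable.example.com
+spec:
+  group: stable.example.com
+  scope: Namespaced
+  names:
+    plural: crontabs
+    singular: crontab
+    kind: CronTab
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+        properties:
+          spec:
+            type: object
+            properties:
+              cronSpec:
+                type: string
+              replicas:
+                type: integer
+```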
+ +{{< note >}}Включение или отключение групп или ресурсов требует перезапуска apiserver и controller-manager для применения изменений `--runtime-config`.{{< /note >}} + +## Включение определённых ресурсов в группу extensions/v1beta1 + +DaemonSets, Deployments, StatefulSet, NetworkPolicies, PodSecurityPolicies и ReplicaSets в API-группе `extensions/v1beta1` по умолчанию отключены. +Например: чтобы включить deployments и daemonsets, используйте флаг `--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`. + +{{< note >}}Включение/отключение отдельных ресурсов поддерживается только в API-группе `extensions/v1beta1` по историческим причинам.{{< /note >}} + +{{% /capture %}} diff --git a/content/ru/docs/concepts/overview/working-with-objects/_index.md b/content/ru/docs/concepts/overview/working-with-objects/_index.md new file mode 100755 index 0000000000000..9f2b4ef4f3349 --- /dev/null +++ b/content/ru/docs/concepts/overview/working-with-objects/_index.md @@ -0,0 +1,5 @@ +--- +title: "Работа с объектами Kubernetes" +weight: 40 +--- + diff --git a/content/ru/docs/concepts/overview/working-with-objects/annotations.md b/content/ru/docs/concepts/overview/working-with-objects/annotations.md new file mode 100644 index 0000000000000..fd1fd6669b487 --- /dev/null +++ b/content/ru/docs/concepts/overview/working-with-objects/annotations.md @@ -0,0 +1,79 @@ +--- +title: Аннотации +content_template: templates/concept +weight: 50 +--- + +{{% capture overview %}} +Аннотации Kubernetes можно использовать для добавления собственных метаданных к объектам. Такие клиенты, как инструменты и библиотеки, могут получить эти метаданные. +{{% /capture %}} + +{{% capture body %}} + +## Добавление метаданных к объектам + +Вы можете использовать метки или аннотации для добавления метаданных к объектам Kubernetes. Метки можно использовать для выбора объектов и для поиска коллекций объектов, которые соответствуют определенным условиям. В отличие от них аннотации не используются для идентификации и выбора объектов. Метаданные в аннотации могут быть маленькими или большими, структурированными или неструктурированными, кроме этого они включать символы, которые не разрешены в метках. + +Аннотации, как и метки, являются коллекциями с наборами пар ключ-значение: + +```json +"metadata": { + "annotations": { + "key1" : "value1", + "key2" : "value2" + } +} +``` + +Некоторые примеры информации, которая может быть в аннотациях: + +* Поля, управляемые декларативным уровнем конфигурации. Добавление этих полей в виде аннотаций позволяет отличать их от значений по умолчанию, установленных клиентами или серверами, а также от автоматически сгенерированных полей и полей, заданных системами автоматического масштабирования. + +* Информация о сборке, выпуске или образе, например, метка времени, идентификаторы выпуска, ветка git, номера PR, хеши образов и адрес реестра. + +* Ссылки на репозитории логирования, мониторинга, аналитики или аудита. + +* Информация о клиентской библиотеке или инструменте, которая может использоваться при отладке (например, имя, версия и информация о сборке). + +* Информация об источнике пользователя или инструмента/системы, например, URL-адреса связанных объектов из других компонентов экосистемы. + +* Небольшие метаданные развертывания (например, конфигурация или контрольные точки). + +* Номера телефонов или пейджеров ответственных лиц или записи в справочнике, в которых можно найти нужную информацию, например, сайт группы. 
+ +* Инструкции от конечных пользователей по исправлению работы или использования нестандартной функциональности. + +Вместо использования аннотаций, вы можете сохранить такого рода информацию во внешней базе данных или директории, хотя это усложнило бы создание общих клиентских библиотек и инструментов развертывания, управления, самодиагностики и т.д. + +## Синтаксис и набор символов + +_Аннотации_ представляют собой пары ключ-значение. Разрешенные ключи аннотации имеют два сегмента, разделённые слешем (`/`): префикс (необязательно) и имя. Сегмент имени обязателен и должен содержать не более 63 символов, среди которых могут быть буквенно-цифровые символы(`[a-z0-9A-Z]`), а также дефисы (`-`), знаки подчеркивания (`_`), точки (`.`). Префикс не обязателен, но он быть поддоменом DNS: набор меток DNS, разделенных точками (`.`), общей длиной не более 253 символов, за которыми следует слеш (`/`). + +Если префикс не указан, ключ аннотации считается закрытым для пользователя. Компоненты автоматизированной системы (например, `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl` или другие сторонние), которые добавляют аннотации к объектам пользователя, должны указывать префикс. + +Префиксы `kubernetes.io/` и `k8s.io/` зарезервированы для использования основными компонентами Kubernetes. + +Например, ниже представлен конфигурационный файл объекта Pod с аннотацией `imageregistry: https://hub.docker.com/`: + +```yaml + +apiVersion: v1 +kind: Pod +metadata: + name: annotations-demo + annotations: + imageregistry: "https://hub.docker.com/" +spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + +``` + +{{% /capture %}} + +{{% capture whatsnext %}} +Узнать подробнее про [метки и селекторы](/ru/docs/concepts/overview/working-with-objects/labels/). +{{% /capture %}} diff --git a/content/ru/docs/concepts/overview/working-with-objects/common-labels.md b/content/ru/docs/concepts/overview/working-with-objects/common-labels.md new file mode 100644 index 0000000000000..06fe6d8f2d1b2 --- /dev/null +++ b/content/ru/docs/concepts/overview/working-with-objects/common-labels.md @@ -0,0 +1,165 @@ +--- +title: Рекомендуемые метки +content_template: templates/concept +--- + +{{% capture overview %}} + +Вы можете визуализировать и управлять объектами Kubernetes не только с помощью kubectl и панели управления. С помощью единого набора меток можно единообразно описывать объекты, что позволяет инструментам согласованно работать между собой. + +В дополнение к существующим инструментам, рекомендуемый набор меток описывают приложения в том виде, в котором они могут быть получены. + +{{% /capture %}} + +{{% capture body %}} + + +Метаданные сосредоточены на понятии _приложение_. Kubernetes — это не платформа как услуга (PaaS), поэтому не закрепляет формальное понятие приложения. +Вместо этого приложения являются неформальными и описываются через метаданные. Определение приложения довольно расплывчатое. + +{{< note >}} + +Это рекомендуемые для использования метки. Они облегчают процесс управления приложениями, но при этом не являются обязательными для основных инструментов. + +{{< /note >}} + +Общие метки и аннотации используют один и тот же префикс: `app.kubernetes.io`. Метки без префикса являются приватными для пользователей. Совместно используемый префикс гарантирует, что общие метки не будут влиять на пользовательские метки. + +## Метки + +Чтобы извлечь максимум пользы от использования таких меток, они должны добавляться к каждому ресурсному объекту. 
+ +| Ключ | Описание | Пример | Тип | +| ----------------------------------- | --------------------- | -------- | ---- | +| `app.kubernetes.io/name` | Имя приложения | `mysql` | string | +| `app.kubernetes.io/instance` | Уникальное имя экземпляра приложения | `wordpress-abcxzy` | string | +| `app.kubernetes.io/version` | Текущая версия приложения (например, семантическая версия, хеш коммита и т.д.) | `5.7.21` | string | +| `app.kubernetes.io/component` | Имя компонента в архитектуре | `database` | string | +| `app.kubernetes.io/part-of` | Имя основного приложения, частью которого является текущий объект | `wordpress` | string | +| `app.kubernetes.io/managed-by` | Инструмент управления приложением | `helm` | string | + +Для демонстрации этих меток, рассмотрим следующий объект `StatefulSet`: + +```yaml +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/name: mysql + app.kubernetes.io/instance: wordpress-abcxzy + app.kubernetes.io/version: "5.7.21" + app.kubernetes.io/component: database + app.kubernetes.io/part-of: wordpress + app.kubernetes.io/managed-by: helm +``` + +## Приложения и экземпляры приложений + +Одно и то же приложение может быть установлено несколько раз в кластер Kubernetes, в ряде случаев — в одинаковое пространство имен. Например, WordPress может быть установлен более одного раза, тогда каждый из сайтов будет иметь собственный установленный экземпляр WordPress. + +Имя приложения и имя экземпляра хранятся по отдельности. Например, WordPress имеет ключ `app.kubernetes.io/name` со значением `wordpress`, при этом у него есть имя экземпляра, представленное ключом `app.kubernetes.io/instance` со значением `wordpress-abcxzy`. Такой механизм позволяет идентифицировать как приложение, так и экземпляры приложения. У каждого экземпляра приложения должно быть уникальное имя. + +## Примеры + +Следующие примеры показывают разные способы использования общих меток, поэтому они различаются по степени сложности. + +### Простой сервис без состояния + +Допустим, у нас есть простой сервис без состояния, развернутый с помощью объектов `Deployment` и `Service`. Следующие два фрагмента конфигурации показывают, как можно использовать метки в самом простом варианте. + +Объект `Deployment` используется для наблюдения за подами, на которых запущено приложение. + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/name: myservice + app.kubernetes.io/instance: myservice-abcxzy +... +``` + +Объект `Service` используется для открытия доступа к приложению. + +```yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/name: myservice + app.kubernetes.io/instance: myservice-abcxzy +... +``` + +### Веб-приложение с базой данных + +Рассмотрим случай немного посложнее: веб-приложение (WordPress), которое использует базу данных (MySQL), установленное с помощью Helm. В следующих фрагментов конфигурации объектов отображена отправная точка развертывания такого приложения. + +Следующий объект `Deployment` используется для WordPress: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/name: wordpress + app.kubernetes.io/instance: wordpress-abcxzy + app.kubernetes.io/version: "4.9.4" + app.kubernetes.io/managed-by: helm + app.kubernetes.io/component: server + app.kubernetes.io/part-of: wordpress +... 
+``` + +Объект `Service` используется для открытия доступа к WordPress: + + +```yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/name: wordpress + app.kubernetes.io/instance: wordpress-abcxzy + app.kubernetes.io/version: "4.9.4" + app.kubernetes.io/managed-by: helm + app.kubernetes.io/component: server + app.kubernetes.io/part-of: wordpress +... +``` + +MySQL открывается в виде `StatefulSet` с метаданными как для самого приложения, так и основного (родительского) приложения, к которому принадлежит СУБД: + +```yaml +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + app.kubernetes.io/name: mysql + app.kubernetes.io/instance: mysql-abcxzy + app.kubernetes.io/version: "5.7.21" + app.kubernetes.io/managed-by: helm + app.kubernetes.io/component: database + app.kubernetes.io/part-of: wordpress +... +``` + +Объект `Service` предоставляет MySQL в составе WordPress: + +```yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/name: mysql + app.kubernetes.io/instance: mysql-abcxzy + app.kubernetes.io/version: "5.7.21" + app.kubernetes.io/managed-by: helm + app.kubernetes.io/component: database + app.kubernetes.io/part-of: wordpress +... +``` + +Вы заметите, что `StatefulSet` и `Service` MySQL содержат больше информации о MySQL и WordPress. + +{{% /capture %}} diff --git a/content/ru/docs/concepts/overview/working-with-objects/field-selectors.md b/content/ru/docs/concepts/overview/working-with-objects/field-selectors.md new file mode 100644 index 0000000000000..207322ca26c8c --- /dev/null +++ b/content/ru/docs/concepts/overview/working-with-objects/field-selectors.md @@ -0,0 +1,61 @@ +--- +title: Селекторы полей +weight: 60 +--- + +_Селекторы полей_ позволяют [выбирать ресурсы Kubernetes](/ru/docs/concepts/overview/working-with-objects/kubernetes-objects), исходя из значения одного или нескольких полей ресурсов. Ниже приведены несколько примеров запросов селекторов полей: + +* `metadata.name=my-service` +* `metadata.namespace!=default` +* `status.phase=Pending` + +Следующая команда `kubectl` выбирает все Pod-объекты, в которых значение поля [`status.phase`](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) равно `Running`: + +```shell +kubectl get pods --field-selector status.phase=Running +``` + +{{< note >}} +По сути, селекторы полей являются *фильтрами* ресурсов. По умолчанию нет установленных селекторов/фильтров, поэтому выбираются ресурсы всех типов. Это означает, что два запроса `kubectl` ниже одинаковы: + +```shell +kubectl get pods +kubectl get pods --field-selector "" +``` +{{< /note >}} + +## Поддерживаемые поля + +Доступные селекторы полей зависят от типа ресурса Kubernetes. У всех типов ресурсов есть поля `metadata.name` и `metadata.namespace`. При использовании несуществующего селекторов полей приведёт к возникновению ошибки. Например: + +```shell +kubectl get ingress --field-selector foo.bar=baz +``` + +``` +Error from server (BadRequest): Unable to find "ingresses" that match label selector "", field selector "foo.bar=baz": "foo.bar" is not a known field selector: only "metadata.name", "metadata.namespace" +``` + +## Поддерживаемые операторы + +Можно использовать операторы `=`, `==` и `!=` в селекторах полей (`=` и `==` — синонимы). 
Например, следующая команда `kubectl` выбирает все сервисы Kubernetes, не принадлежавшие пространству имен `default`: + +```shell +kubectl get services --all-namespaces --field-selector metadata.namespace!=default +``` + +## Составные селекторы + +Аналогично [метки](/ru/docs/concepts/overview/working-with-objects/labels) и другим селекторам, несколько селекторы полей могут быть объединены через запятую. Приведенная ниже команда `kubectl` выбирает все Pod-объекты, у которых значение поле `status.phase`, отличное от `Running`, а поле `spec.restartPolicy` имеет значение `Always`: + +```shell +kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always +``` + +## Множественные типы ресурсов + +Можно использовать селекторы полей с несколькими типами ресурсов одновременно. Команда `kubectl` выбирает все объекты StatefulSet и Services, не включенные в пространство имен `default`: + +```shell +kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default +``` diff --git a/content/ru/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/ru/docs/concepts/overview/working-with-objects/kubernetes-objects.md new file mode 100644 index 0000000000000..dc8a73cd51414 --- /dev/null +++ b/content/ru/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -0,0 +1,80 @@ +--- +title: Изучение объектов Kubernetes +content_template: templates/concept +weight: 10 +card: + name: concepts + weight: 40 +--- + +{{% capture overview %}} + +На этой странице объясняется, как объекты Kubernetes представлены в API Kubernetes, и как их можно определить в формате `.yaml`. + +{{% /capture %}} + +{{% capture body %}} + +## Изучение объектов Kubernetes {#kubernetes-objects} + +*Объекты Kubernetes* — сущности, которые хранятся в Kubernetes. Kubernetes использует их для представления состояния кластера. В частности, они описывают следующую информацию: + +* Какие контейнеризированные приложения запущены (и на каких узлах). +* Доступные ресурсы для этих приложений. +* Стратегии управления приложения, которые относятся, например, к перезапуску, обновлению или отказоустойчивости. + +После создания объекта Kubernetes будет следить за существованием объекта. Создавая объект, вы таким образом указываете системе Kubernetes, какой должна быть рабочая нагрузка кластера; это *требуемое состояние* кластера. + +Для работы с объектами Kubernetes – будь то создание, изменение или удаление — нужно использовать [API Kubernetes](/docs/concepts/overview/kubernetes-api/). Например, при использовании CLI-инструмента `kubectl`, он обращается к API Kubernetes. С помощью одной из [клиентской библиотеки](/docs/reference/using-api/client-libraries/) вы также можете использовать API Kubernetes в собственных программах. + +### Спецификация и статус объекта + +Почти в каждом объекте Kubernetes есть два вложенных поля-объекта, которые управляют конфигурацией объекта: *`spec`* и *`status`*. +При создании объекта в поле `spec` указывается _требуемое состояние_ (описание характеристик, которые должны быть у объекта). + +Поле `status` описывает _текущее состояние_ объекта, которое создаётся и обновляется самим Kubernetes и его компонентами. {{< glossary_tooltip text="Плоскость управления" term_id="control-plane" >}} Kubernetes непрерывно управляет фактическим состоянием каждого объекта, чтобы оно соответствовало требуемому состоянию, которое было задано пользователем. + +Например: Deployment — это объект Kubernetes, представляющий работающее приложение в кластере. 
При создании объекта Deployment вы можете указать в его поле `spec`, что хотите иметь три реплики приложения. Система Kubernetes получит спецификацию объекта Deployment и запустит три экземпляра приложения, таким образом обновит статус (состояние) объекта, чтобы он соответствовал заданной спецификации. В случае сбоя одного из экземпляров (это влечет за собой изменение состояние), Kubernetes обнаружит несоответствие между спецификацией и статусом и исправит его, т.е. активирует новый экземпляр вместо того, который вышел из строя. + +Для получения дополнительной информации о спецификации объекта, статусе и метаданных смотрите документ с [соглашениями API Kubernetes](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). + +### Описание объекта Kubernetes + +При создании объекта в Kubernetes нужно передать спецификацию объекта, которая содержит требуемое состояние, а также основную информацию об объекте (например, его имя). Когда вы используете API Kubernetes для создания объекта (напрямую либо через `kubectl`), соответствующий API-запрос должен включать в теле запроса всю указанную информацию в JSON-формате. **В большинстве случаев вы будете передавать `kubectl` эти данные, записанные в файле .yaml**. Тогда инструмент `kubectl` преобразует их в формат JSON при выполнении запроса к API. + +Ниже представлен пример `.yaml`-файла, в котором заданы обязательные поля и спецификация объекта, необходимая для объекта Deployment в Kubernetes: + +{{< codenew file="application/deployment.yaml" >}} + +Один из способов создания объекта Deployment с помощью файла `.yaml`, показанного выше — использовать команду [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply), которая принимает в качестве аргумента файл в формате `.yaml`. Например: + +```shell +kubectl apply -f https://k8s.io/examples/application/deployment.yaml --record +``` + +Вывод будет примерно таким: + +``` +deployment.apps/nginx-deployment created +``` + +### Обязательные поля + +В файле `.yaml` создаваемого объекта Kubernetes необходимо указать значения для следующих полей: + +* `apiVersion` — используемая для создания объекта версия API Kubernetes +* `kind` — тип создаваемого объекта +* `metadata` — данные, позволяющие идентифицировать объект (`name`, `UID` и необязательное поле `namespace`) +* `spec` — требуемое состояние состояние объекта + +Конкретный формат поля-объекта `spec` зависит от типа объекта Kubernetes и содержит вложенные поля, предназначенные только для используемого объекта. В [справочнике API Kubernetes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) можно найти формат спецификации любого объекта Kubernetes. +Например, формат `spec` для объекта Pod находится в [ядре PodSpec v1](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core), а формат `spec` для Deployment — в [DeploymentSpec v1 apps](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deploymentspec-v1-apps). + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Обзор API Kubernetes](/docs/reference/using-api/api-overview/) более подробно объясняет некоторые из API-концепций +* Познакомиться с наиболее важными и основными объектами в Kubernetes, например, с [подами](/docs/concepts/workloads/pods/pod-overview/). 
+* Узнать подробнее про [контролеры](/docs/concepts/architecture/controller/) в Kubernetes +{{% /capture %}} diff --git a/content/ru/docs/concepts/overview/working-with-objects/labels.md b/content/ru/docs/concepts/overview/working-with-objects/labels.md new file mode 100644 index 0000000000000..b0af52c94070f --- /dev/null +++ b/content/ru/docs/concepts/overview/working-with-objects/labels.md @@ -0,0 +1,221 @@ +--- +title: Метки и селекторы +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +_Метки_ — это пары ключ-значение, которые добавляются к объектам, как поды. +Метки предназначены для идентификации атрибутов объектов, которые имеют значимость и важны для пользователей, но при этом не относятся напрямую к основной системе. +Метки можно использовать для группировки и выбора подмножеств объектов. Метки могут быть добавлены к объектам во время создания и изменены в любое время после этого. +Каждый объект может иметь набор меток в виде пары ключ-значение. Каждый ключ должен быть уникальным в рамках одного и того же объекта. + +```json +"metadata": { + "labels": { + "key1" : "value1", + "key2" : "value2" + } +} +``` + +Метки используются при получении и отслеживании объектов и в веб-панелях и CLI-инструментах. Любая неидентифицирующая информация должна быть записана в [аннотации](/ru/docs/concepts/overview/working-with-objects/annotations/). + +{{% /capture %}} + +{{% capture body %}} + +## Причины использования + +Метки позволяют пользователям гибко сопоставить их организационные структуры с системными объектами, не требуя от клиентов хранить эти соответствия. + +Развертывания сервисов и процессы пакетной обработки часто являются многомерными сущностями (например, множество разделов или развертываний, несколько групп выпусков, несколько уровней приложения, несколько микросервисов на каждый уровень приложения). Для управления часто требуются сквозные операции, которые нарушают инкапсуляцию строго иерархических представлений, особенно жестких иерархий, определяемых инфраструктурой, а не пользователями. + +Примеры меток: + + * `"release" : "stable"`, `"release" : "canary"` + * `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"` + * `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "cache"` + * `"partition" : "customerA"`, `"partition" : "customerB"` + * `"track" : "daily"`, `"track" : "weekly"` + +Это всего лишь примеры часто используемых меток; конечно, вы можете использовать свои собственные. Помните о том, что ключ метки должна быть уникальной в пределах одного объекта. + +## Синтаксис и набор символов + +_Метки_ представляют собой пары ключ-значение. Разрешенные ключи метки имеют два сегмента, разделённые слешем (`/`): префикс (необязательно) и имя. Сегмент имени обязателен и должен содержать не более 63 символов, среди которых могут быть буквенно-цифровые символы (`[a-z0-9A-Z]`), а также дефисы (`-`), знаки подчеркивания (`_`), точки (`.`). Префикс не обязателен, но он быть поддоменом DNS: набор меток DNS, разделенных точками (`.`), общей длиной не более 253 символов, за которыми следует слеш (`/`). + +Если префикс не указан, ключ метки считается закрытым для пользователя. Компоненты автоматизированной системы (например, `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl` или другие сторонние), которые добавляют метки к объектам пользователя, должны указывать префикс. + +Префиксы `kubernetes.io/` и `k8s.io/` зарезервированы для использования основными компонентами Kubernetes. 
+ +Например, ниже представлен конфигурационный файл объекта Pod с двумя метками `environment: production` и `app: nginx`: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: label-demo + labels: + environment: production + app: nginx +spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 +``` + +## Селекторы меток + +В отличие от [имен и идентификаторов](/docs/user-guide/identifiers), метки не гарантируют уникальность. Поэтому мы предполагаем, что многие объекты будут иметь одинаковые метки. + +С помощью _селектора меток_ клиент/пользователь может идентифицировать набор объектов. Селектор меток — основное средство группировки в Kubernetes. + +В настоящее время API поддерживает два типа селекторов: _на равенстве_ и _на наборе_. +Селектор меток может состоять из нескольких _условий_, разделенных запятыми. В таком случае все условия должны быть выполнены, поэтому запятая-разделитель работает как логический оператор _И_ (`&&`). + +Работа пустых или неопределённых селекторов зависит от контекста. Типы API, которые использует селекторы, должны задокументировать это поведение. + +{{< note >}} +Для некоторых API-типов, например, ReplicaSets, селекторы меток двух экземпляров не должны дублироваться в пространстве имен, в противном случае контроллер может рассматривать их как конфликтующие инструкции и не сможет определить количество реплик. +{{< /note >}} + +{{< caution >}} +Как для условий, основанных на равенстве, так и для условий на основе набора, не существует логического оператора _ИЛИ_ (`||`). Убедитесь, что синтаксис фильтрации правильно составлен. +{{< /caution >}} + +### Условие _равенства_ + +Условия _равенства_ или _неравенства_ позволяют отфильтровать объекты по ключам и значениям меток. Сопоставляемые объекты должны удовлетворять всем указанным условиям меток, хотя при этом у объектов также могут быть заданы другие метки. +Доступны три оператора: `=`,`==`,`!=`. Первые два означают _равенство_ (и являются всего лишь синонимами), а последний оператор определяет _неравенство_. Например: + +``` +environment = production +tier != frontend +``` + +Первый пример выбирает все ресурсы с ключом `environment`, у которого значение указано `production`. +Последний получает все ресурсы с ключом `tier` без значения `frontend`, а также все ресурсы, в которых нет метки с ключом `tier`. +Используя оператор запятой можно совместить показанные два условия в одно, запросив ресурсы, в которых есть значение метки `production` и исключить `frontend`: `environment=production,tier!=frontend`. + +С помощью условия равенства в объектах Pod можно указать, какие нужно выбрать ресурсы. Например, в примере ниже объект Pod выбирает узлы с меткой "`accelerator=nvidia-tesla-p100`". + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: cuda-test +spec: + containers: + - name: cuda-test + image: "k8s.gcr.io/cuda-vector-add:v0.1" + resources: + limits: + nvidia.com/gpu: 1 + nodeSelector: + accelerator: nvidia-tesla-p100 +``` + +### Условие _набора_ + +Условие меток _на основе набора_ фильтрует ключи в соответствии с набором значений. Поддерживаются три вида операторов: `in`, `notin` и `exists` (только идентификатор ключа). Например: + +``` +environment in (production, qa) +tier notin (frontend, backend) +partition +!partition +``` + +В первом примере выбираются все ресурсы с ключом `environment` и значением `production` или `qa`. 
+Во втором примере выбираются все ресурсы с ключом `tier` и любыми значениями, кроме `frontend` и `backend`, а также все ресурсы без меток с ключом `tier`. +Третий пример выбирает все ресурсы, включая метку с ключом `partition` (с любым значением). +В четвертом примере выбираются все ресурсы без метки с ключом `partition` (с любым значением). +Как и логический оператор _И_ работает разделитель в виде запятой. Таким образом, фильтрация ресурсов по ключу `partition` (вне зависимости от значения) и ключу `environment` с любым значением, кроме `qa`, можно получить с помощью следующего выражения: `partition,environment notin (qa)`. +Селектор меток _на основе набора_ — основная форма равенства, поскольку `environment=production` то же самое, что и `environment in (production)`; аналогично, оператор `!=` соответствует `notin`. + +Условия _набора_ могут использоваться одновременно с условия _равенства_. Например, так: `partition in (customerA, customerB),environment!=qa`. + +## API + +### Фильтрация LIST и WATCH + +Операции LIST и WATCH могут использовать параметр запроса, чтобы указать селекторы меток фильтрации наборов объектов. Есть поддержка обоих условий (строка запроса URL ниже показывается в исходном виде): + + * Условия _на основе равенства_: `?labelSelector=environment%3Dproduction,tier%3Dfrontend` + * Условия _на основе набора_: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` + +Указанные выше формы селектора меток можно использовать для просмотра или отслеживания ресурсов через REST-клиент. Например, `apiserver` с `kubectl`, который использует _условие равенства_: + +```shell +kubectl get pods -l environment=production,tier=frontend +``` + +Либо используя условия _на основе набора_: + +```shell +kubectl get pods -l 'environment in (production),tier in (frontend)' +``` + +Как уже показывалось, _условия набора_ дают больше возможностей. Например, в них можно использовать подобие оператора _И_: + +```shell +kubectl get pods -l 'environment in (production, qa)' +``` + +Либо можно воспользоваться исключающим сопоставлением с помощью оператора _exists_: + +```shell +kubectl get pods -l 'environment,environment notin (frontend)' +``` + +### Установка ссылок в API-объекты + +Некоторые объекты Kubernetes, такие как [`services`](/docs/user-guide/services) и [`replicationcontrollers`](/docs/user-guide/replication-controller), также используют селекторы меток для ссылки на наборы из других ресурсов, например, [подов](/docs/user-guide/pods). + +#### Service и ReplicationController + +Набор подов, на которые указывает `service`, определяется через селектор меток. Аналогичным образом, количество подов, которыми должен управлять `replicationcontroller`, также формируются с использованием селектора меток. + +Селекторы меток для обоих объектов записываются в словарях файлов формата `json` и `yaml`, при этом поддерживаются только селекторы с условием _равенства_: + +```json +"selector": { + "component" : "redis", +} +``` + +Или: + +```yaml +selector: + component: redis +``` + +Этот селектор (как в формате `json`, так и в `yaml`) эквивалентен `component=redis` или `component in (redis)`. + +#### Ресурсы, поддерживающие условия набора + +Новые ресурсы, такие как [`Job`](/docs/concepts/workloads/controllers/jobs-run-to-completion/), [`Deployment`](/docs/concepts/workloads/controllers/deployment/), [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) и [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/), также поддерживают условия _набора_. 
+
+```yaml
+selector:
+  matchLabels:
+    component: redis
+  matchExpressions:
+    - {key: tier, operator: In, values: [cache]}
+    - {key: environment, operator: NotIn, values: [dev]}
+```
+
+`matchLabels` — словарь пар `{key,value}`. Каждая пара `{key,value}` в словаре `matchLabels` эквивалентна элементу `matchExpressions`, где поле `key` — "key", поле `operator` — "In", а массив `values` содержит только "value".
+`matchExpressions` представляет собой список условий селектора пода. В качестве операторов могут использоваться `In`, `NotIn`, `Exists` и `DoesNotExist`. При использовании `In` и `NotIn` должны быть заданы непустые значения. Все условия, как для `matchLabels`, так и для `matchExpressions`, объединяются с помощью логического И, поэтому при выборке объектов все они должны быть выполнены.
+
+#### Выбор наборов узлов
+
+Один из вариантов использования меток — возможность выбора набора узлов, на которых может быть развернут под.
+Смотрите документацию про [выбор узлов](/docs/concepts/configuration/assign-pod-node/), чтобы получить дополнительную информацию.
+
+{{% /capture %}}
diff --git a/content/ru/docs/concepts/overview/working-with-objects/names.md b/content/ru/docs/concepts/overview/working-with-objects/names.md
new file mode 100644
index 0000000000000..aedbb44667f23
--- /dev/null
+++ b/content/ru/docs/concepts/overview/working-with-objects/names.md
@@ -0,0 +1,78 @@
+---
+title: Имена и идентификаторы объектов
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+Каждый объект в кластере имеет уникальное [_имя_](#имена) для конкретного типа ресурса.
+Кроме этого, у каждого объекта Kubernetes есть собственный [_уникальный идентификатор (UID)_](#идентификаторы) в пределах кластера.
+
+Например, в одном и том же [пространстве имён](/ru/docs/concepts/overview/working-with-objects/namespaces/) может быть только один Pod-объект с именем `myapp-1234`, и при этом может существовать объект Deployment с таким же именем `myapp-1234`.
+
+Для создания пользовательских неуникальных атрибутов у Kubernetes есть [метки](/ru/docs/concepts/overview/working-with-objects/labels/) и [аннотации](/ru/docs/concepts/overview/working-with-objects/annotations/).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Имена
+
+{{< glossary_definition term_id="name" length="all" >}}
+
+Ниже перечислены три типа распространённых требований к именам ресурсов.
+
+### Имена поддоменов DNS
+
+Большинству типов ресурсов нужно указать имя, используемое в качестве имени поддомена DNS в соответствии с [RFC 1123](https://tools.ietf.org/html/rfc1123). Соответственно, имя должно:
+
+- содержать не более 253 символов
+- содержать только строчные буквенно-цифровые символы, '-' или '.'
+- начинаться с буквенно-цифрового символа
+- заканчиваться буквенно-цифровым символом
+
+### Имена меток DNS
+
+Имена некоторых типов ресурсов должны соответствовать стандарту меток DNS, описанному в [RFC 1123](https://tools.ietf.org/html/rfc1123). Таким образом, имя должно:
+
+- содержать не более 63 символов
+- содержать только строчные буквенно-цифровые символы или '-'
+- начинаться с буквенно-цифрового символа
+- заканчиваться буквенно-цифровым символом
+
+### Имена сегментов пути
+
+Имена некоторых типов ресурсов должны быть закодированы так, чтобы их можно было безопасно использовать в качестве сегмента пути. Проще говоря, имя не может быть "." или "..", а также не может содержать "/" или "%".
+
+Пример файла манифеста для пода с именем `nginx-demo`:
+ +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: nginx-demo +spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 +``` + +{{< note >}} +У отдельных типов ресурсов есть дополнительные ограничения именования. +{{< /note >}} + +## Уникальные идентификаторы + +{{< glossary_definition term_id="uid" length="all" >}} + +Уникальные идентификатор (UID) в Kubernetes — это универсальные уникальные идентификаторы (известные также как Universally Unique IDentifier, сокращенно UUID). +Эти идентификаторы стандартизированы под названием ISO/IEC 9834-8, а также как ITU-T X.667. + +{{% /capture %}} +{{% capture whatsnext %}} +* Узнать подробнее про [метки](/ru/docs/concepts/overview/working-with-objects/labels/) в Kubernetes. +* Посмотреть архитектуру [идентификаторов и имён Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md). +{{% /capture %}} diff --git a/content/ru/docs/concepts/overview/working-with-objects/namespaces.md b/content/ru/docs/concepts/overview/working-with-objects/namespaces.md new file mode 100644 index 0000000000000..75eae895ede1c --- /dev/null +++ b/content/ru/docs/concepts/overview/working-with-objects/namespaces.md @@ -0,0 +1,97 @@ +--- +title: Пространства имён +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +Kubernetes поддерживает несколько виртуальных кластеров в одном физическом кластере. Такие виртуальные кластеры называются пространствами имён. + +{{% /capture %}} + +{{% capture body %}} + +## Причины использования нескольких пространств имён + +Пространства имён применяются в окружениях с многочисленными пользователями, распределенными по нескольким командам или проектам. Пространства имён не нужно создавать, если есть кластеры с небольшим количеством пользователей (например, десяток пользователей). Пространства имён имеет смысл использовать, когда необходима такая функциональность. + +Пространства имён определяют область имён. Имена ресурсов должны быть уникальными в пределах одного и того же пространства имён. Пространства имён не могут быть вложенными, а каждый ресурс Kubernetes может находиться только в одном пространстве имён. + +Пространства имён — это способ разделения ресурсов кластера между несколькими пользователями (с помощью [квоты ресурсов](/docs/concepts/policy/resource-quotas/)). + +По умолчанию в будущих версиях Kubernetes объекты в одном и том же пространстве имён будут иметь одинаковую политику контроля доступа. + +Не нужно использовать пространства имён только для разделения слегка отличающихся ресурсов, например, в случае разных версий одного и того же приложения. Используйте [метки](/ru/docs/concepts/overview/working-with-objects/labels/), чтобы различать ресурсы в рамках одного пространства имён. + +## Использование пространств имён + +Создание и удаление пространств имён описаны в [руководстве администратора по пространствам имён](/docs/admin/namespaces). + +### Просмотр пространств имён + +Используйте следующую команду, чтобы вывести список существующих пространств имён в кластере: + +```shell +kubectl get namespace +``` +``` +NAME STATUS AGE +default Active 1d +kube-system Active 1d +kube-public Active 1d +``` + +По умолчанию в Kubernetes определены три пространства имён: + + * `default` — пространство имён по умолчанию для объектов без какого-либо другого пространства имён. 
+ * `kube-system` — пространство имён для объектов, созданных Kubernetes + * `kube-public` — создаваемое автоматически пространство имён, которое доступно для чтения всем пользователям (включая также неаутентифицированных пользователей). Как правило, это пространство имён используется кластером, если некоторые ресурсы должны быть общедоступными для всего кластера. Главная особенность этого пространства имён — оно всего лишь соглашение, а не требование. + +### Определение пространства имён для отдельных команд + +Используйте флаг `--namespace`, чтобы определить пространство имён только для текущего запроса. + +Примеры: + +```shell +kubectl run nginx --image=nginx --namespace= +kubectl get pods --namespace= +``` + +### Определение пространства имён для всех команд + +Можно определить пространство имён, которое должно использоваться для всех выполняемых команд kubectl в текущем контексте. + +```shell +kubectl config set-context --current --namespace= +# Проверка +kubectl config view --minify | grep namespace: +``` + +## Пространства имён и DNS + +При создании [сервиса](/docs/user-guide/services) создаётся соответствующая ему [DNS-запись](/docs/concepts/services-networking/dns-pod-service/). +Эта запись вида `..svc.cluster.local` означает, что если контейнер использует только ``, то он будет локальным сервисом в пространстве имён. Это позволит применять одну и ту же конфигурацию в нескольких пространствах имен (например, development, staging и production). Если нужно обращаться к другим пространствам имён, то нужно использовать полностью определенное имя домена (FQDN). + +## Объекты без пространства имён + +Большинство ресурсов Kubernetes (например, поды, сервисы, контроллеры репликации и другие) расположены в определённых пространствах имён. При этом сами ресурсы пространства имён не находятся ни в других пространствах имён. А такие низкоуровневые ресурсы, как [узлы](/docs/admin/node) и persistentVolumes, не принадлежат ни одному пространству имён. + +Чтобы посмотреть, какие ресурсы Kubernetes находятся в пространстве имён, а какие — нет, используйте следующие команды: + +```shell +# Ресурсы в пространстве имён +kubectl api-resources --namespaced=true + +# Ресурсы, не принадлежавшие ни одному пространству имён +kubectl api-resources --namespaced=false +``` + +{{% /capture %}} + +{{% capture whatsnext %}} +* Узнать подробнее про [создание нового пространства имён](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace). +* Узнать подробнее про [удаление пространства имён](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace). + +{{% /capture %}} diff --git a/content/ru/docs/concepts/overview/working-with-objects/object-management.md b/content/ru/docs/concepts/overview/working-with-objects/object-management.md new file mode 100644 index 0000000000000..c440ea082bd9c --- /dev/null +++ b/content/ru/docs/concepts/overview/working-with-objects/object-management.md @@ -0,0 +1,168 @@ +--- +title: Управление объектами Kubernetes +content_template: templates/concept +weight: 15 +--- + +{{% capture overview %}} + +В инструменте командной строки `kubectl` есть несколько разных способов создания и управления объектами Kubernetes. На этой странице рассматриваются различные подходы. Изучите [документацию по Kubectl](https://kubectl.docs.kubernetes.io) для получения подробной информации по управлению объектами с помощью Kubectl. + +{{% /capture %}} + +{{% capture body %}} + +## Способы управления + +{{< warning >}} + +Используйте только один способ для управления объектами Kubernetes. 
Применение нескольких методов управления к одному и тому же объекту может привести к неопределенному поведению. + +{{< /warning >}} + +| Способ управления | Область применения |Рекомендуемое окружение | Количество поддерживаемых авторов | Трудность изучения | +|----------------------------------|----------------------|------------------------|--------------------|----------------| +| Императивные команды | Активные объекты | Проекты в стадии разработки | 1+ | Низкая | +| Императивная конфигурация объекта | Отдельные файлы | Продакшен-проекты | 1 | Средняя | +| Декларативная конфигурация объекта | Директории или файлы | Продакшен-проекты | 1+ | Сложная | + +## Императивные команды + +При использовании императивных команд пользователь работает непосредственно с активными (текущими) объектами в кластере. Пользователь указывает выполняемые операции команде `kubectl` в качестве аргументов или флагов. + +Это самый простой способ начать или выполнять одноразовые задачи в кластере. Из-за того, что происходит работа с активными объектами напрямую, нет возможности посмотреть историю предыдущих конфигураций. + +### Примеры + +Запустите экземпляр контейнера nginx, посредством создания объекта Deployment: + +```sh +kubectl run nginx --image nginx +``` + +То же самое, но с другим синтаксисом: + +```sh +kubectl create deployment nginx --image nginx +``` + +### Плюсы и минусы + +Преимущества по сравнению с конфигурацией объекта: + +- Простые команды, которые легко выучить и запомнить. +- Для применения изменений в кластер нужно только выполнить команды. + +Недостатки по сравнению с конфигурацией объекта: + +- Команды не интегрированы с процессом проверки (обзора) изменений. +- У команд нет журнала с изменениями. +- Команды не дают источник записей, за исключением активных объектов. +- Команды не содержат шаблон для создания новых объектов. + +## Императивная конфигурация объекта + +В случае использования императивной конфигурации объекта команде kubectl устанавливают действие (создание, замена и т.д.), необязательные флаги и как минимум одно имя файла. Файл должен содержать полное определение объекта в формате YAML или JSON. + +Посмотрите [Справочник API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) для получения более подробной информации про определения объекта. + +{{< warning >}} + +Императивная команда `replace` заменяет существующую спецификацию новой (переданной), удаляя все изменения в объекте, которые не определены в конфигурационном файле. Такой подход не следует использовать для типов ресурсов, спецификации которых обновляются независимо от конфигурационного файла. +Например, поле `externalIPs` в сервисах типа `LoadBalancer` обновляется кластером независимо от конфигурации. + +{{< /warning >}} + +### Примеры + +Создать объекты, определенные в конфигурационном файле: + +```sh +kubectl create -f nginx.yaml +``` + +Удалить объекты, определенные в двух конфигурационных файлах: + +```sh +kubectl delete -f nginx.yaml -f redis.yaml +``` + +Обновить объекты, определенные в конфигурационном файле, перезаписав текущую конфигурацию: + +```sh +kubectl replace -f nginx.yaml +``` + +### Плюсы и минусы + +Преимущества по сравнению с императивными командами: + +- Конфигурация объекта может храниться в системе управления версиями, такой как Git. +- Конфигурация объекта может быть интегрирована с процессами проверки изменений и логирования. +- Конфигурация объекта предусматривает шаблон для создания новых объектов. 
+
+Недостатки по сравнению с императивными командами:
+
+- Конфигурация объекта требует наличия общего представления о схеме объекта.
+- Конфигурация объекта предусматривает написание файла YAML.
+
+Преимущества по сравнению с декларативной конфигурацией объекта:
+
+- Императивная конфигурация объекта проще и легче для понимания.
+- Начиная с Kubernetes 1.5, императивная конфигурация объекта стала более зрелой.
+
+Недостатки по сравнению с декларативной конфигурацией объекта:
+
+- Императивная конфигурация объекта наилучшим образом работает с файлами, а не с директориями.
+- Обновления активных объектов должны быть отражены в конфигурационных файлах, иначе они будут потеряны при следующей замене.
+
+## Декларативная конфигурация объекта
+
+При использовании декларативной конфигурации объекта пользователь работает с локальными конфигурационными файлами объектов, не определяя явно операции, которые будут над ними выполняться. Операции создания, обновления и удаления для каждого объекта автоматически определяются `kubectl`. Благодаря этому механизму можно работать с директориями, когда для разных объектов могут требоваться разные операции.
+
+{{< note >}}
+Декларативная конфигурация объекта сохраняет изменения, сделанные другими участниками, даже если эти изменения не зафиксированы обратно в конфигурационном файле объекта.
+Это достигается за счёт API-операции `patch`, которая записывает только обнаруженные отличия, вместо API-операции `replace`, полностью заменяющей конфигурацию объекта. Небольшой пример просмотра сохранённой конфигурации приведён в конце раздела.
+{{< /note >}}
+
+### Примеры
+
+Обработать все конфигурационные файлы объектов в директории `configs` и создать либо частично обновить активные объекты. Сначала можно выполнить `diff`, чтобы посмотреть, какие изменения будут внесены, и только после этого применить их:
+
+```sh
+kubectl diff -f configs/
+kubectl apply -f configs/
+```
+
+Рекурсивная обработка директорий:
+
+```sh
+kubectl diff -R -f configs/
+kubectl apply -R -f configs/
+```
+
+### Плюсы и минусы
+
+Преимущества по сравнению с императивной конфигурацией объекта:
+
+- Изменения, внесенные непосредственно в активные объекты, будут сохранены, даже если они не отражены в конфигурационных файлах.
+- Декларативная конфигурация объекта лучше работает с директориями и автоматически определяет тип операции (создание, частичное обновление, удаление) для каждого объекта.
+
+Недостатки по сравнению с императивной конфигурацией объекта:
+
+- Декларативную конфигурацию объекта сложнее отлаживать, а её результаты — сложнее интерпретировать, когда они оказываются неожиданными.
+- Частичные обновления на основе различий приводят к сложным операциям слияния и исправления.
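+
+Небольшой набросок для иллюстрации (имя `nginx-deployment` здесь условное): так можно посмотреть последнюю применённую конфигурацию, которую `kubectl apply` сохраняет в аннотации `kubectl.kubernetes.io/last-applied-configuration` объекта и использует при вычислении частичных обновлений:
+
+```sh
+# Показать последнюю применённую конфигурацию активного объекта
+kubectl apply view-last-applied deployment/nginx-deployment
+```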
+ +{{% /capture %}} + +{{% capture whatsnext %}} + +- [Управление объектами Kubernetes с помощью императивных команд](/docs/tasks/manage-kubernetes-objects/imperative-command/) +- [Управление объектами Kubernetes с помощью императивной конфигурации объекта](/docs/tasks/manage-kubernetes-objects/imperative-config/) +- [Управление объектами Kubernetes с помощью декларативной конфигурации объекта](/docs/tasks/manage-kubernetes-objects/declarative-config/) +- [Управление объектами Kubernetes с помощью Kustomize (декларативный способ)](/docs/tasks/manage-kubernetes-objects/kustomization/) +- [Справочник по командам Kubectl](/docs/reference/generated/kubectl/kubectl-commands/) +- [Документация Kubectl](https://kubectl.docs.kubernetes.io) +- [Справочник API Kubernetes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) + +{{% /capture %}} diff --git a/content/ru/docs/contribute/start.md b/content/ru/docs/contribute/start.md index 91515c378c38f..da0af7eb22576 100644 --- a/content/ru/docs/contribute/start.md +++ b/content/ru/docs/contribute/start.md @@ -25,21 +25,12 @@ card: Вы можете создавать новые задачи, редактировать содержимое и проверять изменения от других участников, — всё это доступно с сайта GitHub. Вы также можете использовать встроенный в GitHub поиск и историю коммитов. -<<<<<<< HEAD Не все задачи могут быть выполнены с помощью интерфейса GitHub, но некоторые из них обсуждаются в руководствах для [продвинутых](/ru/docs/contribute/intermediate/) и [опытных](/ru/docs/contribute/advanced/) участников. ### Участие в документации SIG Документация Kubernetes поддерживается {{< glossary_tooltip text="специальной группой" term_id="sig" >}} (Special Interest Group, SIG) под названием SIG Docs. Мы [общаемся](#participate-in-sig-docs-discussions) с помощью канала Slack, списка рассылки и еженедельных видеозвонков. Будем рады новым участникам. Для получения дополнительной информации обратитесь к странице [Участие в SIG Docs](ru/docs/contribute/participating/). -======= -Не все задачи могут быть выполнены на GitHub, поэтому они обсуждаются в [intermediate](/docs/contribute/intermediate/) and -[advanced](/docs/contribute/advanced/) docs contribution guides. - -### Участие в документации SIG - -Документация Kubernetes поддерживается {{< glossary_tooltip text="специальной группой" term_id="sig" >}} (Special Interest Group, SIG) под названием SIG Docs. Мы [общаемся](#participate-in-sig-docs-discussions) с помощью канала Slack, списка рассылки и еженедельных видеозвонков. Будем рады новым участникам. Для получения дополнительной информации обратитесь к странице [Participating in SIG Docs](/docs/contribute/participating/). ->>>>>>> bc6cb17bb... Translate Start contributing page into Russian (#19124) ### Руководящие принципы по содержанию @@ -49,11 +40,7 @@ card: Мы поддерживаем [руководство по оформлению](/docs/contribute/style/style-guide/) с информацией о выборе, сделанном сообществом SIG Docs в отношении грамматики, синтаксиса, исходного форматирования и типографских соглашений. Прежде чем сделать свой первый вклад, просмотрите руководство по стилю и используйте его, когда у вас есть вопросы. -<<<<<<< HEAD SIG Docs совместными усилиями вносит изменения в руководство по оформлению. Чтобы предложить изменение или дополнение, добавьте его в повестку дня предстоящей встречи SIG Docs и посетите её, чтобы принять участие в обсуждении. Смотрите руководство для [продвинутых участников](/docs/contribute/advanced/), чтобы получить дополнительную информацию. 
-======= -SIG Docs совместными усилиями вносит изменения в руководство по оформлению. Чтобы предложить изменение или дополнение, добавьте его в повестку дня предстоящей встречи SIG Docs и посетите её, чтобы принять участие в обсуждении. Смотрите страницу с [продвинутым руководством](/docs/contribute/advanced/) для получения дополнительной информации. ->>>>>>> bc6cb17bb... Translate Start contributing page into Russian (#19124) ### Шаблоны страниц @@ -69,11 +56,7 @@ SIG Docs совместными усилиями вносит изменения Более подробную информацию про участие в работе над документацией на нескольких языках ["Localize content"](/docs/contribute/intermediate#localize-content) в промежуточном руководстве по добавлению. -<<<<<<< HEAD Если вы заинтересованы в переводе документации на новый язык, посмотрите раздел ["Локализация"](/ru/docs/contribute/localization/). -======= -Если вы заинтересованы в переводе документации на новый язык, посмотрите раздел ["Локализация"](/docs/contribute/localization/). ->>>>>>> bc6cb17bb... Translate Start contributing page into Russian (#19124) ## Создание хороших заявок @@ -83,11 +66,7 @@ SIG Docs совместными усилиями вносит изменения - **Для существующей страницы** -<<<<<<< HEAD Если заметили проблему на существующей странице в [документации Kubernetes](/ru/docs/), перейдите в конец страницы и нажмите кнопку **Create an Issue**. Если вы ещё не авторизованы в GitHub, сделайте это. После этого откроется страница с форма для создания нового запроса в GitHub с уже предварительно заполненным полями. -======= - Если заметили проблему на существующей странице в [документации Kubernetes](/docs/), перейдите в конец страницы и нажмите кнопку **Create an Issue**. Если вы ещё не авторизованы в GitHub, сделайте это. После этого откроется страница с форма для создания нового запроса в GitHub с уже предварительно заполненным полями. ->>>>>>> bc6cb17bb... Translate Start contributing page into Russian (#19124) При помощи разметки Markdown опишите как можно подробнее, что хотите. Там, где вы видите пустые квадратные скобки (`[ ]`), проставьте `x` между скобками. Если у вас есть предлагаемое решение проблемы, напишите его. @@ -127,11 +106,7 @@ SIG Docs совместными усилиями вносит изменения Примечание. Разработчики кода Kubernetes. Если вы документируете новую функцию для предстоящего выпуска Kubernetes, ваш процесс будет немного другим. См. Документирование функции для руководства по процессу и информации о сроках. {{< note >}} -<<<<<<< HEAD **Для разработчиков кода Kubernetes**: если вы документируете новую функциональность для новой версии Kubernetes, то процесс рассмотрения будет немного другим. Посетите страницу [Документирование функциональности](/ru/docs/contribute/intermediate/#добавление-документации-для-новой-функциональности), чтобы узнать про процесс и информацию о крайних сроках. -======= -**Для разработчиков кода Kubernetes**: если вы документируете новую функциональность для новой версии Kubernetes, то процесс рассмотрения будет немного другим. Посетите страницу [Документирование функциональности](/docs/contribute/intermediate/#sig-members-documenting-new-features), чтобы узнать про процесс и информацию о крайних сроках. ->>>>>>> bc6cb17bb... Translate Start contributing page into Russian (#19124) {{< /note >}} ### Подписание CLA-соглашения CNCF {#sign-the-cla} @@ -141,11 +116,7 @@ SIG Docs совместными усилиями вносит изменения ### Поиск задач для работы -<<<<<<< HEAD Если вы уже нашли что исправить, просто следуйте инструкциям ниже. 
Для этого вам не обязательно [создавать ишью](#создание-хороших-заявок) (хотя вы, безусловно, пойти этим путём). -======= -Если вы уже нашли что исправить, просто следуйте инструкциям ниже. Для этого вам не обязательно [создавать ишью](#file-actionable-issues) (хотя вы, безусловно, пойти этим путём). ->>>>>>> bc6cb17bb... Translate Start contributing page into Russian (#19124) Если вы хотите ещё не определились с тем, над чем хотите поработать, перейдите по адресу [https://github.com/kubernetes/website/issues](https://github.com/kubernetes/website/issues) и найдите ишью с меткой `good first issue` (вы можете использовать [эту](https://github.com/kubernetes/website/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) ссылку для быстрого поиска). Прочитайте комментарии, чтобы убедиться что нет открытого пулреквеста для решения текущей ишью, а также, что никто другой не оставил комментарий, что он работает над этой задачей в последнее время (как правило, 3 дня). Добавьте комментарий, что вы хотели бы заняться решением этой задачи. @@ -203,11 +174,7 @@ SIG Docs совместными усилиями вносит изменения 7. Если ваши изменения одобрены, то рецензент объединяет соответствующий пулреквест. Через несколько минут вы сможете сможете увидеть его в действии на сайте Kubernetes. -<<<<<<< HEAD Это только один из способов отправить пулреквест. Если вы уже опытный пользователь Git и GitHub, вы можете вносить изменения, используя локальный GUI-клиент или Git из терминала вместо того, чтобы использовать интерфейс GitHub для этого. Некоторые основы использования Git-клиента из командной строки обсуждаются в руководстве для [продвинутого участника](/ru/docs/contribute/intermediate/). -======= -Это только один из способов отправить пулреквест. Если вы уже опытный пользователь Git и GitHub, вы можете вносить изменения, используя локальный GUI-клиент или Git из терминала вместо того, чтобы использовать интерфейс GitHub для этого. Некоторые основы использования Git-клиента из командной строки обсуждаются в [продвинутом](/docs/contribute/intermediate/) руководстве участника. ->>>>>>> bc6cb17bb... Translate Start contributing page into Russian (#19124) ## Просмотр пулреквестов в документацию diff --git a/content/ru/docs/home/_index.md b/content/ru/docs/home/_index.md index e217224aaffa6..13c08174b2e40 100644 --- a/content/ru/docs/home/_index.md +++ b/content/ru/docs/home/_index.md @@ -5,7 +5,7 @@ title: Документация по Kubernetes noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "Главная" main_menu: true weight: 10 diff --git a/content/ru/docs/reference/glossary/name.md b/content/ru/docs/reference/glossary/name.md new file mode 100755 index 0000000000000..97d9873b82f74 --- /dev/null +++ b/content/ru/docs/reference/glossary/name.md @@ -0,0 +1,17 @@ +--- +title: Имя +id: name +date: 2018-04-12 +full_link: /docs/concepts/overview/working-with-objects/names +short_description: > + Клиентская строка, предназначенная для ссылки на объект в URL-адресе ресурса, например `/api/v1/pods/some-name`. + +aka: +tags: +- fundamental +--- + Клиентская строка, предназначенная для ссылки на объект в URL-адресе ресурса, например `/api/v1/pods/some-name`. + + + +Указанное имя может иметь только один объект определённого типа. 
Но если вы удалите этот объект, вы можете создать новый с таким же именем diff --git a/content/ru/docs/reference/glossary/uid.md b/content/ru/docs/reference/glossary/uid.md new file mode 100755 index 0000000000000..fe050683baa80 --- /dev/null +++ b/content/ru/docs/reference/glossary/uid.md @@ -0,0 +1,17 @@ +--- +title: UID +id: uid +date: 2018-04-12 +full_link: /docs/concepts/overview/working-with-objects/names +short_description: > + Уникальная строка, сгенерированная самим Kubernetes, для идентификации объектов. + +aka: +tags: +- fundamental +--- + Уникальная строка, сгенерированная самим Kubernetes, для идентификации объектов. + + + +У каждого объекта, созданного в течение всего периода работы кластера Kubernetes, есть собственный уникальный идентификатор (UID). Он предназначен для выяснения различий между событиями похожих сущностей. \ No newline at end of file diff --git a/content/ru/docs/setup/learning-environment/minikube.md b/content/ru/docs/setup/learning-environment/minikube.md index 586a491ad3627..b55beb69f7da6 100644 --- a/content/ru/docs/setup/learning-environment/minikube.md +++ b/content/ru/docs/setup/learning-environment/minikube.md @@ -67,7 +67,7 @@ Minikube поддерживает следующие возможности Kube deployment.apps/hello-minikube created ``` -3. Чтобы получить доступ к объекту Deployment `hello-minikube` извне, создайте объект сервиса (Service): +3. Чтобы получить доступ к объекту Deployment `hello-minikube` извне, создайте объект сервиса (Service): ```shell kubectl expose deployment hello-minikube --type=NodePort --port=8080 @@ -81,32 +81,34 @@ Minikube поддерживает следующие возможности Kube service/hello-minikube exposed ``` -4. Под (Pod) `hello-minikube` теперь запущен, но нужно подождать, пока он начнёт функционировать, прежде чем обращаться к нему. +4. Под (Pod) `hello-minikube` теперь запущен, но нужно подождать, пока он начнёт функционировать, прежде чем обращаться к нему. - Проверьте, что под работает: + Проверьте, что под работает: - ```shell - kubectl get pod - ``` + ```shell + kubectl get pod + ``` + + Если в столбце вывода `STATUS` выводится `ContainerCreating`, значит под все еще создается: - Если в столбце вывода `STATUS` выводится `ContainerCreating`, значит под все еще создается: + ``` + NAME READY STATUS RESTARTS AGE + hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s + ``` - ``` - NAME READY STATUS RESTARTS AGE - hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s - ``` + Если в столбце `STATUS` указано `Running`, то под теперь в рабочем состоянии: - Если в столбце `STATUS` указано `Running`, то под теперь в рабочем состоянии: + ``` + NAME READY STATUS RESTARTS AGE + hello-minikube-3383150820-vctvh 1/1 Running 0 13s + ``` - ``` - NAME READY STATUS RESTARTS AGE - hello-minikube-3383150820-vctvh 1/1 Running 0 13s - ``` 5. Узнайте URL-адрес открытого (exposed) сервиса, чтобы просмотреть подробные сведения о сервисе: - ```shell - minikube service hello-minikube --url - ``` + ```shell + minikube service hello-minikube --url + ``` + 6. Чтобы ознакомиться с подробной информацией о локальном кластере, скопируйте и откройте полученный из вывода команды на предыдущем шаге URL-адрес в браузере. Вывод будет примерно следующим: @@ -138,7 +140,8 @@ Minikube поддерживает следующие возможности Kube -no body in request- ``` - Если сервис и кластер вам больше не нужны, их можно удалить. + Если сервис и кластер вам больше не нужны, их можно удалить. + 7. 
Удалите сервис `hello-minikube`: ```shell @@ -150,6 +153,7 @@ Minikube поддерживает следующие возможности Kube ``` service "hello-minikube" deleted ``` + 8. Удалите развёртывание `hello-minikube`: ```shell @@ -161,17 +165,24 @@ Minikube поддерживает следующие возможности Kube ``` deployment.extensions "hello-minikube" deleted ``` + 9. Остановите локальный кластер Minikube: + ```shell minikube stop ``` + Вывод будет примерно следующим: + ``` Stopping "minikube"... "minikube" stopped. ``` - Подробности смотрите в разделе [Остановка кластера](#остановка-кластера). + + Подробности смотрите в разделе [Остановка кластера](#остановка-кластера). + 10. Удалите локальный кластер Minikube: + ```shell minikube delete ``` @@ -182,7 +193,8 @@ Minikube поддерживает следующие возможности Kube Deleting "minikube" ... The "minikube" cluster has been deleted. ``` - Подробности смотрите в разделе [Удаление кластера](#удаление-кластера). + + Подробности смотрите в разделе [Удаление кластера](#удаление-кластера). ## Управление кластером @@ -226,16 +238,18 @@ minikube start --vm-driver= Minikube поддерживает следующие драйверы: {{< note >}} -Смотрите файл [DRIVERS](https://git.k8s.io/minikube/docs/drivers.md) для получения подробной информации о поддерживаемых драйверах и как устанавливать плагины. +Смотрите страницу [DRIVERS](https://minikube.sigs.k8s.io/docs/reference/drivers/) для получения подробной информации о поддерживаемых драйверах и как устанавливать плагины. {{< /note >}} * virtualbox * vmwarefusion -* kvm2 ([установка драйвера](https://git.k8s.io/minikube/docs/drivers.md#kvm2-driver)) -* hyperkit ([установка драйвера](https://git.k8s.io/minikube/docs/drivers.md#hyperkit-driver)) -* hyperv ([установка драйвера](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver)) +* docker (ЭКСПЕРИМЕНТАЛЬНЫЙ) +* kvm2 ([установка драйвера](https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/)) +* hyperkit ([установка драйвера](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/)) +* hyperv ([установка драйвера](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/)) Обратите внимание, что указанный IP-адрес на этой странице является динамическим и может изменяться. Его можно получить с помощью `minikube ip`. -* vmware ([установка драйвера](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#vmware-unified-driver)) (VMware unified driver) +* vmware ([установка драйвера](https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/)) (VMware unified driver) +* parallels ([установка драйвера](https://minikube.sigs.k8s.io/docs/reference/drivers/parallels/)) * none (Запускает компоненты Kubernetes на хосте, а не на виртуальной машине. Использование этого драйвера требует использование Linux и установленного {{< glossary_tooltip term_id="docker" >}}.) {{< caution >}} @@ -357,16 +371,19 @@ Minikube имеет такую возможность как "конфигура Чтобы изменить настройку `AuthorizationMode` в `apiserver` на значение `RBAC`, используйте флаг `--extra-config=apiserver.authorization-mode=RBAC`. ### Остановка кластера + Команда `minikube stop` используется для остановки кластера. Эта команда выключает виртуальную машины Minikube, но сохраняет всё состояние кластера и данные. Повторный запуск кластера вернет его в прежнее состояние. ### Удаление кластера + Команда `minikube delete` используется для удаления кластера. Эта команда выключает и удаляет виртуальную машину Minikube. Данные или состояние не сохраняются. 
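+
+Небольшая шпаргалка по этим командам (приведена исключительно для наглядности):
+
+```shell
+minikube stop     # останавливает виртуальную машину; состояние и данные кластера сохраняются
+minikube start    # снова запускает кластер и возвращает его в прежнее состояние
+minikube delete   # выключает и удаляет виртуальную машину; данные и состояние не сохраняются
+```
+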
### Обновление minikube + Смотрите [инструкцию по обновлению minikube](https://minikube.sigs.k8s.io/docs/start/macos/). ## Работа с кластером @@ -440,6 +457,7 @@ spec: ``` ## Смонтированные директории хоста + Некоторые драйверы монтируют директорию хоста в виртуальную машину, чтобы можно было легко обмениваться файлами между виртуальной машиной и хостом. В настоящее время это не настраивается и отличается от используемого драйвера и ОС. {{< note >}} diff --git a/content/ru/docs/tasks/tools/install-minikube.md b/content/ru/docs/tasks/tools/install-minikube.md index 64fca933991a6..380e632c2e679 100644 --- a/content/ru/docs/tasks/tools/install-minikube.md +++ b/content/ru/docs/tasks/tools/install-minikube.md @@ -62,7 +62,7 @@ Hyper-V Requirements: A hypervisor has been detected. Features required for ### Установка kubectl -Убедитесь, что у вас установлен kubectl. Вы можете установить kubectl согласно инструкциям в разделе [Установка и настройка kubectl](/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux). +Убедитесь, что у вас установлен kubectl. Вы можете установить kubectl согласно инструкциям в разделе [Установка и настройка kubectl](/ru/docs/tasks/tools/install-kubectl/#установка-kubectl-в-linux). ### Установка Hypervisor @@ -122,7 +122,7 @@ brew install minikube {{% tab name="macOS" %}} ### Установка kubectl -Убедитесь, что у вас установлен kubectl. Вы можете установить kubectl согласно инструкциям в разделе [Установка и настройка kubectl](/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos). +Убедитесь, что у вас установлен kubectl. Вы можете установить kubectl согласно инструкциям в разделе [Установка и настройка kubectl](/ru/docs/tasks/tools/install-kubectl/#установка-kubectl-в-macos). ### Установка Hypervisor @@ -158,7 +158,7 @@ sudo mv minikube /usr/local/bin {{% tab name="Windows" %}} ### Установка kubectl -Убедитесь, что у вас установлен kubectl. Вы можете установить kubectl согласно инструкциям в разделе [Установка и настройка kubectl](/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows). +Убедитесь, что у вас установлен kubectl. Вы можете установить kubectl согласно инструкциям в разделе [Установка и настройка kubectl](/ru/docs/tasks/tools/install-kubectl/#установка-kubectl-в-windows). ### Установка Hypervisor @@ -198,7 +198,7 @@ choco install minikube {{% capture whatsnext %}} -* [Running Kubernetes Locally via Minikube](/docs/setup/learning-environment/minikube/) +* [Локальный запуск Kubernetes при помощи Minikube](/ru/docs/setup/learning-environment/minikube/) {{% /capture %}} @@ -208,7 +208,7 @@ choco install minikube {{< note >}} -Для использования опции `--vm-driver` с командой `minikube start` укажите имя установленного вами гипервизора в нижнем регистре в заполнителе `` команды ниже. Полный список значений для опции `--vm-driver` перечислен в разделе по [указанию драйвера виртуальной машины](/docs/setup/learning-environment/minikube/#specifying-the-vm-driver). +Для использования опции `--vm-driver` с командой `minikube start` укажите имя установленного вами гипервизора в нижнем регистре в заполнителе `` команды ниже. Полный список значений для опции `--vm-driver` перечислен в разделе по [указанию драйвера виртуальной машины](/ru/docs/setup/learning-environment/minikube/#указание-драйвера-виртуальной-машины). 
{{< /note >}} diff --git a/content/ru/examples/application/deployment.yaml b/content/ru/examples/application/deployment.yaml new file mode 100644 index 0000000000000..f7a4886e4ecf2 --- /dev/null +++ b/content/ru/examples/application/deployment.yaml @@ -0,0 +1,19 @@ +apiVersion: apps/v1 # до версии 1.9.0 нужно использовать apps/v1beta2 +kind: Deployment +metadata: + name: nginx-deployment +spec: + selector: + matchLabels: + app: nginx + replicas: 2 # запускает 2 пода, созданных по шаблону + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 diff --git a/content/uk/OWNERS b/content/uk/OWNERS new file mode 100644 index 0000000000000..09fbf1170cb09 --- /dev/null +++ b/content/uk/OWNERS @@ -0,0 +1,13 @@ +# See the OWNERS docs at https://go.k8s.io/owners + +# This is the directory for Ukrainian source content. +# Teams and members are visible at https://github.com/orgs/kubernetes/teams. + +reviewers: +- sig-docs-uk-reviews + +approvers: +- sig-docs-uk-owners + +labels: +- language/uk diff --git a/content/uk/_common-resources/index.md b/content/uk/_common-resources/index.md new file mode 100644 index 0000000000000..ca03031f1ee91 --- /dev/null +++ b/content/uk/_common-resources/index.md @@ -0,0 +1,3 @@ +--- +headless: true +--- diff --git a/content/uk/_index.html b/content/uk/_index.html new file mode 100644 index 0000000000000..02df4d395da18 --- /dev/null +++ b/content/uk/_index.html @@ -0,0 +1,85 @@ +--- +title: "Довершена система оркестрації контейнерів" +abstract: "Автоматичне розгортання, масштабування і управління контейнерами" +cid: home +--- + +{{< announcement >}} + +{{< deprecationwarning >}} + +{{< blocks/section id="oceanNodes" >}} +{{% blocks/feature image="flower" %}} + +### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) - це система з відкритим вихідним кодом для автоматичного розгортання, масштабування і управління контейнеризованими застосунками. + + +Вона об'єднує контейнери, що утворюють застосунок, у логічні елементи для легкого управління і виявлення. В основі Kubernetes - [15 років досвіду запуску і виконання застосунків у продуктивних середовищах Google](http://queue.acm.org/detail.cfm?id=2898444), поєднані з найкращими ідеями і практиками від спільноти. +{{% /blocks/feature %}} + +{{% blocks/feature image="scalable" %}} + +#### Глобальне масштабування + + +Заснований на тих самих принципах, завдяки яким Google запускає мільярди контейнерів щотижня, Kubernetes масштабується без потреби збільшення вашого штату з експлуатації. + +{{% /blocks/feature %}} + +{{% blocks/feature image="blocks" %}} + +#### Невичерпна функціональність + + +Запущений для локального тестування чи у глобальній корпорації, Kubernetes динамічно зростатиме з вами, забезпечуючи регулярну і легку доставку ваших застосунків незалежно від рівня складності ваших потреб. + +{{% /blocks/feature %}} + +{{% blocks/feature image="suitcase" %}} + +#### Працює всюди + + +Kubernetes - проект з відкритим вихідним кодом. Він дозволяє скористатися перевагами локальної, гібридної чи хмарної інфраструктури, щоб легко переміщати застосунки туди, куди вам потрібно. + +{{% /blocks/feature %}} + +{{< /blocks/section >}} + +{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}} +
          + +

          Проблеми міграції 150+ мікросервісів у Kubernetes

          + +

          Сара Уеллз, технічний директор з експлуатації і безпеки роботи, Financial Times

          + +
          +
          +
          + Відвідати KubeCon в Амстердамі, 30.03-02.04 2020 +
          +
          +
          +
          + Відвідати KubeCon у Шанхаї, 28-30 липня 2020 +
          +
          + + +
          +{{< /blocks/section >}} + +{{< blocks/kubernetes-features >}} + +{{< blocks/case-studies >}} diff --git a/content/uk/case-studies/_index.html b/content/uk/case-studies/_index.html new file mode 100644 index 0000000000000..f8003e82e0d3f --- /dev/null +++ b/content/uk/case-studies/_index.html @@ -0,0 +1,13 @@ +--- +#title: Case Studies +title: Приклади використання +#linkTitle: Case Studies +linkTitle: Приклади використання +#bigheader: Kubernetes User Case Studies +bigheader: Приклади використання Kubernetes від користувачів. +#abstract: A collection of users running Kubernetes in production. +abstract: Підбірка користувачів, що використовують Kubernetes для робочих навантажень. +layout: basic +class: gridPage +cid: caseStudies +--- diff --git a/content/uk/docs/_index.md b/content/uk/docs/_index.md new file mode 100644 index 0000000000000..a601666b678f9 --- /dev/null +++ b/content/uk/docs/_index.md @@ -0,0 +1,3 @@ +--- +title: Документація +--- diff --git a/content/uk/docs/concepts/_index.md b/content/uk/docs/concepts/_index.md new file mode 100644 index 0000000000000..64f6e8232376b --- /dev/null +++ b/content/uk/docs/concepts/_index.md @@ -0,0 +1,123 @@ +--- +title: Концепції +main_menu: true +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + + +В розділі "Концепції" описані складові системи Kubernetes і абстракції, за допомогою яких Kubernetes реалізовує ваш {{< glossary_tooltip text="кластер" term_id="cluster" length="all" >}}. Цей розділ допоможе вам краще зрозуміти, як працює Kubernetes. + +{{% /capture %}} + +{{% capture body %}} + + + +## Загальна інформація + + +Для роботи з Kubernetes ви використовуєте *об'єкти API Kubernetes* для того, щоб описати *бажаний стан* вашого кластера: які застосунки або інші робочі навантаження ви плануєте запускати, які образи контейнерів вони використовують, кількість реплік, скільки ресурсів мережі та диску ви хочете виділити тощо. Ви задаєте бажаний стан, створюючи об'єкти в Kubernetes API, зазвичай через інтерфейс командного рядка `kubectl`. Ви також можете взаємодіяти із кластером, задавати або змінювати його бажаний стан безпосередньо через Kubernetes API. + + +Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері: + + + +* **Kubernetes master** становить собою набір із трьох процесів, запущених на одному вузлі вашого кластера, що визначений як керівний (master). До цих процесів належать: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) і [kube-scheduler](/docs/admin/kube-scheduler/). +* На кожному не-мастер вузлі вашого кластера виконуються два процеси: + * **[kubelet](/docs/admin/kubelet/)**, що обмінюється даними з Kubernetes master. + * **[kube-proxy](/docs/admin/kube-proxy/)**, мережевий проксі, що відображає мережеві сервіси Kubernetes на кожному вузлі. 
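+
+Невеликий приклад для ілюстрації описаного вище (імена та значення тут умовні): бажаний стан можна задати за допомогою `kubectl`, а площина управління приведе поточний стан кластера у відповідність до нього:
+
+```shell
+# Задаємо бажаний стан: Deployment nginx із трьома репліками
+kubectl create deployment nginx --image=nginx
+kubectl scale deployment nginx --replicas=3
+
+# Перевіряємо поточний стан, який площина управління приводить до бажаного
+kubectl get deployment nginx
+```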
+ + + +## Об'єкти Kubernetes + + +Kubernetes оперує певною кількістю абстракцій, що відображають стан вашої системи: розгорнуті у контейнерах застосунки та робочі навантаження, пов'язані з ними ресурси мережі та диску, інша інформація щодо функціонування вашого кластера. Ці абстракції представлені як об'єкти Kubernetes API. Для більш детальної інформації ознайомтесь з [Об'єктами Kubernetes](/docs/concepts/overview/working-with-objects/kubernetes-objects/). + + +До базових об'єктів Kubernetes належать: + +* [Pod](/docs/concepts/workloads/pods/pod-overview/) +* [Service](/docs/concepts/services-networking/service/) +* [Volume](/docs/concepts/storage/volumes/) +* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/) + + +В Kubernetes є також абстракції вищого рівня, які надбудовуються над базовими об'єктами за допомогою [контролерів](/docs/concepts/architecture/controller/) і забезпечують додаткову функціональність і зручність. До них належать: + +* [Deployment](/docs/concepts/workloads/controllers/deployment/) +* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) +* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) +* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) +* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) + + + +## Площина управління Kubernetes (*Kubernetes Control Plane*) {#площина-управління-kubernetes} + + +Різні частини площини управління Kubernetes, такі як Kubernetes Master і kubelet, регулюють, як Kubernetes спілкується з вашим кластером. Площина управління веде облік усіх об'єктів Kubernetes в системі та безперервно, в циклі перевіряє стан цих об'єктів. У будь-який момент часу контрольні цикли, запущені площиною управління, реагуватимуть на зміни у кластері і намагатимуться привести поточний стан об'єктів до бажаного, що заданий у конфігурації. + + +Наприклад, коли за допомогою API Kubernetes ви створюєте Deployment, ви задаєте новий бажаний стан для системи. Площина управління Kubernetes фіксує створення цього об'єкта і виконує ваші інструкції шляхом запуску потрібних застосунків та їх розподілу між вузлами кластера. В такий спосіб досягається відповідність поточного стану бажаному. + + + +### Kubernetes Master + + +Kubernetes Master відповідає за підтримку бажаного стану вашого кластера. Щоразу, як ви взаємодієте з Kubernetes, наприклад при використанні інтерфейсу командного рядка `kubectl`, ви обмінюєтесь даними із Kubernetes master вашого кластера. + + +Слово "master" стосується набору процесів, які управляють станом кластера. Переважно всі ці процеси виконуються на одному вузлі кластера, який також називається master. Master-вузол можна реплікувати для забезпечення високої доступності кластера. + + + +### Вузли Kubernetes + + +Вузлами кластера називають машини (ВМ, фізичні сервери тощо), на яких запущені ваші застосунки та хмарні робочі навантаження. Кожен вузол керується Kubernetes master; ви лише зрідка взаємодіятимете безпосередньо із вузлами. + + +{{% /capture %}} + +{{% capture whatsnext %}} + + +Якщо ви хочете створити нову сторінку у розділі Концепції, у статті +[Використання шаблонів сторінок](/docs/home/contribute/page-templates/) +ви знайдете інформацію щодо типу і шаблона сторінки. 
+ +{{% /capture %}} diff --git a/content/uk/docs/concepts/configuration/_index.md b/content/uk/docs/concepts/configuration/_index.md new file mode 100644 index 0000000000000..588d144f6e596 --- /dev/null +++ b/content/uk/docs/concepts/configuration/_index.md @@ -0,0 +1,5 @@ +--- +title: "Конфігурація" +weight: 80 +--- + diff --git a/content/uk/docs/concepts/overview/_index.md b/content/uk/docs/concepts/overview/_index.md new file mode 100644 index 0000000000000..efffaf0892adf --- /dev/null +++ b/content/uk/docs/concepts/overview/_index.md @@ -0,0 +1,4 @@ +--- +title: "Огляд" +weight: 20 +--- diff --git a/content/uk/docs/concepts/overview/what-is-kubernetes.md b/content/uk/docs/concepts/overview/what-is-kubernetes.md new file mode 100644 index 0000000000000..269239c30a468 --- /dev/null +++ b/content/uk/docs/concepts/overview/what-is-kubernetes.md @@ -0,0 +1,182 @@ +--- +title: Що таке Kubernetes? +content_template: templates/concept +weight: 10 +card: + name: concepts + weight: 10 +--- + +{{% capture overview %}} + +Ця сторінка являє собою узагальнений огляд Kubernetes. +{{% /capture %}} + +{{% capture body %}} + +Kubernetes - це платформа з відкритим вихідним кодом для управління контейнеризованими робочими навантаженнями та супутніми службами. Її основні характеристики - кросплатформенність, розширюваність, успішне використання декларативної конфігурації та автоматизації. Вона має гігантську, швидкопрогресуючу екосистему. + + +Назва Kubernetes походить з грецької та означає керманич або пілот. Google відкрив доступ до вихідного коду проекту Kubernetes у 2014 році. Kubernetes побудовано [на базі п'ятнадцятирічного досвіду, що Google отримав, оперуючи масштабними робочими навантаженнями](https://ai.google/research/pubs/pub43438) у купі з найкращими у своєму класі ідеями та практиками, які може запропонувати спільнота. + + +## Озираючись на першопричини + + +Давайте повернемось назад у часі та дізнаємось, завдяки чому Kubernetes став таким корисним. + +![Еволюція розгортання](/images/docs/Container_Evolution.svg) + + +**Ера традиційного розгортання:** На початку організації запускали застосунки на фізичних серверах. Оскільки в такий спосіб не було можливості задати обмеження використання ресурсів, це спричиняло проблеми виділення та розподілення ресурсів на фізичних серверах. Наприклад: якщо багато застосунків було запущено на фізичному сервері, могли траплятись випадки, коли один застосунок забирав собі найбільше ресурсів, внаслідок чого інші програми просто не справлялись з обов'язками. Рішенням може бути запуск кожного застосунку на окремому фізичному сервері. Але такий підхід погано масштабується, оскільки ресурси не повністю використовуються; на додачу, це дорого, оскільки організаціям потрібно опікуватись багатьма фізичними серверами. + + +**Ера віртуалізованого розгортання:** Як рішення - була представлена віртуалізація. Вона дозволяє запускати численні віртуальні машини (Virtual Machines або VMs) на одному фізичному ЦПУ сервера. Віртуалізація дозволила застосункам бути ізольованими у межах віртуальних машин та забезпечувала безпеку, оскільки інформація застосунку на одній VM не була доступна застосунку на іншій VM. + + +Віртуалізація забезпечує краще використання ресурсів на фізичному сервері та кращу масштабованість, оскільки дозволяє легко додавати та оновлювати застосунки, зменшує витрати на фізичне обладнання тощо. З віртуалізацією ви можете представити ресурси у вигляді одноразових віртуальних машин. 
+ + +Кожна VM є повноцінною машиною з усіма компонентами, включно з власною операційною системою, що запущені поверх віртуалізованого апаратного забезпечення. + + +**Ера розгортання контейнерів:** Контейнери схожі на VM, але мають спрощений варіант ізоляції і використовують спільну операційну систему для усіх застосунків. Саму тому контейнери вважаються легковісними. Подібно до VM, контейнер має власну файлову систему, ЦПУ, пам'ять, простір процесів тощо. Оскільки контейнери вивільнені від підпорядкованої інфраструктури, їх можна легко переміщати між хмарними провайдерами чи дистрибутивами операційних систем. + +Контейнери стали популярними, бо надавали додаткові переваги, такі як: + + + +* Створення та розгортання застосунків за методологією Agile: спрощене та більш ефективне створення образів контейнерів у порівнянні до використання образів віртуальних машин. +* Безперервна розробка, інтеграція та розгортання: забезпечення надійних та безперервних збирань образів контейнерів, їх швидке розгортання та легкі відкатування (за рахунок незмінності образів). +* Розподіл відповідальності команд розробки та експлуатації: створення образів контейнерів застосунків під час збирання/релізу на противагу часу розгортання, і як наслідок, вивільнення застосунків із інфраструктури. +* Спостереження не лише за інформацією та метриками на рівні операційної системи, але й за станом застосунку та іншими сигналами. +* Однорідність середовища для розробки, тестування та робочого навантаження: запускається так само як на робочому комп'ютері, так і у хмарного провайдера. +* ОС та хмарна кросплатформність: запускається на Ubuntu, RHEL, CoreOS, у власному дата-центрі, у Google Kubernetes Engine і взагалі будь-де. +* Керування орієнтоване на застосунки: підвищення рівня абстракції від запуску операційної системи у віртуальному апаратному забезпеченні до запуску застосунку в операційній системі, використовуючи логічні ресурси. +* Нещільно зв'язані, розподілені, еластичні, вивільнені мікросервіси: застосунки розбиваються на менші, незалежні частини для динамічного розгортання та управління, на відміну від монолітної архітектури, що працює на одній великій виділеній машині. +* Ізоляція ресурсів: передбачувана продуктивність застосунку. +* Використання ресурсів: висока ефективність та щільність. + + +## Чому вам потрібен Kebernetes і що він може робити + + +Контейнери - це прекрасний спосіб упакувати та запустити ваші застосунки. У прод оточенні вам потрібно керувати контейнерами, в яких працюють застосунки, і стежити, щоб не було простою. Наприклад, якщо один контейнер припиняє роботу, інший має бути запущений йому на заміну. Чи не легше було б, якби цим керувала сама система? + + +Ось де Kubernetes приходить на допомогу! Kubernetes надає вам каркас для еластичного запуску розподілених систем. Він опікується масштабуванням та аварійним відновленням вашого застосунку, пропонує шаблони розгортань тощо. Наприклад, Kubernetes дозволяє легко створювати розгортання за стратегією canary у вашій системі. + + +Kubernetes надає вам: + + + +* **Виявлення сервісів та балансування навантаження** +Kubernetes може надавати доступ до контейнера, використовуючи DNS-ім'я або його власну IP-адресу. Якщо контейнер зазнає завеликого мережевого навантаження, Kubernetes здатний збалансувати та розподілити його таким чином, щоб якість обслуговування залишалась стабільною. 
+* **Оркестрація сховища інформації** +Kubernetes дозволяє вам автоматично монтувати системи збереження інформації на ваш вибір: локальні сховища, рішення від хмарних провайдерів тощо. +* **Автоматичне розгортання та відкатування** +За допомогою Kubernetes ви можете описати бажаний стан контейнерів, що розгортаються, і він регульовано простежить за виконанням цього стану. Наприклад, ви можете автоматизувати в Kubernetes процеси створення нових контейнерів для розгортання, видалення існуючих контейнерів і передачу їхніх ресурсів на новостворені контейнери. +* **Автоматичне розміщення задач** +Ви надаєте Kubernetes кластер для запуску контейнерізованих задач і вказуєте, скільки ресурсів ЦПУ та пам'яті (RAM) необхідно для роботи кожного контейнера. Kubernetes розподіляє контейнери по вузлах кластера для максимально ефективного використання ресурсів. +* **Самозцілення** +Kubernetes перезапускає контейнери, що відмовили; заміняє контейнери; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності. +* **Управління секретами та конфігурацією** +Kubernetes дозволяє вам зберігати та керувати чутливою інформацією, такою як паролі, OAuth токени та SSH ключі. Ви можете розгортати та оновлювати секрети та конфігурацію без перезбирання образів ваших контейнерів, не розкриваючи секрети в конфігурацію стека. + + + +## Чим не є Kubernetes + + +Kubernetes не є комплексною системою PaaS (Платформа як послуга) у традиційному розумінні. Оскільки Kubernetes оперує швидше на рівні контейнерів, аніж на рівні апаратного забезпечення, деяка загальнозастосована функціональність і справді є спільною з PaaS, як-от розгортання, масштабування, розподіл навантаження, логування і моніторинг. Водночас Kubernetes не є монолітним, а вищезазначені особливості підключаються і є опціональними. Kubernetes надає будівельні блоки для створення платформ для розробників, але залишає за користувачем право вибору у важливих питаннях. + + +Kubernetes: + + + +* Не обмежує типи застосунків, що підтримуються. Kubernetes намагається підтримувати найрізноманітніші типи навантажень, включно із застосунками зі станом (stateful) та без стану (stateless), навантаження по обробці даних тощо. Якщо ваш застосунок можна контейнеризувати, він чудово запуститься під Kubernetes. +* Не розгортає застосунки з вихідного коду та не збирає ваші застосунки. Процеси безперервної інтеграції, доставки та розгортання (CI/CD) визначаються на рівні організації, та в залежності від технічних вимог. +* Не надає сервіси на рівні застосунків як вбудовані: програмне забезпечення проміжного рівня (наприклад, шина передачі повідомлень), фреймворки обробки даних (наприклад, Spark), бази даних (наприклад, MySQL), кеш, некластерні системи збереження інформації (наприклад, Ceph). Ці компоненти можуть бути запущені у Kubernetes та/або бути доступними для застосунків за допомогою спеціальних механізмів, наприклад [Open Service Broker](https://openservicebrokerapi.org/). +* Не нав'язує використання інструментів для логування, моніторингу та сповіщень, натомість надає певні інтеграційні рішення як прототипи, та механізми зі збирання та експорту метрик. +* Не надає та не змушує використовувати якусь конфігураційну мову/систему (як наприклад `Jsonnet`), натомість надає можливість використовувати API, що може бути використаний довільними формами декларативних специфікацій. 
+* Не надає і не запроваджує жодних систем машинної конфігурації, підтримки, управління або самозцілення. +* На додачу, Kubernetes - не просто система оркестрації. Власне кажучи, вона усуває потребу оркестрації як такої. Технічне визначення оркестрації - це запуск визначених процесів: спочатку A, за ним B, потім C. На противагу, Kubernetes складається з певної множини незалежних, складних процесів контролерів, що безперервно опрацьовують стан у напрямку, що заданий бажаною конфігурацією. Неважливо, як ви дістанетесь з пункту A до пункту C. Централізоване управління також не є вимогою. Все це виливається в систему, яку легко використовувати, яка є потужною, надійною, стійкою та здатною до легкого розширення. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Перегляньте [компоненти Kubernetes](/docs/concepts/overview/components/) +* Готові [розпочати роботу](/docs/setup/)? +{{% /capture %}} diff --git a/content/uk/docs/concepts/services-networking/_index.md b/content/uk/docs/concepts/services-networking/_index.md new file mode 100644 index 0000000000000..634694311a433 --- /dev/null +++ b/content/uk/docs/concepts/services-networking/_index.md @@ -0,0 +1,4 @@ +--- +title: "Сервіси, балансування навантаження та мережа" +weight: 60 +--- diff --git a/content/uk/docs/concepts/storage/_index.md b/content/uk/docs/concepts/storage/_index.md new file mode 100644 index 0000000000000..23108a421cb1a --- /dev/null +++ b/content/uk/docs/concepts/storage/_index.md @@ -0,0 +1,4 @@ +--- +title: "Сховища інформації" +weight: 70 +--- diff --git a/content/uk/docs/concepts/workloads/_index.md b/content/uk/docs/concepts/workloads/_index.md new file mode 100644 index 0000000000000..c826cbbcbc587 --- /dev/null +++ b/content/uk/docs/concepts/workloads/_index.md @@ -0,0 +1,4 @@ +--- +title: "Робочі навантаження" +weight: 50 +--- diff --git a/content/uk/docs/concepts/workloads/controllers/_index.md b/content/uk/docs/concepts/workloads/controllers/_index.md new file mode 100644 index 0000000000000..3e5306f908cbc --- /dev/null +++ b/content/uk/docs/concepts/workloads/controllers/_index.md @@ -0,0 +1,4 @@ +--- +title: "Контролери" +weight: 20 +--- diff --git a/content/uk/docs/contribute/localization_uk.md b/content/uk/docs/contribute/localization_uk.md new file mode 100644 index 0000000000000..0235b56a3249c --- /dev/null +++ b/content/uk/docs/contribute/localization_uk.md @@ -0,0 +1,122 @@ +--- +title: Рекомендації з перекладу на українську мову +content_template: templates/concept +anchors: + - anchor: "#правила-перекладу" + title: Правила перекладу + - anchor: "#словник" + title: Словник +--- + +{{% capture overview %}} + +Дорогі друзі! Раді вітати вас у спільноті українських контриб'юторів проекту Kubernetes. Ця сторінка створена з метою полегшити вашу роботу при перекладі документації. Вона містить правила, якими ми керувалися під час перекладу, і базовий словник, який ми почали укладати. Перелічені у ньому терміни ви знайдете в українській версії документації Kubernetes. Будемо дуже вдячні, якщо ви допоможете нам доповнити цей словник і розширити правила перекладу. + +Сподіваємось, наші рекомендації стануть вам у пригоді. + +{{% /capture %}} + +{{% capture body %}} + +## Правила перекладу {#правила-перекладу} + +* У випадку, якщо у перекладі термін набуває неоднозначності і розуміння тексту ускладнюється, надайте у дужках англійський варіант, наприклад: кінцеві точки (endpoints). Якщо при перекладі термін втрачає своє значення, краще не перекладати його, наприклад: характеристики affinity. 
+ +* Назви об'єктів Kubernetes залишаємо без перекладу і пишемо з великої літери: Service, Pod, Deployment, Volume, Namespace, за винятком терміна node (вузол). Назви об'єктів Kubernetes вважаємо за іменники ч.р. і відмінюємо за допомогою апострофа: Pod'ів, Deployment'ами. + Для слів, що закінчуються на приголосний, у родовому відмінку однини використовуємо закінчення -а: Pod'а, Deployment'а. + Слова, що закінчуються на голосний, не відмінюємо: доступ до Service, за допомогою Namespace. У множині використовуємо англійську форму: користуватися Services, спільні Volumes. + +* Частовживані і усталені за межами Kubernetes слова перекладаємо українською і пишемо з малої літери (label -> мітка). У випадку, якщо термін для означення об'єкта Kubernetes вживається у своєму загальному значенні поза контекстом Kubernetes (service як службова програма, deployment як розгортання), перекладаємо його і пишемо з малої літери, наприклад: service discovery -> виявлення сервісу, continuous deployment -> безперервне розгортання. + +* Складені слова вважаємо за власні назви і не перекладаємо (LabelSelector, kube-apiserver). + +* Для перевірки закінчень слів у родовому відмінку однини (-а/-я, -у/-ю) використовуйте [онлайн словник](https://slovnyk.ua/). Якщо слова немає у словнику, визначте його відміну і далі відмінюйте за правилами. Докладніше [дивіться тут](https://pidruchniki.com/1948041951499/dokumentoznavstvo/vidminyuvannya_imennikiv). + +## Словник {#словник} + +English | Українська | +--- | --- | +addon | розширення | +application | застосунок | +backend | бекенд | +build | збирання (результат) | +build | збирати (процес) | +cache | кеш | +CLI | інтерфейс командного рядка | +cloud | хмара; хмарний провайдер | +containerized | контейнеризований | +continuous deployment | безперервне розгортання | +continuous development | безперервна розробка | +continuous integration | безперервна інтеграція | +contribute | робити внесок (до проекту), допомагати (проекту) | +contributor | контриб'ютор, учасник проекту | +control plane | площина управління | +controller | контролер | +CPU | ЦП | +dashboard | дашборд | +data plane | площина даних | +default (by) | за умовчанням | +default settings | типові налаштування | +Deployment | Deployment | +deprecated | застарілий | +desired state | бажаний стан | +downtime | недоступність, простій | +ecosystem | сімейство проектів (екосистема) | +endpoint | кінцева точка | +expose (a service) | відкрити доступ (до сервісу) | +fail | відмовити | +feature | компонент | +framework | фреймворк | +frontend | фронтенд | +image | образ | +Ingress | Ingress | +instance | інстанс | +issue | запит | +kube-proxy | kube-proxy | +kubelet | kubelet | +Kubernetes features | функціональні можливості Kubernetes | +label | мітка | +lifecycle | життєвий цикл | +logging | логування | +maintenance | обслуговування | +map | спроектувати, зіставити, встановити відповідність | +master | master | +monitor | моніторити | +monitoring | моніторинг | +Namespace | Namespace | +network policy | мережева політика | +node | вузол | +orchestrate | оркеструвати | +output | вивід | +patch | патч | +Pod | Pod | +production | прод | +pull request | pull request | +release | реліз | +replica | репліка | +rollback | відкатування | +rolling update | послідовне оновлення | +rollout (new updates) | викатка (оновлень) | +run | запускати | +scale | масштабувати | +schedule | розподіляти (Pod'и по вузлах) | +Scheduler | Scheduler | +Secret | Secret | +Selector | Селектор | +self-healing | самозцілення | +self-restoring | 
самовідновлення | +Service | Service (як об'єкт Kubernetes) | +service | сервіс (як службова програма) | +service discovery | виявлення сервісу | +source code | вихідний код | +stateful app | застосунок зі станом | +stateless app | застосунок без стану | +task | завдання | +terminated | зупинений | +traffic | трафік | +VM (virtual machine) | ВМ (віртуальна машина) | +Volume | Volume | +workload | робоче навантаження | +YAML | YAML | + +{{% /capture %}} diff --git a/content/uk/docs/home/_index.md b/content/uk/docs/home/_index.md new file mode 100644 index 0000000000000..5a8cfc3c51a09 --- /dev/null +++ b/content/uk/docs/home/_index.md @@ -0,0 +1,56 @@ +--- +title: Документація Kubernetes +noedit: true +cid: docsHome +layout: docsportal_home +class: gridPage gridPageHome +linkTitle: "Головна" +main_menu: true +weight: 10 +hide_feedback: true +menu: + main: + title: "Документація" + weight: 20 + post: > +

          Дізнайтеся про основи роботи з Kubernetes, використовуючи схеми, навчальну та довідкову документацію. Ви можете навіть зробити свій внесок у документацію!

          +overview: > + Kubernetes - рушій оркестрації контейнерів з відкритим вихідним кодом для автоматичного розгортання, масштабування і управління контейнеризованими застосунками. Цей проект розробляється під егідою Cloud Native Computing Foundation (CNCF). +cards: +- name: concepts + title: "Розуміння основ" + description: "Дізнайтеся про Kubernetes і його фундаментальні концепції." + button: "Дізнатися про концепції" + button_path: "/docs/concepts" +- name: tutorials + title: "Спробуйте Kubernetes" + description: "Дізнайтеся із навчальних матеріалів, як розгортати застосунки в Kubernetes." + button: "Переглянути навчальні матеріали" + button_path: "/docs/tutorials" +- name: setup + title: "Налаштування кластера" + description: "Розгорніть Kubernetes з урахуванням власних ресурсів і потреб." + button: "Налаштувати Kubernetes" + button_path: "/docs/setup" +- name: tasks + title: "Дізнайтеся, як користуватись Kubernetes" + description: "Ознайомтеся з типовими задачами і способами їх виконання за допомогою короткого алгоритму дій." + button: "Переглянути задачі" + button_path: "/docs/tasks" +- name: reference + title: Переглянути довідкову інформацію + description: Ознайомтеся з термінологією, синтаксисом командного рядка, типами ресурсів API і документацією з налаштування інструментів. + button: Переглянути довідкову інформацію + button_path: /docs/reference +- name: contribute + title: Зробити внесок у документацію + description: Будь-хто може зробити свій внесок, незалежно від того, чи ви нещодавно долучилися до проекту, чи працюєте над ним вже довгий час. + button: Зробити внесок у документацію + button_path: /docs/contribute +- name: download + title: Завантажити Kubernetes + description: Якщо ви встановлюєте Kubernetes чи оновлюєтесь до останньої версії, звіряйтеся з актуальною інформацією по релізу. +- name: about + title: Про документацію + description: Цей вебсайт містить документацію по актуальній і чотирьох попередніх версіях Kubernetes. +--- diff --git a/content/uk/docs/reference/glossary/applications.md b/content/uk/docs/reference/glossary/applications.md new file mode 100644 index 0000000000000..c42c6ec34339c --- /dev/null +++ b/content/uk/docs/reference/glossary/applications.md @@ -0,0 +1,16 @@ +--- +# title: Applications +title: Застосунки +id: applications +date: 2019-05-12 +full_link: +# short_description: > +# The layer where various containerized applications run. +short_description: > + Шар, в якому запущено контейнерізовані застосунки. +aka: +tags: +- fundamental +--- + +Шар, в якому запущено контейнерізовані застосунки. diff --git a/content/uk/docs/reference/glossary/cluster-infrastructure.md b/content/uk/docs/reference/glossary/cluster-infrastructure.md new file mode 100644 index 0000000000000..557180912abc0 --- /dev/null +++ b/content/uk/docs/reference/glossary/cluster-infrastructure.md @@ -0,0 +1,17 @@ +--- +# title: Cluster Infrastructure +title: Інфраструктура кластера +id: cluster-infrastructure +date: 2019-05-12 +full_link: +# short_description: > +# The infrastructure layer provides and maintains VMs, networking, security groups and others. +short_description: > + Шар інфраструктури забезпечує і підтримує роботу ВМ, мережі, груп безпеки тощо. + +aka: +tags: +- operations +--- + +Шар інфраструктури забезпечує і підтримує роботу ВМ, мережі, груп безпеки тощо. 
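+
+Невеликий приклад для ілюстрації (за припущення, що `kubectl` вже налаштований на ваш кластер): переглянути вузли - машини або ВМ, які надає шар інфраструктури, - можна такою командою:
+
+```shell
+# Вивести вузли кластера разом з IP-адресами, версією ОС і середовищем виконання контейнерів
+kubectl get nodes -o wide
+```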
diff --git a/content/uk/docs/reference/glossary/cluster-operations.md b/content/uk/docs/reference/glossary/cluster-operations.md new file mode 100644 index 0000000000000..0b8ec9f510f29 --- /dev/null +++ b/content/uk/docs/reference/glossary/cluster-operations.md @@ -0,0 +1,16 @@ +--- +# title: Cluster Operations +title: Операції з кластером +id: cluster-operations +date: 2019-05-12 +full_link: +# short_description: > +# Activities such as upgrading the clusters, implementing security, storage, ingress, networking, logging and monitoring, and other operations involved in managing a Kubernetes cluster. +short_description: > + Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером. +aka: +tags: +- operations +--- + +Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером. diff --git a/content/uk/docs/reference/glossary/cluster.md b/content/uk/docs/reference/glossary/cluster.md new file mode 100644 index 0000000000000..4748e61236ac7 --- /dev/null +++ b/content/uk/docs/reference/glossary/cluster.md @@ -0,0 +1,22 @@ +--- +# title: Cluster +title: Кластер +id: cluster +date: 2019-06-15 +full_link: +# short_description: > +# A set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node. +short_description: > + Група робочих машин (їх називають вузлами), на яких запущені контейнерізовані застосунки. Кожен кластер має щонайменше один вузол. + +aka: +tags: +- fundamental +- operation +--- + +Група робочих машин (їх називають вузлами), на яких запущені контейнерізовані застосунки. Кожен кластер має щонайменше один вузол. + + + +На робочих вузлах розміщуються Pod'и, які є складовими застосунку. Площина управління керує робочими вузлами і Pod'ами кластера. У прод оточеннях площина управління зазвичай розповсюджується на багато комп'ютерів, а кластер складається з багатьох вузлів для забезпечення відмовостійкості і високої доступності. diff --git a/content/uk/docs/reference/glossary/control-plane.md b/content/uk/docs/reference/glossary/control-plane.md new file mode 100644 index 0000000000000..1fb28e40a4df7 --- /dev/null +++ b/content/uk/docs/reference/glossary/control-plane.md @@ -0,0 +1,17 @@ +--- +# title: Control Plane +title: Площина управління +id: control-plane +date: 2019-05-12 +full_link: +# short_description: > +# The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. +short_description: > + Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів. + +aka: +tags: +- fundamental +--- + +Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів. 
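+
+Невеликий приклад для ілюстрації (за припущення, що кластер створено, наприклад, за допомогою kubeadm чи Minikube, де компоненти площини управління запущені як Pod'и у просторі імен kube-system):
+
+```shell
+# Переглянути Pod'и компонентів площини управління: kube-apiserver, kube-scheduler, kube-controller-manager тощо
+kubectl get pods -n kube-system
+```
+
+В інших інсталяціях (зокрема у керованих сервісах) ці компоненти можуть працювати поза вузлами кластера, тож вивід буде відрізнятися.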
diff --git a/content/uk/docs/reference/glossary/data-plane.md b/content/uk/docs/reference/glossary/data-plane.md new file mode 100644 index 0000000000000..263a544010700 --- /dev/null +++ b/content/uk/docs/reference/glossary/data-plane.md @@ -0,0 +1,17 @@ +--- +# title: Data Plane +title: Площина даних +id: data-plane +date: 2019-05-12 +full_link: +# short_description: > +# The layer that provides capacity such as CPU, memory, network, and storage so that the containers can run and connect to a network. +short_description: > + Шар, який надає контейнерам ресурси, такі як ЦПУ, пам'ять, мережа і сховище даних для того, щоб контейнери могли працювати і підключатися до мережі. + +aka: +tags: +- fundamental +--- + +Шар, який надає контейнерам ресурси, такі як ЦПУ, пам'ять, мережа і сховище даних для того, щоб контейнери могли працювати і підключатися до мережі. diff --git a/content/uk/docs/reference/glossary/deployment.md b/content/uk/docs/reference/glossary/deployment.md new file mode 100644 index 0000000000000..4c9c5ba3b90bc --- /dev/null +++ b/content/uk/docs/reference/glossary/deployment.md @@ -0,0 +1,23 @@ +--- +title: Deployment +id: deployment +date: 2018-04-12 +full_link: /docs/concepts/workloads/controllers/deployment/ +# short_description: > +# An API object that manages a replicated application. +short_description: > + Об'єкт API, що керує реплікованим застосунком. + +aka: +tags: +- fundamental +- core-object +- workload +--- + +Об'єкт API, що керує реплікованим застосунком. + + + + +Кожна репліка являє собою {{< glossary_tooltip term_id="pod" text="Pod" >}}; Pod'и розподіляються між вузлами кластера. diff --git a/content/uk/docs/reference/glossary/index.md b/content/uk/docs/reference/glossary/index.md new file mode 100644 index 0000000000000..fd57553ef82cd --- /dev/null +++ b/content/uk/docs/reference/glossary/index.md @@ -0,0 +1,17 @@ +--- +approvers: +- maxymvlasov +- anastyakulyk +# title: Standardized Glossary +title: Глосарій +layout: glossary +noedit: true +default_active_tag: fundamental +weight: 5 +card: + name: reference + weight: 10 +# title: Glossary + title: Глосарій +--- + diff --git a/content/uk/docs/reference/glossary/kube-apiserver.md b/content/uk/docs/reference/glossary/kube-apiserver.md new file mode 100644 index 0000000000000..82e3caa0bae63 --- /dev/null +++ b/content/uk/docs/reference/glossary/kube-apiserver.md @@ -0,0 +1,29 @@ +--- +# title: API server +title: API-сервер +id: kube-apiserver +date: 2018-04-12 +full_link: /docs/reference/generated/kube-apiserver/ +# short_description: > +# Control plane component that serves the Kubernetes API. +short_description: > + Компонент площини управління, що надає доступ до API Kubernetes. + +aka: +- kube-apiserver +tags: +- architecture +- fundamental +--- + +API-сервер є компонентом {{< glossary_tooltip text="площини управління" term_id="control-plane" >}} Kubernetes, через який можна отримати доступ до API Kubernetes. API-сервер є фронтендом площини управління Kubernetes. + + + + + + +Основною реалізацією Kubernetes API-сервера є [kube-apiserver](/docs/reference/generated/kube-apiserver/). kube-apiserver підтримує горизонтальне масштабування, тобто масштабується за рахунок збільшення кількості інстансів. kube-apiserver можна запустити на декількох інстансах, збалансувавши між ними трафік. 
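+
+Невеликий приклад (за припущення, що `kubectl` має доступ до вашого кластера): так можна дізнатися адресу API-сервера і перевірити його готовність. Кінцева точка `/readyz` доступна у новіших версіях Kubernetes; у старіших можна використати `/healthz`.
+
+```shell
+# Показати адресу площини управління кластера
+kubectl cluster-info
+
+# Запитати вбудовану кінцеву точку готовності API-сервера
+kubectl get --raw='/readyz?verbose'
+```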
diff --git a/content/uk/docs/reference/glossary/kube-controller-manager.md b/content/uk/docs/reference/glossary/kube-controller-manager.md new file mode 100644 index 0000000000000..edd56dcc90ff6 --- /dev/null +++ b/content/uk/docs/reference/glossary/kube-controller-manager.md @@ -0,0 +1,22 @@ +--- +title: kube-controller-manager +id: kube-controller-manager +date: 2018-04-12 +full_link: /docs/reference/command-line-tools-reference/kube-controller-manager/ +# short_description: > +# Control Plane component that runs controller processes. +short_description: > + Компонент площини управління, який запускає процеси контролера. + +aka: +tags: +- architecture +- fundamental +--- + +Компонент площини управління, який запускає процеси {{< glossary_tooltip text="контролера" term_id="controller" >}}. + + + + +За логікою, кожен {{< glossary_tooltip text="контролер" term_id="controller" >}} є окремим процесом. Однак для спрощення їх збирають в один бінарний файл і запускають як єдиний процес. diff --git a/content/uk/docs/reference/glossary/kube-proxy.md b/content/uk/docs/reference/glossary/kube-proxy.md new file mode 100644 index 0000000000000..a6db7e07debe9 --- /dev/null +++ b/content/uk/docs/reference/glossary/kube-proxy.md @@ -0,0 +1,33 @@ +--- +title: kube-proxy +id: kube-proxy +date: 2018-04-12 +full_link: /docs/reference/command-line-tools-reference/kube-proxy/ +# short_description: > +# `kube-proxy` is a network proxy that runs on each node in the cluster. +short_description: > + `kube-proxy` - це мережеве проксі, що запущене на кожному вузлі кластера. + +aka: +tags: +- fundamental +- networking +--- + +[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) є мережевим проксі, що запущене на кожному вузлі кластера і реалізує частину концепції Kubernetes {{< glossary_tooltip term_id="service" text="Service">}}. + + + + +kube-proxy відповідає за мережеві правила на вузлах. Ці правила обумовлюють підключення по мережі до ваших Pod'ів всередині чи поза межами кластера. + + +kube-proxy використовує шар фільтрації пакетів операційної системи, за наявності такого. В іншому випадку kube-proxy скеровує трафік самостійно. diff --git a/content/uk/docs/reference/glossary/kube-scheduler.md b/content/uk/docs/reference/glossary/kube-scheduler.md new file mode 100644 index 0000000000000..0864f8570e93c --- /dev/null +++ b/content/uk/docs/reference/glossary/kube-scheduler.md @@ -0,0 +1,22 @@ +--- +title: kube-scheduler +id: kube-scheduler +date: 2018-04-12 +full_link: /docs/reference/generated/kube-scheduler/ +# short_description: > +# Control Plane component that watches for newly created pods with no assigned node, and selects a node for them to run on. +short_description: > + Компонент площини управління, що відстежує створені Pod'и, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть. + +aka: +tags: +- architecture +--- + +Компонент площини управління, що відстежує створені Pod'и, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть. + + + + +При виборі вузла враховуються наступні фактори: індивідуальна і колективна потреба у ресурсах, обмеження за апаратним/програмним забезпеченням і політиками, характеристики affinity і anti-affinity, локальність даних, сумісність робочих навантажень і граничні терміни виконання. 
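+
+Для прикладу (ім'я Pod'а і мітка вузла тут умовні): Pod із запитами ресурсів і `nodeSelector` дає планувальнику критерії для вибору вузла. Якщо жоден вузол не має мітки `disktype: ssd`, Pod залишиться у стані Pending.
+
+```shell
+# Створити Pod з умовними вимогами до ресурсів і мітки вузла
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: scheduler-demo
+spec:
+  nodeSelector:
+    disktype: ssd
+  containers:
+  - name: app
+    image: nginx
+    resources:
+      requests:
+        cpu: "250m"
+        memory: "128Mi"
+EOF
+
+# Перевірити, на який вузол планувальник розподілив Pod
+kubectl get pod scheduler-demo -o wide
+```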
diff --git a/content/uk/docs/reference/glossary/kubelet.md b/content/uk/docs/reference/glossary/kubelet.md new file mode 100644 index 0000000000000..d46d18610c8b8 --- /dev/null +++ b/content/uk/docs/reference/glossary/kubelet.md @@ -0,0 +1,23 @@ +--- +title: Kubelet +id: kubelet +date: 2018-04-12 +full_link: /docs/reference/generated/kubelet +# short_description: > +# An agent that runs on each node in the cluster. It makes sure that containers are running in a pod. +short_description: > + Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Pod'ах. + +aka: +tags: +- fundamental +- core-object +--- + +Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Pod'ах. + + + + +kubelet використовує специфікації PodSpecs, які надаються за допомогою різних механізмів, і забезпечує працездатність і справність усіх контейнерів, що описані у PodSpecs. kubelet керує лише тими контейнерами, що були створені Kubernetes. diff --git a/content/uk/docs/reference/glossary/pod.md b/content/uk/docs/reference/glossary/pod.md new file mode 100644 index 0000000000000..7ee87bb1af05a --- /dev/null +++ b/content/uk/docs/reference/glossary/pod.md @@ -0,0 +1,23 @@ +--- +# title: Pod +title: Pod +id: pod +date: 2018-04-12 +full_link: /docs/concepts/workloads/pods/pod-overview/ +# short_description: > +# The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster. +short_description: > + Найменший і найпростіший об'єкт Kubernetes. Pod являє собою групу контейнерів, що запущені у вашому кластері. + +aka: +tags: +- core-object +- fundamental +--- + + Найменший і найпростіший об'єкт Kubernetes. Pod являє собою групу {{< glossary_tooltip text="контейнерів" term_id="container" >}}, що запущені у вашому кластері. + + + + +Як правило, в одному Pod'і запускається один контейнер. У Pod'і також можуть бути запущені допоміжні контейнери, що забезпечують додаткову функціональність, наприклад, логування. Управління Pod'ами зазвичай здійснює {{< glossary_tooltip term_id="deployment" >}}. diff --git a/content/uk/docs/reference/glossary/selector.md b/content/uk/docs/reference/glossary/selector.md new file mode 100644 index 0000000000000..77eb861f4edb2 --- /dev/null +++ b/content/uk/docs/reference/glossary/selector.md @@ -0,0 +1,22 @@ +--- +# title: Selector +title: Селектор +id: selector +date: 2018-04-12 +full_link: /docs/concepts/overview/working-with-objects/labels/ +# short_description: > +# Allows users to filter a list of resources based on labels. +short_description: > + Дозволяє користувачам фільтрувати ресурси за мітками. + +aka: +tags: +- fundamental +--- + +Дозволяє користувачам фільтрувати ресурси за мітками. + + + + +Селектори застосовуються при створенні запитів для фільтрації ресурсів за {{< glossary_tooltip text="мітками" term_id="label" >}}. diff --git a/content/uk/docs/reference/glossary/service.md b/content/uk/docs/reference/glossary/service.md new file mode 100755 index 0000000000000..d813d7056c6b0 --- /dev/null +++ b/content/uk/docs/reference/glossary/service.md @@ -0,0 +1,24 @@ +--- +title: Service +id: service +date: 2018-04-12 +full_link: /docs/concepts/services-networking/service/ +# A way to expose an application running on a set of Pods as a network service. +short_description: > + Спосіб відкрити доступ до застосунку, що запущений на декількох Pod'ах у вигляді мережевої служби. 
+ +aka: +tags: +- fundamental +- core-object +--- + +Це абстрактний спосіб відкрити доступ до застосунку, що працює як один (або декілька) {{< glossary_tooltip text="Pod'ів" term_id="pod" >}} у вигляді мережевої служби. + + + + +Переважно група Pod'ів визначається як Service за допомогою {{< glossary_tooltip text="селектора" term_id="selector" >}}. Додання або вилучення Pod'ів змінить групу Pod'ів, визначених селектором. Service забезпечує надходження мережевого трафіка до актуальної групи Pod'ів для підтримки робочого навантаження. diff --git a/content/uk/docs/setup/_index.md b/content/uk/docs/setup/_index.md new file mode 100644 index 0000000000000..f7874f9fc422a --- /dev/null +++ b/content/uk/docs/setup/_index.md @@ -0,0 +1,136 @@ +--- +reviewers: +- brendandburns +- erictune +- mikedanese +no_issue: true +title: Початок роботи +main_menu: true +weight: 20 +content_template: templates/concept +card: + name: setup + weight: 20 + anchors: + - anchor: "#навчальне-середовище" + title: Навчальне середовище + - anchor: "#прод-оточення" + title: Прод оточення +--- + +{{% capture overview %}} + + +У цьому розділі розглянуто різні варіанти налаштування і запуску Kubernetes. + + +Різні рішення Kubernetes відповідають різним вимогам: легкість в експлуатації, безпека, система контролю, наявні ресурси та досвід, необхідний для управління кластером. + + +Ви можете розгорнути Kubernetes кластер на робочому комп'ютері, у хмарі чи в локальному дата-центрі, або обрати керований Kubernetes кластер. Також можна створити індивідуальні рішення на базі різних провайдерів хмарних сервісів або на звичайних серверах. + + +Простіше кажучи, ви можете створити Kubernetes кластер у навчальному і в прод оточеннях. + +{{% /capture %}} + +{{% capture body %}} + + + +## Навчальне оточення {#навчальне-оточення} + + +Для вивчення Kubernetes використовуйте рішення на базі Docker: інструменти, підтримувані спільнотою Kubernetes, або інші інструменти з сімейства проектів для налаштування Kubernetes кластера на локальному комп'ютері. + +{{< table caption="Таблиця інструментів для локального розгортання Kubernetes, які підтримуються спільнотою або входять до сімейства проектів Kubernetes." >}} + +|Спільнота |Сімейство проектів | +| ------------ | -------- | +| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) | +| [kind (Kubernetes IN Docker)](https://github.com/kubernetes-sigs/kind) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| +| | [Minishift](https://docs.okd.io/latest/minishift/)| +| | [MicroK8s](https://microk8s.io/)| +| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | +| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)| +| | [k3s](https://k3s.io)| + + +## Прод оточення {#прод-оточення} + + +Обираючи рішення для проду, визначіться, якими з функціональних складових (або абстракцій) Kubernetes кластера ви хочете керувати самі, а управління якими - доручити провайдеру. + + +У Kubernetes кластері можливі наступні абстракції: {{< glossary_tooltip text="застосунки" term_id="applications" >}}, {{< glossary_tooltip text="площина даних" term_id="data-plane" >}}, {{< glossary_tooltip text="площина управління" term_id="control-plane" >}}, {{< glossary_tooltip text="інфраструктура кластера" term_id="cluster-infrastructure" >}} та {{< glossary_tooltip text="операції з кластером" term_id="cluster-operations" >}}. 
+ + +На діаграмі нижче показані можливі абстракції Kubernetes кластера із зазначенням, які з них потребують самостійного управління, а які можуть бути керовані провайдером. + +Рішення для прод оточення![Рішення для прод оточення](/images/docs/KubernetesSolutions.svg) + +{{< table caption="Таблиця рішень для прод оточення містить перелік провайдерів і їх технологій." >}} + +Таблиця рішень для прод оточення містить перелік провайдерів і технологій, які вони пропонують. + +|Провайдери | Керований сервіс | Хмара "під ключ" | Локальний дата-центр | Під замовлення (хмара) | Під замовлення (локальні ВМ)| Під замовлення (сервери без ОС) | +| --------- | ------ | ------ | ------ | ------ | ------ | ----- | +| [Agile Stacks](https://www.agilestacks.com/products/kubernetes)| | ✔ | ✔ | | | +| [Alibaba Cloud](https://www.alibabacloud.com/product/kubernetes)| | ✔ | | | | +| [Amazon](https://aws.amazon.com) | [Amazon EKS](https://aws.amazon.com/eks/) |[Amazon EC2](https://aws.amazon.com/ec2/) | | | | +| [AppsCode](https://appscode.com/products/pharmer/) | ✔ | | | | | +| [APPUiO](https://appuio.ch/)  | ✔ | ✔ | ✔ | | | | +| [Banzai Cloud Pipeline Kubernetes Engine (PKE)](https://banzaicloud.com/products/pke/) | | ✔ | | ✔ | ✔ | ✔ | +| [CenturyLink Cloud](https://www.ctl.io/) | | ✔ | | | | +| [Cisco Container Platform](https://cisco.com/go/containers) | | | ✔ | | | +| [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) | | | | ✔ |✔ | +| [CloudStack](https://cloudstack.apache.org/) | | | | | ✔| +| [Canonical](https://ubuntu.com/kubernetes) | ✔ | ✔ | ✔ | ✔ |✔ | ✔ +| [Containership](https://containership.io) | ✔ |✔ | | | | +| [D2iQ](https://d2iq.com/) | | [Kommander](https://d2iq.com/solutions/ksphere) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | +| [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | ✔ +| [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | ✔ | | | | | +| [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |✔ | ✔ | | | ✔ +| [Gardener](https://gardener.cloud/) | ✔ | ✔ | ✔ | ✔ | ✔ | [Custom Extensions](https://github.com/gardener/gardener/blob/master/docs/extensions/overview.md) | +| [Giant Swarm](https://www.giantswarm.io/) | ✔ | ✔ | ✔ | | +| [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/)|[GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | | | | | | +| [Hidora](https://hidora.com/) | ✔ | ✔| ✔ | | | | | | | | +| [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | | +| [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | | +| [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | | +| [KubeOne](https://kubeone.io/) | | ✔ | ✔ | ✔ | ✔ | ✔ | +| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | | +| [KubeSail](https://kubesail.com/) | ✔ | | | | | +| [Kubespray](https://kubespray.io/#/) | | | |✔ | ✔ | ✔ | +| [Kublr](https://kublr.com/) |✔ | ✔ |✔ |✔ |✔ |✔ | +| [Microsoft Azure](https://azure.microsoft.com) | [Azure 
Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) | | | | | +| [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) | | | ✔ | | | +| [NetApp Kubernetes Service (NKS)](https://cloud.netapp.com/kubernetes-service) | ✔ | ✔ | ✔ | | | +| [Nirmata](https://www.nirmata.com/) | | ✔ | ✔ | | | +| [Nutanix](https://www.nutanix.com/en) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | | | [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization) | +| [OpenNebula](https://www.opennebula.org) |[OpenNebula Kubernetes](https://marketplace.opennebula.systems/docs/service/kubernetes.html) | | | | | +| [OpenShift](https://www.openshift.com) |[OpenShift Dedicated](https://www.openshift.com/products/dedicated/) and [OpenShift Online](https://www.openshift.com/products/online/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) |[OpenShift Container Platform](https://www.openshift.com/products/container-platform/) +| [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | ✔ | ✔ | | | | +| [oVirt](https://www.ovirt.org/) | | | | | ✔ | +| [Pivotal](https://pivotal.io/) | | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | | | +| [Platform9](https://platform9.com/) | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | ✔ | ✔ | ✔ +| [Rancher](https://rancher.com/) | | [Rancher 2.x](https://rancher.com/docs/rancher/v2.x/en/) | | [Rancher Kubernetes Engine (RKE)](https://rancher.com/docs/rke/latest/en/) | | [k3s](https://k3s.io/) +| [Supergiant](https://supergiant.io/) | |✔ | | | | +| [SUSE](https://www.suse.com/) | | ✔ | | | | +| [SysEleven](https://www.syseleven.io/) | ✔ | | | | | +| [Tencent Cloud](https://intl.cloud.tencent.com/) | [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/product/tke) | ✔ | ✔ | | | ✔ | +| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | | +| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) +| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | | + +{{% /capture %}} diff --git a/content/uk/docs/setup/best-practices/_index.md b/content/uk/docs/setup/best-practices/_index.md new file mode 100644 index 0000000000000..696ad54d32b88 --- /dev/null +++ b/content/uk/docs/setup/best-practices/_index.md @@ -0,0 +1,5 @@ +--- +#title: Best practices +title: Найкращі практики +weight: 40 +--- diff --git a/content/uk/docs/setup/learning-environment/_index.md b/content/uk/docs/setup/learning-environment/_index.md new file mode 100644 index 0000000000000..879c4eb9f6c7f --- /dev/null +++ b/content/uk/docs/setup/learning-environment/_index.md @@ -0,0 +1,5 @@ +--- +# title: Learning environment +title: Навчальне оточення +weight: 20 +--- diff --git 
a/content/uk/docs/setup/production-environment/_index.md b/content/uk/docs/setup/production-environment/_index.md new file mode 100644 index 0000000000000..81463114fd563 --- /dev/null +++ b/content/uk/docs/setup/production-environment/_index.md @@ -0,0 +1,5 @@ +--- +#title: Production environment +title: Прод оточення +weight: 30 +--- diff --git a/content/uk/docs/setup/production-environment/on-premises-vm/_index.md b/content/uk/docs/setup/production-environment/on-premises-vm/_index.md new file mode 100644 index 0000000000000..d672032037d97 --- /dev/null +++ b/content/uk/docs/setup/production-environment/on-premises-vm/_index.md @@ -0,0 +1,5 @@ +--- +# title: On-Premises VMs +title: Менеджери віртуалізації +weight: 40 +--- diff --git a/content/uk/docs/setup/production-environment/tools/_index.md b/content/uk/docs/setup/production-environment/tools/_index.md new file mode 100644 index 0000000000000..8891b3ef4f3ed --- /dev/null +++ b/content/uk/docs/setup/production-environment/tools/_index.md @@ -0,0 +1,5 @@ +--- +# title: Installing Kubernetes with deployment tools +title: Встановлення Kubernetes за допомогою інструментів розгортання +weight: 30 +--- diff --git a/content/uk/docs/setup/production-environment/tools/kubeadm/_index.md b/content/uk/docs/setup/production-environment/tools/kubeadm/_index.md new file mode 100644 index 0000000000000..b5cd8e5426029 --- /dev/null +++ b/content/uk/docs/setup/production-environment/tools/kubeadm/_index.md @@ -0,0 +1,5 @@ +--- +# title: "Bootstrapping clusters with kubeadm" +title: "Запуск кластерів з kubeadm" +weight: 10 +--- diff --git a/content/uk/docs/setup/production-environment/turnkey/_index.md b/content/uk/docs/setup/production-environment/turnkey/_index.md new file mode 100644 index 0000000000000..4251bca2c4846 --- /dev/null +++ b/content/uk/docs/setup/production-environment/turnkey/_index.md @@ -0,0 +1,5 @@ +--- +# title: Turnkey Cloud Solutions +title: Хмарні рішення під ключ +weight: 30 +--- diff --git a/content/uk/docs/setup/production-environment/windows/_index.md b/content/uk/docs/setup/production-environment/windows/_index.md new file mode 100644 index 0000000000000..a0d1574f6c3b2 --- /dev/null +++ b/content/uk/docs/setup/production-environment/windows/_index.md @@ -0,0 +1,5 @@ +--- +# title: "Windows in Kubernetes" +title: "Windows в Kubernetes" +weight: 50 +--- diff --git a/content/uk/docs/setup/release/_index.md b/content/uk/docs/setup/release/_index.md new file mode 100755 index 0000000000000..5fed3d9c90ebf --- /dev/null +++ b/content/uk/docs/setup/release/_index.md @@ -0,0 +1,5 @@ +--- +#title: "Release notes and version skew" +title: "Зміни в релізах нових версій" +weight: 10 +--- diff --git a/content/uk/docs/templates/feature-state-alpha.txt b/content/uk/docs/templates/feature-state-alpha.txt new file mode 100644 index 0000000000000..e061aa52be02b --- /dev/null +++ b/content/uk/docs/templates/feature-state-alpha.txt @@ -0,0 +1,7 @@ +Наразі цей компонент у статусі *alpha*, що означає: + +* Назва версії містить слово alpha (напр. v1alpha1). +* Увімкнення цього компонента може призвести до помилок у системі. За умовчанням цей компонент вимкнутий. +* Підтримка цього компонентa може бути припинена у будь-який час без попередження. +* API може стати несумісним у наступних релізах без попередження. +* Рекомендований до використання лише у тестових кластерах через підвищений ризик виникнення помилок і відсутність довгострокової підтримки. 
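+
+Для ілюстрації (назва функціональності тут лише приклад; перелік alpha-компонентів залежить від версії Kubernetes): alpha-компоненти зазвичай вмикаються через feature gates, наприклад, під час запуску навчального кластера у Minikube:
+
+```shell
+# Увімкнути alpha-компонент через feature gate у навчальному кластері (назва умовна)
+minikube start --feature-gates=EphemeralContainers=true
+```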
diff --git a/content/uk/docs/templates/feature-state-beta.txt b/content/uk/docs/templates/feature-state-beta.txt new file mode 100644 index 0000000000000..3790be73f4718 --- /dev/null +++ b/content/uk/docs/templates/feature-state-beta.txt @@ -0,0 +1,22 @@ + +Наразі цей компонент у статусі *beta*, що означає: + + +* Назва версії містить слово beta (наприклад, v2beta3). + +* Код добре відтестований. Увімкнення цього компонента не загрожує роботі системи. Компонент увімкнутий за умовчанням. + +* Загальна підтримка цього компонента триватиме, однак деталі можуть змінитися. + +* У наступній beta- чи стабільній версії схема та/або семантика об'єктів може змінитися і стати несумісною. У такому випадку ми надамо інструкції для міграції на наступну версію. Це може призвести до видалення, редагування і перестворення об'єктів API. У процесі редагування вам, можливо, знадобиться продумати зміни в об'єкті. Це може призвести до недоступності застосунків, для роботи яких цей компонент є істотно важливим. + +* Використання компонента рекомендоване лише у некритичних для безперебійної діяльності випадках через ризик несумісних змін у подальших релізах. Це обмеження може бути пом'якшене у випадку декількох кластерів, які можна оновлювати окремо. + +* **Будь ласка, спробуйте beta-версії наших компонентів і поділіться з нами своєю думкою! Після того, як компонент вийде зі статусу beta, нам буде важче змінити його.** diff --git a/content/uk/docs/templates/feature-state-deprecated.txt b/content/uk/docs/templates/feature-state-deprecated.txt new file mode 100644 index 0000000000000..7c35b3fc2f04b --- /dev/null +++ b/content/uk/docs/templates/feature-state-deprecated.txt @@ -0,0 +1,4 @@ + + +Цей компонент є *застарілим*. Дізнатися більше про цей статус ви можете зі статті [Політика Kubernetes щодо застарілих компонентів](/docs/reference/deprecation-policy/). diff --git a/content/uk/docs/templates/feature-state-stable.txt b/content/uk/docs/templates/feature-state-stable.txt new file mode 100644 index 0000000000000..a794f5ceb6134 --- /dev/null +++ b/content/uk/docs/templates/feature-state-stable.txt @@ -0,0 +1,11 @@ + + +Цей компонент є *стабільним*, що означає: + + +* Назва версії становить vX, де X є цілим числом. + +* Стабільні версії компонентів з'являтимуться у багатьох наступних версіях програмного забезпечення. \ No newline at end of file diff --git a/content/uk/docs/templates/index.md b/content/uk/docs/templates/index.md new file mode 100644 index 0000000000000..0e0b890542ef8 --- /dev/null +++ b/content/uk/docs/templates/index.md @@ -0,0 +1,15 @@ +--- +headless: true + +resources: +- src: "*alpha*" + title: "alpha" +- src: "*beta*" + title: "beta" +- src: "*deprecated*" +# title: "deprecated" + title: "застарілий" +- src: "*stable*" +# title: "stable" + title: "стабільний" +--- \ No newline at end of file diff --git a/content/uk/docs/tutorials/_index.md b/content/uk/docs/tutorials/_index.md new file mode 100644 index 0000000000000..5c30bc87ffb7e --- /dev/null +++ b/content/uk/docs/tutorials/_index.md @@ -0,0 +1,90 @@ +--- +#title: Tutorials +title: Навчальні матеріали +main_menu: true +weight: 60 +content_template: templates/concept +--- + +{{% capture overview %}} + + +У цьому розділі документації Kubernetes зібрані навчальні матеріали. Кожний матеріал показує, як досягти окремої мети, що більша за одне [завдання](/docs/tasks/). Зазвичай навчальний матеріал має декілька розділів, кожен з яких містить певну послідовність дій. 
До ознайомлення з навчальними матеріалами вам, можливо, знадобиться додати у закладки сторінку з [Глосарієм](/docs/reference/glossary/) для подальшого консультування. + +{{% /capture %}} + +{{% capture body %}} + + +## Основи + + +* [Основи Kubernetes](/docs/tutorials/kubernetes-basics/) - детальний навчальний матеріал з інтерактивними уроками, що допоможе вам зрозуміти Kubernetes і спробувати його базову функціональність. + +* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) + +* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) + +* [Привіт Minikube](/docs/tutorials/hello-minikube/) + + +## Конфігурація + +* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/) + +## Застосунки без стану (Stateless Applications) {#застосунки-без-стану} + +* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) + +* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/) + +## Застосунки зі станом (Stateful Applications) {#застосунки-зі-станом} + +* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/) + +* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) + +* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/) + +* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/) + +## CI/CD Pipeline + +* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview) + +* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2) + +* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3) + +* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4) + +## Кластери + +* [AppArmor](/docs/tutorials/clusters/apparmor/) + +## Сервіси + +* [Using Source IP](/docs/tutorials/services/source-ip/) + +{{% /capture %}} + +{{% capture whatsnext %}} + + +Якщо ви хочете написати навчальний матеріал, у статті +[Використання шаблонів сторінок](/docs/home/contribute/page-templates/) +ви знайдете інформацію про тип навчальної сторінки і шаблон. + +{{% /capture %}} diff --git a/content/uk/docs/tutorials/hello-minikube.md b/content/uk/docs/tutorials/hello-minikube.md new file mode 100644 index 0000000000000..6a1d77d7d28c6 --- /dev/null +++ b/content/uk/docs/tutorials/hello-minikube.md @@ -0,0 +1,394 @@ +--- +#title: Hello Minikube +title: Привіт Minikube +content_template: templates/tutorial +weight: 5 +menu: + main: + #title: "Get Started" + title: "Початок роботи" + weight: 10 + #post: > + #

          Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.

          + post: > +

          Готові попрацювати? Створимо простий Kubernetes кластер для запуску Node.js застосунку "Hello World".

          +card: + #name: tutorials + name: навчальні матеріали + weight: 10 +--- + +{{% capture overview %}} + + +З цього навчального матеріалу ви дізнаєтесь, як запустити у Kubernetes простий Hello World застосунок на Node.js за допомогою [Minikube](/docs/setup/learning-environment/minikube) і Katacoda. Katacoda надає безплатне Kubernetes середовище, що доступне у вашому браузері. + + +{{< note >}} +Також ви можете навчатись за цим матеріалом, якщо встановили [Minikube локально](/docs/tasks/tools/install-minikube/). +{{< /note >}} + +{{% /capture %}} + +{{% capture objectives %}} + + +* Розгорнути Hello World застосунок у Minikube. + +* Запустити застосунок. + +* Переглянути логи застосунку. + +{{% /capture %}} + +{{% capture prerequisites %}} + + +У цьому навчальному матеріалі ми використовуємо образ контейнера, зібраний із наступних файлів: + +{{< codenew language="js" file="minikube/server.js" >}} + +{{< codenew language="conf" file="minikube/Dockerfile" >}} + + +Більше інформації про команду `docker build` ви знайдете у [документації Docker](https://docs.docker.com/engine/reference/commandline/build/). + +{{% /capture %}} + +{{% capture lessoncontent %}} + + +## Створення Minikube кластера + + +1. Натисніть кнопку **Запуск термінала** + + {{< kat-button >}} + + + {{< note >}}Якщо Minikube встановлений локально, виконайте команду `minikube start`.{{< /note >}} + + +2. Відкрийте Kubernetes дашборд у браузері: + + ```shell + minikube dashboard + ``` + + +3. Тільки для Katacoda: у верхній частині вікна термінала натисніть знак плюс, а потім -- **Select port to view on Host 1**. + + +4. Тільки для Katacoda: введіть `30000`, а потім натисніть **Display Port**. + + +## Створення Deployment + + +[*Pod*](/docs/concepts/workloads/pods/pod/) у Kubernetes -- це група з одного або декількох контейнерів, що об'єднані разом з метою адміністрування і роботи у мережі. У цьому навчальному матеріалі Pod має лише один контейнер. Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) перевіряє стан Pod'а і перезапускає контейнер Pod'а, якщо контейнер перестає працювати. Створювати і масштабувати Pod'и рекомендується за допомогою Deployment'ів. + + +1. За допомогою команди `kubectl create` створіть Deployment, який керуватиме Pod'ом. Pod запускає контейнер на основі наданого Docker образу. + + ```shell + kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 + ``` + + +2. Перегляньте інформацію про запущений Deployment: + + ```shell + kubectl get deployments + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + hello-node 1/1 1 1 1m + ``` + + +3. Перегляньте інформацію про запущені Pod'и: + + ```shell + kubectl get pods + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + NAME READY STATUS RESTARTS AGE + hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m + ``` + + +4. Перегляньте події кластера: + + ```shell + kubectl get events + ``` + + +5. Перегляньте конфігурацію `kubectl`: + + ```shell + kubectl config view + ``` + + + {{< note >}}Більше про команди `kubectl` ви можете дізнатися зі статті [Загальна інформація про kubectl](/docs/user-guide/kubectl-overview/).{{< /note >}} + + +## Створення Service + + +За умовчанням, Pod доступний лише за внутрішньою IP-адресою у межах Kubernetes кластера. Для того, щоб контейнер `hello-node` став доступний за межами віртуальної мережі Kubernetes, Pod необхідно відкрити як Kubernetes [*Service*](/docs/concepts/services-networking/service/). + + +1. 
Відкрийте Pod для публічного доступу з інтернету за допомогою команди `kubectl expose`: + + ```shell + kubectl expose deployment hello-node --type=LoadBalancer --port=8080 + ``` + + + Прапорець `--type=LoadBalancer` вказує, що ви хочете відкрити доступ до Service за межами кластера. + + +2. Перегляньте інформацію про Service, який ви щойно створили: + + ```shell + kubectl get services + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + hello-node LoadBalancer 10.108.144.78 8080:30369/TCP 21s + kubernetes ClusterIP 10.96.0.1 443/TCP 23m + ``` + + + Для хмарних провайдерів, що підтримують балансування навантаження, доступ до Service надається через зовнішню IP-адресу. Для Minikube, тип `LoadBalancer` робить Service доступним ззовні за допомогою команди `minikube service`. + + +3. Виконайте наступну команду: + + ```shell + minikube service hello-node + ``` + + +4. Тільки для Katacoda: натисніть знак плюс, а потім -- **Select port to view on Host 1**. + + +5. Тільки для Katacoda: запишіть п'ятизначний номер порту, що відображається напроти `8080` у виводі сервісу. Номер цього порту генерується довільно і тому може бути іншим у вашому випадку. Введіть номер порту у призначене для цього текстове поле і натисніть Display Port. У нашому прикладі номер порту `30369`. + + + Це відкриє вікно браузера, в якому запущений ваш застосунок, і покаже повідомлення "Hello World". + + +## Увімкнення розширень + + +Minikube має ряд вбудованих {{< glossary_tooltip text="розширень" term_id="addons" >}}, які можна увімкнути, вимкнути і відкрити у локальному Kubernetes оточенні. + + +1. Перегляньте перелік підтримуваних розширень: + + ```shell + minikube addons list + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + addon-manager: enabled + dashboard: enabled + default-storageclass: enabled + efk: disabled + freshpod: disabled + gvisor: disabled + helm-tiller: disabled + ingress: disabled + ingress-dns: disabled + logviewer: disabled + metrics-server: disabled + nvidia-driver-installer: disabled + nvidia-gpu-device-plugin: disabled + registry: disabled + registry-creds: disabled + storage-provisioner: enabled + storage-provisioner-gluster: disabled + ``` + + +2. Увімкніть розширення, наприклад `metrics-server`: + + ```shell + minikube addons enable metrics-server + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + metrics-server was successfully enabled + ``` + + +3. Перегляньте інформацію про Pod і Service, які ви щойно створили: + + ```shell + kubectl get pod,svc -n kube-system + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + NAME READY STATUS RESTARTS AGE + pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m + pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m + pod/metrics-server-67fb648c5 1/1 Running 0 26s + pod/etcd-minikube 1/1 Running 0 34m + pod/influxdb-grafana-b29w8 2/2 Running 0 26s + pod/kube-addon-manager-minikube 1/1 Running 0 34m + pod/kube-apiserver-minikube 1/1 Running 0 34m + pod/kube-controller-manager-minikube 1/1 Running 0 34m + pod/kube-proxy-rnlps 1/1 Running 0 34m + pod/kube-scheduler-minikube 1/1 Running 0 34m + pod/storage-provisioner 1/1 Running 0 34m + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/metrics-server ClusterIP 10.96.241.45 80/TCP 26s + service/kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 34m + service/monitoring-grafana NodePort 10.99.24.54 80:30002/TCP 26s + service/monitoring-influxdb ClusterIP 10.111.169.94 8083/TCP,8086/TCP 26s + ``` + + +4. 
Вимкніть `metrics-server`: + + ```shell + minikube addons disable metrics-server + ``` + + + У виводі ви побачите подібну інформацію: + + ``` + metrics-server was successfully disabled + ``` + + +## Вивільнення ресурсів + + +Тепер ви можете видалити ресурси, які створили у вашому кластері: + +```shell +kubectl delete service hello-node +kubectl delete deployment hello-node +``` + + +За бажанням, зупиніть віртуальну машину (ВМ) з Minikube: + +```shell +minikube stop +``` + + +За бажанням, видаліть ВМ з Minikube: + +```shell +minikube delete +``` + +{{% /capture %}} + +{{% capture whatsnext %}} + + +* Дізнайтеся більше про [об'єкти Deployment](/docs/concepts/workloads/controllers/deployment/). + +* Дізнайтеся більше про [розгортання застосунків](/docs/user-guide/deploying-applications/). + +* Дізнайтеся більше про [об'єкти Service](/docs/concepts/services-networking/service/). + +{{% /capture %}} diff --git a/content/uk/docs/tutorials/kubernetes-basics/_index.html b/content/uk/docs/tutorials/kubernetes-basics/_index.html new file mode 100644 index 0000000000000..58100ccf1ec6c --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/_index.html @@ -0,0 +1,138 @@ +--- +title: Дізнатися про основи Kubernetes +linkTitle: Основи Kubernetes +weight: 10 +card: + name: навчальні матеріали + weight: 20 + title: Знайомство з основами +--- + + + + + + + + + +
          + +
          + +
          +
          + +

          Основи Kubernetes

          + +

          Цей навчальний матеріал ознайомить вас з основами системи оркестрації Kubernetes кластера. Кожен модуль містить загальну інформацію щодо основної функціональності і концепцій Kubernetes, а також інтерактивний онлайн-урок. Завдяки цим інтерактивним урокам ви зможете самостійно керувати простим кластером і розгорнутими в ньому контейнеризованими застосунками.

          + +

          З інтерактивних уроків ви дізнаєтесь:

          +
            + +
          • як розгорнути контейнеризований застосунок у кластері.
          • + +
          • як масштабувати Deployment.
          • + +
          • як розгорнути нову версію контейнеризованого застосунку.
          • + +
          • як відлагодити контейнеризований застосунок.
          • +
          + +

          Навчальні матеріали використовують Katacoda для запуску у вашому браузері віртуального термінала, в якому запущено Minikube - невеликий локально розгорнутий Kubernetes, що може працювати будь-де. Вам не потрібно встановлювати або налаштовувати жодне програмне забезпечення: кожен інтерактивний урок запускається просто у вашому браузері.

          +
          +
          + +
          + +
          +
          + +

          Чим Kubernetes може бути корисний для вас?

          + +

          Від сучасних вебсервісів користувачі очікують доступності 24/7, а розробники - можливості розгортати нові версії цих застосунків по кілька разів на день. Контейнеризація, що допомагає упакувати програмне забезпечення, якнайкраще сприяє цим цілям. Вона дозволяє випускати і оновлювати застосунки легко, швидко та без простою. Із Kubernetes ви можете бути певні, що ваші контейнеризовані застосунки запущені там і тоді, де ви цього хочете, а також забезпечені усіма необхідними для роботи ресурсами та інструментами. Kubernetes - це висококласна платформа з відкритим вихідним кодом, в основі якої - накопичений досвід оркестрації контейнерів від Google, поєднаний із найкращими ідеями і практиками від спільноти.

          +
          +
          + +
          + + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md new file mode 100644 index 0000000000000..9173c90d3448d --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/_index.md @@ -0,0 +1,4 @@ +--- +title: Створення кластера +weight: 10 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html new file mode 100644 index 0000000000000..f90f229c2a019 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -0,0 +1,37 @@ +--- +title: Інтерактивний урок - Створення кластера +weight: 20 +--- + + + + + + + + + + + +
          + +
          + +
          +
          + Для роботи з терміналом використовуйте комп'ютер або планшет +
          +
          +
          + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html new file mode 100644 index 0000000000000..a7ca8e994fc64 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -0,0 +1,152 @@ +--- +title: Використання Minikube для створення кластера +weight: 10 +--- + + + + + + + + + +
          + +
          + +
          + +
          + +

          Цілі

          +
            + +
          • Зрозуміти, що таке Kubernetes кластер.
          • + +
          • Зрозуміти, що таке Minikube.
          • + +
          • Запустити Kubernetes кластер за допомогою онлайн-термінала.
          • +
          +
          + +
          + +

          Kubernetes кластери

          + +

          + Kubernetes координує високодоступний кластер комп'ютерів, з'єднаних таким чином, щоб працювати як одне ціле. Абстракції Kubernetes дозволяють вам розгортати контейнеризовані застосунки в кластері без конкретної прив'язки до окремих машин. Для того, щоб скористатися цією новою моделлю розгортання, застосунки потрібно упакувати таким чином, щоб звільнити їх від прив'язки до окремих хостів, тобто контейнеризувати. Контейнеризовані застосунки більш гнучкі і доступні, ніж попередні моделі розгортання, що передбачали встановлення застосунків безпосередньо на призначені для цього машини у вигляді програмного забезпечення, яке глибоко інтегрувалося із хостом. Kubernetes дозволяє автоматизувати розподіл і запуск контейнерів застосунку у кластері, а це набагато ефективніше. Kubernetes - це платформа з відкритим вихідним кодом, готова для використання у проді. +

          + +

          Kubernetes кластер складається з двох типів ресурсів: +

            +
          • master, що координує роботу кластера
          • +
          • вузли (nodes) - робочі машини, на яких запущені застосунки
          • +
          +

          +
          + +
          +
          + +

          Зміст:

          +
            + +
          • Kubernetes кластер
          • + +
          • Minikube
          • +
          +
          +
          + +

          + Kubernetes - це довершена платформа з відкритим вихідним кодом, що оркеструє розміщення і запуск контейнерів застосунку всередині та між комп'ютерними кластерами. +

          +
          +
          +
          +
          + +
          +
          +

          Схема кластера

          +
          +
          + +
          +
          +

          +
          +
          +
          + +
          +
          + +

          Master відповідає за керування кластером. Master координує всі процеси у вашому кластері, такі як запуск застосунків, підтримка їх бажаного стану, масштабування застосунків і розгортання нових оновлень.

          + + +

          Вузол (node) - це ВМ або фізичний комп'ютер, що виступає у ролі робочої машини в Kubernetes кластері. Кожен вузол має kubelet - агент для управління вузлом і обміну даними з Kubernetes master. Також на вузлі мають бути встановлені інструменти для виконання операцій з контейнерами, такі як Docker або rkt. Kubernetes кластер у проді повинен складатися як мінімум із трьох вузлів.
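Нижче наведено невеликий додатковий приклад (він не входить до оригінального уроку): як переглянути вузли кластера та деталі окремого вузла за допомогою kubectl. Ім'я вузла тут умовне.

```shell
# Перелік вузлів кластера та їхній стан
kubectl get nodes

# Детальна інформація про окремий вузол (ім'я вузла умовне)
kubectl describe node minikube
```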

          + +
          +
          +
          + +

          Master'и керують кластером, а вузли використовуються для запуску застосунків.

          +
          +
          +
          + +
          +
          + +

          Коли ви розгортаєте застосунки у Kubernetes, ви кажете master-вузлу запустити контейнери застосунку. Master розподіляє контейнери для запуску на вузлах кластера. Для обміну даними з master вузли використовують Kubernetes API, який надається master-вузлом. Кінцеві користувачі також можуть взаємодіяти із кластером безпосередньо через Kubernetes API.
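Як орієнтовний приклад взаємодії з Kubernetes API (ці команди не є частиною цього уроку): kubectl показує адресу API-сервера, а kubectl proxy відкриває до нього локальний доступ.

```shell
# Адреса API-сервера та основних сервісів кластера
kubectl cluster-info

# В окремому терміналі: проксі до Kubernetes API на localhost:8001
kubectl proxy

# Після цього API доступний локально, наприклад:
curl http://localhost:8001/version
```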

          + + +

          Kubernetes кластер можна розгорнути як на фізичних, так і на віртуальних серверах. Щоб розпочати розробку під Kubernetes, ви можете скористатися Minikube - спрощеною реалізацією Kubernetes. Minikube створює на вашому локальному комп'ютері ВМ, на якій розгортає простий кластер з одного вузла. Існують версії Minikube для операційних систем Linux, macOS та Windows. Minikube CLI надає основні операції для роботи з вашим кластером, такі як start, stop, status і delete. Однак у цьому уроці ви використовуватимете онлайн термінал із вже встановленим Minikube.
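Для ілюстрації: якщо встановити Minikube локально (а не користуватися онлайн терміналом, як у цьому уроці), основні операції Minikube CLI виглядатимуть приблизно так:

```shell
minikube start    # створити або запустити локальний кластер
minikube status   # перевірити стан кластера
minikube stop     # зупинити віртуальну машину, зберігши стан кластера
minikube delete   # повністю видалити локальний кластер
```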

          + + +

          Тепер ви знаєте, що таке Kubernetes. Тож давайте перейдемо до інтерактивного уроку і створимо ваш перший кластер!

          + +
          +
          +
          + + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md new file mode 100644 index 0000000000000..a9c1ff23760ed --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/_index.md @@ -0,0 +1,4 @@ +--- +title: Розгортання застосунку +weight: 20 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html new file mode 100644 index 0000000000000..d91c2bb1c0d72 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -0,0 +1,41 @@ +--- +title: Інтерактивний урок - Розгортання застосунку +weight: 20 +--- + + + + + + + + + + + +
          + +
          + +
          +
          +
          + Для роботи з терміналом використовуйте комп'ютер або планшет +
          + +
          +
          + +
          + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html new file mode 100644 index 0000000000000..942e6ac55fd56 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -0,0 +1,151 @@ +--- +title: Використання kubectl для створення Deployment'а +weight: 10 +--- + + + + + + + + + +
          + +
          + +
          + +
          + +

          Цілі

          +
            + +
          • Дізнатися, що таке Deployment застосунків.
          • + +
          • Розгорнути свій перший застосунок у Kubernetes за допомогою kubectl.
          • +
          +
          + +
          + +

          Процеси Kubernetes Deployment

          + +

          + Після того, як ви запустили Kubernetes кластер, ви можете розгортати в ньому контейнеризовані застосунки. Для цього вам необхідно створити Deployment конфігурацію. Вона інформує Kubernetes, як створювати і оновлювати Pod'и для вашого застосунку. Після того, як ви створили Deployment, Kubernetes master розподіляє ці Pod'и по окремих вузлах кластера. +

          + + +

          Після створення Pod'и застосунку безперервно моніторяться контролером Kubernetes Deployment. Якщо вузол, на якому розміщено Pod, зупинив роботу або був видалений, Deployment контролер переміщає цей Pod на інший вузол кластера. Так працює механізм самозцілення, що підтримує робочий стан кластера у разі апаратного збою чи технічних робіт.

          + + +

          До появи оркестрації застосунки часто запускали за допомогою скриптів установлення. Однак скрипти не давали можливості відновити працездатний стан застосунку після апаратного збою. Завдяки створенню Pod'ів та їхньому запуску на вузлах кластера, Kubernetes Deployment надає цілковито інший підхід до управління застосунками.

          + +
          + +
          +
          + +

          Зміст:

          +
            + +
          • Deployment'и
          • +
          • Kubectl
          • +
          +
          +
          + +

          + Deployment відповідає за створення і оновлення Pod'ів для вашого застосунку +

          +
          +
          +
          +
          + +
          +
          +

          Як розгорнути ваш перший застосунок у Kubernetes

          +
          +
          + +
          +
          +

          +
          +
          +
          + +
          +
          + + +

          Ви можете створити Deployment і керувати ним за допомогою командного рядка Kubernetes - kubectl. kubectl взаємодіє з кластером через API Kubernetes. У цьому модулі ви вивчите найпоширеніші команди kubectl для створення Deployment'ів, які запускатимуть ваші застосунки у Kubernetes кластері.
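Орієнтовний приклад такої команди (назва Deployment'а та образ контейнера тут умовні і наведені лише для ілюстрації):

```shell
# Створити Deployment із заданим образом контейнера
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1

# Переглянути створені Deployment'и
kubectl get deployments
```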

          + + +

          Коли ви створюєте Deployment, вам необхідно задати образ контейнера для вашого застосунку і скільки реплік ви хочете запустити. Згодом цю інформацію можна змінити, оновивши Deployment. У навчальних модулях 5 і 6 йдеться про те, як масштабувати і оновлювати Deployment'и.

          + + + + +
          +
          +
          + +

          Для того, щоб розгортати застосунки в Kubernetes, їх потрібно упакувати в один із підтримуваних форматів контейнерів

          +
          +
          +
          + +
          +
          + +

          + Для створення Deployment'а ви використовуватимете застосунок, написаний на Node.js і упакований в Docker контейнер. (Якщо ви ще не пробували створити Node.js застосунок і розгорнути його у контейнері, радимо почати саме з цього; інструкції ви знайдете у навчальному матеріалі Привіт Minikube). +

          + + +

          Тепер ви знаєте, що таке Deployment. Тож давайте перейдемо до інтерактивного уроку і розгорнемо ваш перший застосунок!

          +
          +
          +
          + + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md b/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md new file mode 100644 index 0000000000000..93ac6a77745f9 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/explore/_index.md @@ -0,0 +1,4 @@ +--- +title: Вивчення застосунку +weight: 30 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html new file mode 100644 index 0000000000000..56a3793019cda --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -0,0 +1,41 @@ +--- +title: Інтерактивний урок - Вивчення застосунку +weight: 20 +--- + + + + + + + + + + + +
          + +
          + +
          +
          + +
          + Для роботи з терміналом використовуйте комп'ютер або планшет +
          + +
          +
          +
          + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html new file mode 100644 index 0000000000000..279eedde57a9c --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -0,0 +1,200 @@ +--- +title: Ознайомлення з Pod'ами і вузлами (nodes) +weight: 10 +--- + + + + + + + + + + +
          + +
          + +
          + +
          + +

          Цілі

          +
            + +
          • Дізнатися, що таке Pod'и Kubernetes.
          • + +
          • Дізнатися, що таке вузли Kubernetes.
          • + +
          • Навчитися діагностувати розгорнуті застосунки.
          • +
          +
          + +
          + +

          Pod'и Kubernetes

          + +

          Коли ви створили Deployment у модулі 2, Kubernetes створив Pod, щоб розмістити ваш застосунок. Pod - це абстракція в Kubernetes, що являє собою групу з одного або декількох контейнерів застосунку (як Docker або rkt) і ресурси, спільні для цих контейнерів. До цих ресурсів належать:

          +
            + +
          • Спільні сховища даних, або Volumes
          • + +
          • Мережа, адже кожен Pod у кластері має унікальну IP-адресу
          • + +
          • Інформація з запуску кожного контейнера, така як версія образу контейнера або використання певних портів
          • +
          + +

          Pod моделює специфічний для даного застосунку "логічний хост" і може містити різні, але доволі щільно зв'язані контейнери. Наприклад, в одному Pod'і може бути контейнер з вашим Node.js застосунком та інший контейнер, що передає дані для публікації Node.js вебсерверу. Контейнери в межах Pod'а мають спільну IP-адресу і порти, завжди є сполученими, плануються для запуску разом і запускаються у спільному контексті на одному вузлі.

          + + +

          Pod є неподільною одиницею платформи Kubernetes. Коли ви створюєте Deployment у Kubernetes, цей Deployment створює Pod'и вже з контейнерами всередині, на відміну від створення контейнерів окремо. Кожен Pod прив'язаний до вузла, до якого його було розподілено, і лишається на ньому до припинення роботи (згідно з політикою перезапуску) або видалення. У разі відмови вузла ідентичні Pod'и розподіляються по інших доступних вузлах кластера.

          + +
          +
          +
          +

          Зміст:

          +
            +
          • Pod'и
          • +
          • Вузли
          • +
          • Основні команди kubectl
          • +
          +
          +
          + +

          + Pod - це група з одного або декількох контейнерів (таких як Docker або rkt), що має спільне сховище даних (volumes), унікальну IP-адресу і містить інформацію як їх запустити. +

          +
          +
          +
          +
          + +
          +
          +

          Узагальнена схема Pod'ів

          +
          +
          + +
          +
          +

          +
          +
          +
          + +
          +
          + +

          Вузли

          + +

          Pod завжди запускається на вузлі. Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. Функціонування кожного вузла контролюється master'ом. Вузол може мати декілька Pod'ів. Kubernetes master автоматично розподіляє Pod'и по вузлах кластера з урахуванням ресурсів, наявних на кожному вузлі.
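Побачити, на який вузол розподілено кожен Pod, можна, наприклад, такою командою (вона не є частиною оригінального уроку):

```shell
# Колонка NODE показує вузол, на якому запущено кожен Pod
kubectl get pods -o wide
```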

          + + +

          На кожному вузлі Kubernetes запущені як мінімум:

          +
            + +
          • kubelet - процес, що забезпечує обмін даними між Kubernetes master і робочим вузлом; kubelet контролює Pod'и і контейнери, запущені на машині.
          • + +
          • оточення для контейнерів (таке як Docker, rkt), що забезпечує завантаження образу контейнера з реєстру, розпакування контейнера і запуск застосунку.
          • + +
          + +
          +
          +
          + +

          Контейнери повинні бути разом в одному Pod'і, лише якщо вони щільно зв'язані і мають спільні ресурси, такі як диск.

          +
          +
          +
          + +
          + +
          +
          +

          Узагальнена схема вузлів

          +
          +
          + +
          +
          +

          +
          +
          +
          + +
          +
          + +

          Діагностика за допомогою kubectl

          + +

          У модулі 2 ви вже використовували інтерфейс командного рядка kubectl. У модулі 3 ви продовжуватимете користуватися ним для отримання інформації про застосунки та оточення, в яких вони розгорнуті. Нижченаведені команди kubectl допоможуть вам виконати наступні поширені дії:

          +
            + +
          • kubectl get - відобразити список ресурсів
          • + +
          • kubectl describe - показати детальну інформацію про ресурс
          • + +
          • kubectl logs - вивести логи контейнера, розміщеного в Pod'і
          • + +
          • kubectl exec - виконати команду в контейнері, розміщеному в Pod'і
          • +
          + + +

          За допомогою цих команд ви можете подивитись, коли і в якому оточенні був розгорнутий застосунок, перевірити його поточний статус і конфігурацію.
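Невеликий приклад використання цих команд (ім'я Pod'а отримується динамічно; оболонка bash у контейнері - це припущення, вона має бути наявна в образі):

```shell
kubectl get pods          # перелік Pod'ів
kubectl describe pods     # детальна інформація про Pod'и

# Зберегти ім'я першого Pod'а у змінну
POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')

kubectl logs "$POD_NAME"               # логи контейнера в Pod'і
kubectl exec -ti "$POD_NAME" -- bash   # інтерактивна оболонка в контейнері
```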

          + + +

          А зараз, коли ми дізналися більше про складові нашого кластера і командний рядок, давайте детальніше розглянемо наш застосунок.

          + +
          +
          +
          + +

          Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. На одному вузлі можуть бути запущені декілька Pod'ів.

          +
          +
          +
          +
          + + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md b/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md new file mode 100644 index 0000000000000..ef49a1b632a48 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/expose/_index.md @@ -0,0 +1,4 @@ +--- +title: Відкриття доступу до застосунку за межами кластера +weight: 40 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html new file mode 100644 index 0000000000000..9ed658fa21c55 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -0,0 +1,38 @@ +--- +title: Інтерактивний урок - Відкриття доступу до застосунку +weight: 20 +--- + + + + + + + + + + + +
          + +
          + +
          +
          + Для роботи з терміналом використовуйте комп'ютер або планшет +
          +
          +
          +
          + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html new file mode 100644 index 0000000000000..d439553731027 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -0,0 +1,169 @@ +--- +#title: Using a Service to Expose Your App +title: Використання Cервісу для відкриття доступу до застосунку за межами кластера +weight: 10 +--- + + + + + + + + + +
          + +
          + +
          +
          + +

          Цілі

          +
            + +
          • Дізнатись, що таке Cервіс у Kubernetes
          • + +
          • Зрозуміти, яке відношення до Cервісу мають мітки та LabelSelector
          • + +
          • Відкрити доступ до застосунку за межами Kubernetes кластера, використовуючи Cервіс
          • +
          +
          + +
          + +

          Загальна інформація про Kubernetes Cервіси

          + + +

          Pod'и Kubernetes "смертні" і мають власний життєвий цикл. Коли робочий вузол припиняє роботу, ми також втрачаємо всі Pod'и, запущені на ньому. ReplicaSet здатна динамічно повернути кластер до бажаного стану шляхом створення нових Pod'ів, забезпечуючи безперебійність роботи вашого застосунку. Як інший приклад, візьмемо бекенд застосунку для обробки зображень із трьома репліками. Ці репліки взаємозамінні; система фронтенду не повинна зважати на репліки бекенду чи на втрату та перестворення Pod'а. Водночас, кожний Pod у Kubernetes кластері має унікальну IP-адресу, навіть Pod'и на одному вузлі. Відповідно, має бути спосіб автоматично синхронізувати зміни між Pod'ами для того, щоб ваші застосунки продовжували працювати.

          + + +

          Service у Kubernetes - це абстракція, що визначає логічний набір Pod'ів і політику доступу до них. Services уможливлюють слабку зв'язаність між залежними Pod'ами. Для визначення Service використовують YAML-файл (рекомендовано) або JSON, як для решти об'єктів Kubernetes. Набір Pod'ів, призначених для Service, зазвичай визначається через LabelSelector (нижче пояснюється, чому параметр selector іноді не включають у специфікацію Service).

          + + +

          Попри те, що кожен Pod має унікальний IP, ці IP-адреси не видні за межами кластера без Service. Services уможливлюють надходження трафіка до ваших застосунків. Відкрити Service можна по-різному, вказавши потрібний type у ServiceSpec:

          +
            + +
          • ClusterIP (типове налаштування) - відкриває доступ до Service у кластері за внутрішнім IP. Цей тип робить Service доступним лише у межах кластера.
          • + +
          • NodePort - відкриває доступ до Service на однаковому порту кожного обраного вузла в кластері, використовуючи NAT. Робить Service доступним поза межами кластера, використовуючи <NodeIP>:<NodePort>. Є надмножиною відносно ClusterIP.
          • + +
          • LoadBalancer - створює зовнішній балансувальник навантаження у хмарі (за умови хмарної інфраструктури) і призначає Service статичну зовнішню IP-адресу. Є надмножиною відносно NodePort.
          • + +
          • ExternalName - відкриває доступ до Service, використовуючи довільне ім'я (визначається параметром externalName у специфікації), повертає запис CNAME. Проксі не використовується. Цей тип потребує версії kube-dns 1.7 і вище.
          • +
          + +

          Більше інформації про різні типи Services ви знайдете у навчальному матеріалі Використання вихідної IP-адреси. Дивіться також Поєднання застосунків з Services.
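Як приблизний приклад, доступ до Deployment'а можна відкрити через Service типу NodePort (назва Deployment'а і порт умовні):

```shell
# Створити Service типу NodePort для наявного Deployment'а
kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port=8080

# Переглянути створений Service і призначений йому NodePort
kubectl get services
kubectl describe services/kubernetes-bootcamp
```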

          + +

          Також зауважте, що для деяких сценаріїв використання Services параметр selector не задається у специфікації Service. Service, створений без визначення параметра selector, також не створюватиме відповідного Endpoint об'єкта. Це дозволяє користувачам вручну спроектувати Service на конкретні кінцеві точки (endpoints). Інший випадок, коли Селектор може бути не потрібний - використання строго заданого параметра type: ExternalName.
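Для прикладу, Service типу ExternalName (без Селектора) можна створити, зокрема, такою командою, якщо ваша версія kubectl має відповідну підкоманду; ім'я Service'а та зовнішнє DNS-ім'я тут умовні:

```shell
# Service повертає запис CNAME на зовнішнє ім'я і не вибирає жодних Pod'ів
kubectl create service externalname my-database --external-name db.example.com
```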

          +
          +
          +
          + +

          Зміст

          +
            + +
          • Відкриття Pod'ів для зовнішнього трафіка
          • + +
          • Балансування навантаження трафіка між Pod'ами
          • + +
          • Використання міток
          • +
          +
          +
          + +

          Service Kubernetes - це шар абстракції, який визначає логічний набір Pod'ів і відкриває їх для зовнішнього трафіка, балансує навантаження і здійснює виявлення цих Pod'ів.

          +
          +
          +
          +
          + +
          +
          + +

          Services і мітки

          +
          +
          + +
          +
          +

          +
          +
          + +
          +
          + +

          Service маршрутизує трафік між Pod'ами, що входять до його складу. Service - це абстракція, завдяки якій Pod'и в Kubernetes "вмирають" і відтворюються, не впливаючи на роботу вашого застосунку. Services в Kubernetes здійснюють виявлення і маршрутизацію між залежними Pod'ами (як наприклад, фронтенд- і бекенд-компоненти застосунку).

          + +

          Services співвідносяться з набором Pod'ів за допомогою міток і Селекторів -- примітивів групування, що роблять можливими логічні операції з об'єктами у Kubernetes. Мітки являють собою пари ключ/значення, що прикріплені до об'єктів і можуть використовуватися для різних цілей:

          +
            + +
          • Позначення об'єктів для дев, тест і прод оточень
          • + +
          • Прикріплення тегу версії
          • + +
          • Класифікування об'єктів за допомогою тегів
          • +
          + +
          +
          +
          + +

          Ви можете створити Service одночасно із Deployment, використавши прапорець
          --expose у kubectl.

          +
          +
          +
          + +
          + +
          +
          +

          +
          +
          +
          +
          +
          + +

          Мітки можна прикріпити до об'єктів під час створення або пізніше. Їх можна змінити у будь-який час. А зараз давайте відкриємо наш застосунок за допомогою Service і прикріпимо мітки.
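Орієнтовний приклад роботи з мітками (значення міток та спосіб отримання імені Pod'а умовні):

```shell
# Вибрати Pod'и за наявною міткою
kubectl get pods -l app=kubernetes-bootcamp

# Прикріпити нову мітку до Pod'а
POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD_NAME" version=v1

# Знайти Pod за новою міткою
kubectl get pods -l version=v1
```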

          +
          +
          +
          + +
          +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md b/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md new file mode 100644 index 0000000000000..c6e1a94dc19d1 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/scale/_index.md @@ -0,0 +1,4 @@ +--- +title: Масштабування застосунку +weight: 50 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html new file mode 100644 index 0000000000000..a405a98e929ef --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -0,0 +1,40 @@ +--- +title: Інтерактивний урок - Масштабування застосунку +weight: 20 +--- + + + + + + + + + + + +
          + +
          + +
          +
          + Для роботи з терміналом використовуйте комп'ютер або планшет +
          +
          +
          +
          + + +
          + + + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html new file mode 100644 index 0000000000000..2cf50d65c70b3 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -0,0 +1,145 @@ +--- +title: Запуск вашого застосунку на декількох Pod'ах +weight: 10 +--- + + + + + + + + + +
          + +
          + +
          + +
          + +

          Цілі

          +
            + +
          • Масштабувати застосунок за допомогою kubectl.
          • +
          +
          + +
          + +

          Масштабування застосунку

          + + +

          У попередніх модулях ми створили Deployment і відкрили його для зовнішнього трафіка за допомогою Service. Deployment створив лише один Pod для запуску нашого застосунку. Коли трафік збільшиться, нам доведеться масштабувати застосунок, аби задовольнити вимоги користувачів.

          + + +

          Масштабування досягається шляхом зміни кількості реплік у Deployment'і.
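Наприклад, кількість реплік можна змінити командою kubectl scale (назва Deployment'а та кількість реплік умовні):

```shell
# Масштабувати Deployment до 4 реплік
kubectl scale deployments/kubernetes-bootcamp --replicas=4

# Переконатися, що кількість Pod'ів змінилася
kubectl get deployments
kubectl get pods -o wide
```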

          + +
          +
          +
          + +

          Зміст:

          +
            + +
          • Масштабування Deployment'а
          • +
          +
          +
          + +

          Кількість Pod'ів можна вказати одразу при створенні Deployment'а, скориставшись параметром --replicas команди kubectl run

          +
          +
          +
          +
          + +
          +
          +

          Загальна інформація про масштабування

          +
          +
          + +
          +
          +
          + +
          +
          + +
          + +
          +
          + + +

          Масштабування Deployment'а забезпечує створення нових Pod'ів і їх розподілення по вузлах з доступними ресурсами. Масштабування збільшить кількість Pod'ів відповідно до нового бажаного стану. Kubernetes також підтримує автоматичне масштабування, однак це виходить за межі даного матеріалу. Масштабування до нуля також можливе - це призведе до видалення всіх Pod'ів у визначеному Deployment'і.

          + + +

          Запустивши застосунок на декількох Pod'ах, необхідно розподілити між ними трафік. Services мають інтегрований балансувальник навантаження, що розподіляє мережевий трафік між усіма Pod'ами відкритого Deployment'а. Services безперервно моніторять запущені Pod'и за допомогою кінцевих точок, для того щоб забезпечити надходження трафіка лише на доступні Pod'и.
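Перевірити кінцеві точки, між якими Service розподіляє трафік, можна, наприклад, так (назва Service'а умовна):

```shell
# IP-адреси Pod'ів, на які Service надсилає трафік
kubectl get endpoints kubernetes-bootcamp

# Поле Endpoints також видно у детальному описі Service'а
kubectl describe services/kubernetes-bootcamp
```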

          + +
          +
          +
          + +

          Масштабування досягається шляхом зміни кількості реплік у Deployment'і.

          +
          +
          +
          + +
          + +
          +
          + +

          Після запуску декількох примірників застосунку ви зможете виконувати послідовне оновлення без шкоди для доступності системи. Ми розповімо вам про це у наступному модулі. А зараз давайте повернемось до онлайн термінала і масштабуємо наш застосунок.

          +
          +
          +
          + + + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/_index.md b/content/uk/docs/tutorials/kubernetes-basics/update/_index.md new file mode 100644 index 0000000000000..c253433db1bf8 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/update/_index.md @@ -0,0 +1,4 @@ +--- +title: Оновлення застосунку +weight: 60 +--- diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html new file mode 100644 index 0000000000000..0ac0d169a8bb3 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/update/update-interactive.html @@ -0,0 +1,37 @@ +--- +title: Інтерактивний урок - Оновлення застосунку +weight: 20 +--- + + + + + + + + + + + +
          + +
          + +
          +
          + Для роботи з терміналом використовуйте комп'ютер або планшет +
          +
          +
          +
          + +
          + +
          + + + diff --git a/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html b/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html new file mode 100644 index 0000000000000..9921b6c884894 --- /dev/null +++ b/content/uk/docs/tutorials/kubernetes-basics/update/update-intro.html @@ -0,0 +1,168 @@ +--- +title: Виконання послідовного оновлення (rolling update) +weight: 10 +--- + + + + + + + + + + +
          + +
          + +
          + +
          + +

          Цілі

          +
            + +
          • Виконати послідовне оновлення, використовуючи kubectl.
          • +
          +
          + +
          + +

          Оновлення застосунку

          + + +

          Користувачі очікують від застосунків високої доступності у будь-який час, а розробники - оновлення цих застосунків декілька разів на день. У Kubernetes це стає можливим завдяки послідовному оновленню. Послідовні оновлення дозволяють оновити Deployment без простою, шляхом послідовної заміни одних Pod'ів іншими. Нові Pod'и розподіляються по вузлах з доступними ресурсами.

          + + +

          У попередньому модулі ми масштабували наш застосунок, запустивши його на декількох Pod'ах. Масштабування - необхідна умова для проведення оновлень без шкоди для доступності застосунку. За типовими налаштуваннями, максимальна кількість Pod'ів, недоступних під час оновлення, і максимальна кількість нових Pod'ів, які можуть бути створені, дорівнює одиниці. Обидві опції можна налаштувати в числовому або відсотковому (від кількості Pod'ів) еквіваленті. + У Kubernetes оновлення версіонуються, тому кожне оновлення Deployment'а можна відкотити до попередньої (стабільної) версії.
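Приблизний приклад послідовного оновлення та відкату (назви Deployment'а, контейнера та образу умовні й наведені лише для ілюстрації):

```shell
# Оновити образ контейнера: розпочнеться послідовне оновлення
kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

# Стежити за перебігом оновлення
kubectl rollout status deployments/kubernetes-bootcamp

# За потреби відкотитися до попередньої версії
kubectl rollout undo deployments/kubernetes-bootcamp
```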

          + +
          +
          +
          + +

          Зміст:

          +
            + +
          • Оновлення застосунку
          • +
          +
          +
          + +

          Послідовне оновлення дозволяє оновити Deployment без простою шляхом послідовної заміни одних Pod'ів іншими.

          +
          +
          +
          +
          + +
          +
          +

          Загальна інформація про послідовне оновлення

          +
          +
          +
          +
          +
          + +
          +
          +
          + +
          +
          + + +

          Як і у випадку з масштабуванням, якщо Deployment "відкритий у світ", то під час оновлення Service розподілятиме трафік лише на доступні Pod'и. Під доступним мається на увазі Pod, готовий до експлуатації користувачами застосунку.

          + + +

          Послідовне оновлення дозволяє вам:

          +
            + +
          • Просувати застосунок з одного оточення в інше (шляхом оновлення образу контейнера)
          • + +
          • Відкочуватися до попередніх версій
          • + +
          • Здійснювати безперервну інтеграцію та розгортання застосунків без простою
          • + +
          + +
          +
          +
          + +

          Якщо Deployment "відкритий у світ", то під час оновлення Service розподілятиме трафік лише на доступні Pod'и.

          +
          +
          +
          + +
          + +
          +
          + +

          В інтерактивному уроці ми оновимо наш застосунок до нової версії, а потім відкотимося до попередньої.

          +
          +
          +
          + + + +
          + +
          + + + diff --git a/content/uk/examples/controllers/job.yaml b/content/uk/examples/controllers/job.yaml new file mode 100644 index 0000000000000..b448f2eb81daf --- /dev/null +++ b/content/uk/examples/controllers/job.yaml @@ -0,0 +1,14 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: pi +spec: + template: + spec: + containers: + - name: pi + image: perl + command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + restartPolicy: Never + backoffLimit: 4 + diff --git a/content/uk/examples/controllers/nginx-deployment.yaml b/content/uk/examples/controllers/nginx-deployment.yaml new file mode 100644 index 0000000000000..f7f95deebbb23 --- /dev/null +++ b/content/uk/examples/controllers/nginx-deployment.yaml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.7.9 + ports: + - containerPort: 80 diff --git a/content/uk/examples/controllers/replication.yaml b/content/uk/examples/controllers/replication.yaml new file mode 100644 index 0000000000000..6eff0b9b57642 --- /dev/null +++ b/content/uk/examples/controllers/replication.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: ReplicationController +metadata: + name: nginx +spec: + replicas: 3 + selector: + app: nginx + template: + metadata: + name: nginx + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx + ports: + - containerPort: 80 diff --git a/content/uk/examples/minikube/Dockerfile b/content/uk/examples/minikube/Dockerfile new file mode 100644 index 0000000000000..dd58cb7e7547e --- /dev/null +++ b/content/uk/examples/minikube/Dockerfile @@ -0,0 +1,4 @@ +FROM node:6.14.2 +EXPOSE 8080 +COPY server.js . 
+CMD [ "node", "server.js" ] diff --git a/content/uk/examples/minikube/server.js b/content/uk/examples/minikube/server.js new file mode 100644 index 0000000000000..76345a17d81db --- /dev/null +++ b/content/uk/examples/minikube/server.js @@ -0,0 +1,9 @@ +var http = require('http'); + +var handleRequest = function(request, response) { + console.log('Received request for URL: ' + request.url); + response.writeHead(200); + response.end('Hello World!'); +}; +var www = http.createServer(handleRequest); +www.listen(8080); diff --git a/content/uk/examples/service/networking/dual-stack-default-svc.yaml b/content/uk/examples/service/networking/dual-stack-default-svc.yaml new file mode 100644 index 0000000000000..00ed87ba196be --- /dev/null +++ b/content/uk/examples/service/networking/dual-stack-default-svc.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml new file mode 100644 index 0000000000000..a875f44d6d060 --- /dev/null +++ b/content/uk/examples/service/networking/dual-stack-ipv4-svc.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + ipFamily: IPv4 + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml new file mode 100644 index 0000000000000..2586ec9b39a44 --- /dev/null +++ b/content/uk/examples/service/networking/dual-stack-ipv6-lb-svc.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service + labels: + app: MyApp +spec: + ipFamily: IPv6 + type: LoadBalancer + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml b/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml new file mode 100644 index 0000000000000..2aa0725059bbc --- /dev/null +++ b/content/uk/examples/service/networking/dual-stack-ipv6-svc.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + ipFamily: IPv6 + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 \ No newline at end of file diff --git a/content/vi/docs/home/_index.md b/content/vi/docs/home/_index.md index 97417e27b66ba..1239237a64da3 100644 --- a/content/vi/docs/home/_index.md +++ b/content/vi/docs/home/_index.md @@ -3,7 +3,7 @@ title: Tài liệu Kubernetes noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "Home" main_menu: true weight: 10 diff --git a/content/vi/docs/reference/glossary/annotation.md b/content/vi/docs/reference/glossary/annotation.md new file mode 100644 index 0000000000000..4e53f0800a6de --- /dev/null +++ b/content/vi/docs/reference/glossary/annotation.md @@ -0,0 +1,18 @@ +--- +title: Chú thích (Annotation) +id: annotation +date: 2019-12-16 +full_link: /docs/concepts/overview/working-with-objects/annotations +short_description: > + Một cặp khóa-giá trị được sử dụng để đính kèm tùy ý một metadata không xác định cụ thể vào các đối tượng. 
+ +aka: +tags: + - fundamental +--- + +Một cặp khóa-giá trị được sử dụng để đính kèm tùy ý một metadata không xác định cụ thể vào các đối tượng. + + + +Metadata (siêu dữ liệu) có trong một annotation có thể nhỏ hoặc lớn, có cấu trúc hoặc không, và có thể bao gồm những kí tự không được cho phép như {{< glossary_tooltip text="Labels" term_id="label" >}}. Các công cụ và thư viện ở phía client có thể thu thập những metadata này. \ No newline at end of file diff --git a/content/vi/docs/reference/glossary/api-group.md b/content/vi/docs/reference/glossary/api-group.md new file mode 100644 index 0000000000000..ce2cfd45327b0 --- /dev/null +++ b/content/vi/docs/reference/glossary/api-group.md @@ -0,0 +1,21 @@ +--- +title: API Group +id: api-group +date: 2019-12-16 +full_link: /docs/concepts/overview/kubernetes-api/#api-groups +short_description: > + Một tập những đường dẫn tương đối đến Kubernetes API. + +aka: +tags: + - fundamental + - architecture +--- + +Một tập những đường dẫn tương đối đến Kubernetes API. + + + +Bạn có thể cho phép hay vô hiệu từng API group bằng cách thay đổi cấu hình trên API server của mình. Đồng thời bạn cũng có thể vô hiệu hay kích hoạt các đường dẫn cho những tài nguyên cụ thể. API group đơn giản hóa việc mở rộng Kubernetes API. Nó được chỉ định dưới dạng REST và trong trường `apiVersion` của một đối tượng đã được chuyển hóa. + +- Đọc thêm về [API Group](/docs/concepts/overview/kubernetes-api/#api-groups). \ No newline at end of file diff --git a/content/vi/docs/reference/glossary/applications.md b/content/vi/docs/reference/glossary/applications.md new file mode 100644 index 0000000000000..cc04c4dbda5ff --- /dev/null +++ b/content/vi/docs/reference/glossary/applications.md @@ -0,0 +1,14 @@ +--- +title: Ứng dụng +id: applications +date: 2019-12-16 +full_link: +short_description: > + Một lớp nơi các ứng được đã được container-hóa chạy. + +aka: +tags: + - fundamental +--- + +Một lớp nơi các ứng được đã được containerized chạy. \ No newline at end of file diff --git a/content/vi/docs/reference/glossary/cgroup.md b/content/vi/docs/reference/glossary/cgroup.md new file mode 100644 index 0000000000000..792627ea71ca0 --- /dev/null +++ b/content/vi/docs/reference/glossary/cgroup.md @@ -0,0 +1,18 @@ +--- +title: cgroup (control group) +id: cgroup +date: 2019-12-16 +full_link: +short_description: > + Một nhóm các process trên Linux với sự tùy chọn trong cô lập tài nguyên, trách nhiệm và giới hạn. + +aka: +tags: + - fundamental +--- + +Một nhóm các process trên Linux với sự tùy chọn trong cô lập tài nguyên, trách nhiệm và giới hạn. + + + +cgroup là một tính năng của Linux kernel giúp giới hạn, giao trách nhiệm, và cô lập việc sử dụng các tài nguyên trên máy (CPU, memory, disk I/O, network) cho một tập các process. \ No newline at end of file diff --git a/content/vi/docs/reference/glossary/container-env-variables.md b/content/vi/docs/reference/glossary/container-env-variables.md new file mode 100644 index 0000000000000..697e0abda032d --- /dev/null +++ b/content/vi/docs/reference/glossary/container-env-variables.md @@ -0,0 +1,18 @@ +--- +title: Biến môi trường trong Container +id: container-env-variables +date: 2019-12-16 +full_link: /docs/concepts/containers/container-environment-variables/ +short_description: > + Biến môi trường của container là một cặp Tên-Giá trị nhằm cung cấp những thông tin hữu ích vô trong những containers bên trong một Pod. 
+ +aka: +tags: + - fundamental +--- + +Biến môi trường của container là một cặp Tên-Giá trị nhằm cung cấp những thông tin hữu ích vô trong những containers bên trong một Pod. + + + +Biến môi trường của container cung cấp thông tin cần thiết cho mỗi ứng dụng cùng với những thông tin về những resources quan trọng đối với {{< glossary_tooltip text="Containers" term_id="container" >}} đó. Ví dụ, thông tin chi tiết về file system, thông tin về bản thân của chính container đó, và những resources khác ở trong cluster như điểm kết của một services. \ No newline at end of file diff --git a/content/vi/docs/reference/glossary/controller.md b/content/vi/docs/reference/glossary/controller.md new file mode 100644 index 0000000000000..e0ceedb463e6e --- /dev/null +++ b/content/vi/docs/reference/glossary/controller.md @@ -0,0 +1,22 @@ +--- +title: Vòng điều khiển (Controller) +id: controller +date: 2020-01-09 +full_link: /docs/concepts/architecture/controller/ +short_description: > + Vòng điều khiển lặp lại theo dõi trạng thái chung của cluster thông qua apiserver và tự động thay đổi để hệ thống từ trạng thái hiện tại đạt tới trạng thái mong muốn. + +aka: +tags: + - architecture + - fundamental +--- + +Trong hệ thống Kubernetes, các bộ controllers là các vòng lặp điều khiển theo dõi trạng thái của mỗi {{< glossary_tooltip term_id="cluster" text="cluster">}}, sau đó chúng sẽ tạo hoặc yêu cầu sự thay đổi cần thiết. +Mỗi controller cố thực hiện việc thay đổi để giúp hệ thống chuyển từ trạng thái hiện tại sang trạng thái mong muốn. + + + +Vòng điều khiển lặp lại theo dõi trạng thái chung của cluster thông qua Kubernetes API server (một thần phần của {{< glossary_tooltip term_id="control-plane" >}}). + +Một vài controllers chạy bên trong control plan, cung cấp các vòng lặp điều khiển vận hành nhân gốc của các hoạt động trong hệ thống Kubernetes. Ví dụ: với deployment controller, daemonset controller, namespace controller, và persistent volume controller (và một vài controller còn lại) đều chạy bên trong kube-controller-manager. \ No newline at end of file diff --git a/content/vi/docs/reference/glossary/customresourcedefinition.md b/content/vi/docs/reference/glossary/customresourcedefinition.md new file mode 100644 index 0000000000000..78a5986d3a355 --- /dev/null +++ b/content/vi/docs/reference/glossary/customresourcedefinition.md @@ -0,0 +1,20 @@ +--- +title: CustomResourceDefinition +id: CustomResourceDefinition +date: 2020-01-09 +full_link: docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/ +short_description: > + Những đoạn custom code chỉ định một loại tài nguyên được thêm vào Kubenretes API server mà không cần thiết phải xây dựng một custom server hoàn chỉnh. + +aka: +tags: + - fundamental + - operation + - extension +--- + +Những đoạn custom code chỉ định một loại tài nguyên được thêm vào Kubenretes API server mà không cần thiết phải xây dựng một custom server hoàn chỉnh. + + + +Custom Resource Definitions cho phép bạn có thể mở rộng Kubernetes API cho môi trường của bạn trong trường hợp những API được cung cấp sẵn không đáp ứng được các nhu cầu của bạn. 
\ No newline at end of file diff --git a/content/vi/docs/reference/glossary/label.md b/content/vi/docs/reference/glossary/label.md index df60ac73a4a71..f7a8f87e579d6 100644 --- a/content/vi/docs/reference/glossary/label.md +++ b/content/vi/docs/reference/glossary/label.md @@ -1,5 +1,5 @@ --- -title: Label +title: Nhãn (Label) id: label date: 2019-29-11 full_link: /docs/concepts/overview/working-with-objects/labels diff --git a/content/vi/docs/reference/glossary/selector.md b/content/vi/docs/reference/glossary/selector.md index 9d74599ba853b..3c96cefaba046 100644 --- a/content/vi/docs/reference/glossary/selector.md +++ b/content/vi/docs/reference/glossary/selector.md @@ -1,10 +1,10 @@ --- -title: Selector +title: Bộ chọn (Selector) id: selector date: 2019-29-11 full_link: /docs/concepts/overview/working-with-objects/labels/ short_description: > - Cho phép người dùng lọc ra một danh sách tài nguyên dựa trên labels (nhãn). + Bộ chọn cho phép người dùng lọc ra một danh sách tài nguyên dựa trên labels (nhãn). aka: tags: diff --git a/content/zh/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md b/content/zh/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md new file mode 100644 index 0000000000000..aa11815a9b7c6 --- /dev/null +++ b/content/zh/blog/_posts/2019-10-10-contributor-summit-san-diego-schedule.md @@ -0,0 +1,103 @@ +--- +layout: blog +title: "圣迭戈贡献者峰会日程公布!" +date: 2019-10-10 +slug: contributor-summit-san-diego-schedule +--- + + +作者:Josh Berkus (Red Hat), Paris Pittman (Google), Jonas Rosland (VMware) + +一周前,我们宣布贡献者峰会[开放注册][reg],现在我们已经完成了整个[贡献者峰会的日程安排][schedule]!趁现在还有票,马上抢占你的位置。这里有一个新贡献者研讨会的等待名单。 ([点击这里注册!][reg]) + +除了新贡献者研讨会之外,贡献者峰会还安排了许多精彩的会议,这些会议分布在当前五个贡献者内容的会议室中。由于这是一个上游贡献者峰会,并且我们不经常见面,所以作为一个全球分布的团队,这些会议大多是讨论或动手实践,而不仅仅是演示。我们希望大家互相学习,并于他们的开源代码队友玩的开心。 + +像去年一样,非组织会议将重新开始,会议将在周一上午进行选择。对于最新的热门话题和贡献者想要进行的特定讨论,这是理想的选择。在过去的几年中,我们涵盖了不稳定的测试,集群生命周期,KEP(Kubernetes增强建议),指导,安全性等等。 + +![非组织会议](/images/blog/2019-10-10-contributor-summit-san-diego-schedule/DSCF0806.jpg) + +尽管在每个时间间隙日程安排都包含困难的决定,但我们选择了以下几点,让您体验一下您将在峰会上听到、看到和参与的内容: + +* **[预见]**: SIG组织将分享他们对于明年和以后Kubernetes开发发展方向的认识。 +* **[安全]**: Tim Allclair和CJ Cullen将介绍Kubernetes安全的当前情况。在另一个安全性演讲中,Vallery Lancey将主持有关使我们的平台默认情况下安全的讨论。 +* **[Prow]**: 有兴趣与Prow合作并为Test-Infra做贡献,但不确定从哪里开始? Rob Keilty将帮助您在笔记本电脑上运行Prow测试环境 +* **[Git]**: GitHub的员工将与Christoph Blecker合作,为Kubernetes贡献者分享实用的Git技巧。 +* **[审阅]**: 蒂姆·霍金(Tim Hockin)将分享成为一名出色的代码审阅者的秘密,而乔丹·利吉特(Jordan Liggitt)将进行实时API审阅,以便您可以进行一次或至少了解一次审阅。 +* **[终端用户]**: 应Cheryl Hung邀请,来自CNCF合作伙伴生态的数个终端用户,将回答贡献者的问题,以加强我们的反馈循环。 +* **[文档]**: 与往常一样,SIG-Docs将举办一个为时三个小时的文档撰写研讨会。 + +我们还将向在2019年杰出的贡献者颁发奖项,周一星期一结束时将有一个巨大的见面会,供新的贡献者找到他们的SIG(以及现有的贡献者询问他们的PR)。 + +希望能够在峰会上见到您,并且确保您已经提前[注册][reg]! 
+ + +[圣迭戈团队][team] + +[reg]: https://events19.linuxfoundation.org/events/kubernetes-contributor-summit-north-america-2019/register/ +[schedule]: https://events19.linuxfoundation.org/events/kubernetes-contributor-summit-north-america-2019/program/schedule/ +[预见]: https://sched.co/VvMc +[安全]: https://sched.co/VvMj +[Prow]: https://sched.co/Vv6Z +[Git]: https://sched.co/VvNa +[审阅]: https://sched.co/VutA +[终端用户]: https://sched.co/VvNJ +[文档]: https://sched.co/Vux2 +[team]: http://git.k8s.io/community/events/events-team diff --git a/content/zh/docs/concepts/cluster-administration/addons.md b/content/zh/docs/concepts/cluster-administration/addons.md index be54c6f48bdb3..c58f49b1ac4c7 100644 --- a/content/zh/docs/concepts/cluster-administration/addons.md +++ b/content/zh/docs/concepts/cluster-administration/addons.md @@ -89,12 +89,12 @@ Add-ons 扩展了 Kubernetes 的功能。 ## 基础设施 -* [KubeVirt](https://kubevirt.io/user-guide/docs/latest/administration/intro.html#cluster-side-add-on-deployment) 是可以让 Kubernetes 运行虚拟机的 add-ons 。通常运行在裸机群集上。 +* [KubeVirt](https://kubevirt.io/user-guide/#/installation/installation) 是可以让 Kubernetes 运行虚拟机的 add-ons 。通常运行在裸机群集上。 -你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的 {{< glossary_tooltip text="Node(s)" term_id="node" >}} 上运行,或者优先运行在特定的节点上。有几种方法可以实现这点,推荐的方法都是用[标签选择器](/docs/concepts/overview/working-with-objects/labels/)来进行选择。通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 pod 分散到节点上,而不是将 pod 放置在可以资源不足的节点上等等),但在某些情况下,你可以需要更多控制 pod 停靠的节点,例如,确保 pod 最终落在连接了 SSD 的机器上,或者将来自两个不通的服务且有大量通信的 pod 放置在同一个可用区。 +你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的 {{< glossary_tooltip text="Node(s)" term_id="node" >}} 上运行,或者优先运行在特定的节点上。有几种方法可以实现这点,推荐的方法都是用[标签选择器](/docs/concepts/overview/working-with-objects/labels/)来进行选择。通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 pod 分散到节点上,而不是将 pod 放置在可用资源不足的节点上等等),但在某些情况下,你可以需要更多控制 pod 停靠的节点,例如,确保 pod 最终落在连接了 SSD 的机器上,或者将来自两个不同的服务且有大量通信的 pod 放置在同一个可用区。 {{% /capture %}} @@ -153,7 +153,7 @@ The value of these labels is cloud provider specific and is not guaranteed to be For example, the value of `kubernetes.io/hostname` may be the same as the Node name in some environments and a different value in other environments. 
--> -这些标签的值特定于云供应商的,因此不能保证可靠。例如,`kubernetes.io/hostname` 的值在某些环境中可能与节点名称相同,但在其他环境中可能是一个不同的值。 +这些标签的值是特定于云供应商的,因此不能保证可靠。例如,`kubernetes.io/hostname` 的值在某些环境中可能与节点名称相同,但在其他环境中可能是一个不同的值。 {{< /note >}} 目前有两种类型的节点亲和,分别为 `requiredDuringSchedulingIgnoredDuringExecution` 和 -`preferredDuringSchedulingIgnoredDuringExecution`。你可以视它们为“硬”和“软”,意思是,前者指定了将 pod 调度到一个节点上*必须*满足的规则(就像 `nodeSelector` 但使用更具表现力的语法),后者指定调度器将尝试执行单不能保证的*偏好*。名称的“IgnoredDuringExecution”部分意味着,类似于 `nodeSelector` 的工作原理,如果节点的标签在运行时发生变更,从而不再满足 pod 上的亲和规则,那么 pod 将仍然继续在该节点上运行。将来我们计划提供 `requiredDuringSchedulingRequiredDuringExecution`,它将类似于 `requiredDuringSchedulingIgnoredDuringExecution`,除了它会将 pod 从不再满足 pod 的节点亲和要求的节点上驱逐。 +`preferredDuringSchedulingIgnoredDuringExecution`。你可以视它们为“硬”和“软”,意思是,前者指定了将 pod 调度到一个节点上*必须*满足的规则(就像 `nodeSelector` 但使用更具表现力的语法),后者指定调度器将尝试执行但不能保证的*偏好*。名称的“IgnoredDuringExecution”部分意味着,类似于 `nodeSelector` 的工作原理,如果节点的标签在运行时发生变更,从而不再满足 pod 上的亲和规则,那么 pod 将仍然继续在该节点上运行。将来我们计划提供 `requiredDuringSchedulingRequiredDuringExecution`,它将类似于 `requiredDuringSchedulingIgnoredDuringExecution`,除了它会将 pod 从不再满足 pod 的节点亲和要求的节点上驱逐。 ## 监控计算资源使用 Pod 的资源使用情况被报告为 Pod 状态的一部分。 -如果为集群配置了 [可选监控](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/),则可以从监控系统检索 Pod 资源的使用情况。 +如果为集群配置了可选 [监控工具](/docs/tasks/debug-application-cluster/resource-usage-monitoring/),则可以直接从 +[指标 API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api) 或者监控工具检索 Pod 资源的使用情况。 -- 不要不必要地指定默认值:简单的最小配置会降低错误的可能性。 +- 除非必要,否则不指定默认值:简单的最小配置会降低错误的可能性。 - 部署,它创建一个 ReplicaSet 以确保所需数量的 Pod 始终可用,并指定替换 Pod 的策略(例如 [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)),除了一些显式的[`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)场景之外,几乎总是优先考虑直接创建 Pod。 + Deployment,它创建一个 ReplicaSet 以确保所需数量的 Pod 始终可用,并指定替换 Pod 的策略(例如 [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)),除了一些显式的[`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)场景之外,几乎总是优先考虑直接创建 Pod。 [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) 也可能是合适的。 @@ -257,7 +257,7 @@ The caching semantics of the underlying image provider make even `imagePullPolic - Use label selectors for `get` and `delete` operations instead of specific object names. See the sections on [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively). 
--> - 使用标签选择器进行`get`和`delete`操作,而不是特定的对象名称。 -- 请参阅[标签选择器](/docs/concepts/overview/working-with-objects/labels/#label-selectors)和[有效使用标签]部分(/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)。 +- 请参阅[标签选择器](/docs/concepts/overview/working-with-objects/labels/#label-selectors)和[有效使用标签](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)部分。 @@ -30,63 +30,279 @@ _POD 开销_ 是一个特性,用于计算 Pod 基础设施在容器请求和 ## Pod 开销 -在 Kubernetes 中,Pod 的开销是根据与 Pod 的 [RuntimeClass](/docs/concepts/containers/runtime-class/) 相关联的开销在[准入](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)时设置的。 +在 Kubernetes 中,Pod 的开销是根据与 Pod 的 [RuntimeClass](/docs/concepts/containers/runtime-class/) 相关联的开销在 +[准入](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) 时设置的。 -当启用 Pod 开销时,在调度 Pod 时,除了考虑容器资源请求的总和外,还要考虑 Pod 开销。类似地,Kubelet 将在确定 pod cgroup 的大小和执行 Pod 驱逐排序时包含 Pod 开销。 +当启用 Pod 开销时,在调度 Pod 时,除了考虑容器资源请求的总和外,还要考虑 Pod 开销。类似地,Kubelet 将在确定 Pod cgroup 的大小和执行 Pod 驱逐排序时包含 Pod 开销。 -### 设置 +## 启用 Pod 开销 {#set-up} -您需要确保在集群中启用了 `PodOverhead` [特性门](/docs/reference/command-line-tools-reference/feature-gates/)(默认情况下是关闭的)。这意味着: +您需要确保在集群中启用了 `PodOverhead` [特性门](/docs/reference/command-line-tools-reference/feature-gates/)(在 1.18 默认是开启的),以及一个用于定义 `overhead` 字段的 `RuntimeClass`。 + + +## 使用示例 + + +要使用 PodOverhead 特性,需要一个定义 `overhead` 字段的 RuntimeClass. 作为例子,可以在虚拟机和来宾操作系统中通过一个虚拟化容器运行时来定义 RuntimeClass 如下,其中每个 Pod 大约使用 120MiB: + +```yaml +--- +kind: RuntimeClass +apiVersion: node.k8s.io/v1beta1 +metadata: + name: kata-fc +handler: kata-fc +overhead: + podFixed: + memory: "120Mi" + cpu: "250m" +``` + + +通过指定 `kata-fc` RuntimeClass 处理程序创建的工作负载会将内存和 cpu 开销计入资源配额计算、节点调度以及 Pod cgroup 分级。 + +假设我们运行下面给出的工作负载示例 test-pod: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-pod +spec: + runtimeClassName: kata-fc + containers: + - name: busybox-ctr + image: busybox + stdin: true + tty: true + resources: + limits: + cpu: 500m + memory: 100Mi + - name: nginx-ctr + image: nginx + resources: + limits: + cpu: 1500m + memory: 100Mi +``` + + +在准入阶段 RuntimeClass [准入控制器](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) 更新工作负载的 PodSpec 以包含 + RuntimeClass 中定义的 `overhead`. 如果 PodSpec 中该字段已定义,该 Pod 将会被拒绝。在这个例子中,由于只指定了 RuntimeClass 名称,所以准入控制器更新了 Pod, 包含了一个 `overhead`. + + +在 RuntimeClass 准入控制器之后,可以检验一下已更新的 PodSpec: + +```bash +kubectl get pod test-pod -o jsonpath='{.spec.overhead}' +``` + + +输出: +``` +map[cpu:250m memory:120Mi] +``` + + +如果定义了 ResourceQuata, 则容器请求的总量以及 `overhead` 字段都将计算在内。 + + +当 kube-scheduler 决定在哪一个节点调度运行新的 Pod 时,调度器会兼顾该 Pod 的 `overhead` 以及该 Pod 的容器请求总量。在这个示例中,调度器将资源请求和开销相加,然后寻找具备 2.25 CPU 和 320 MiB 内存可用的节点。 + + +一旦 Pod 调度到了某个节点, 该节点上的 kubelet 将为该 Pod 新建一个 {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}. 底层容器运行时将在这个 pod 中创建容器。 + + +如果该资源对每一个容器都定义了一个限制(定义了受限的 Guaranteed QoS 或者 Bustrable QoS),kubelet 会为与该资源(CPU 的 cpu.cfs_quota_us 以及内存的 memory.limit_in_bytes) +相关的 pod cgroup 设定一个上限。该上限基于容器限制总量与 PodSpec 中定义的 `overhead` 之和。 + + +对于 CPU, 如果 Pod 的 QoS 是 Guaranteed 或者 Burstable, kubelet 会基于容器请求总量与 PodSpec 中定义的 `overhead` 之和设置 `cpu.shares`. 
+ + +请看这个例子,验证工作负载的容器请求: +```bash +kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}' +``` + + +容器请求总计 2000m CPU 和 200MiB 内存: +``` +map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi] +``` + + +对照从节点观察到的情况来检查一下: +```bash +kubectl describe node | grep test-pod -B2 +``` + + +该输出显示请求了 2250m CPU 以及 320MiB 内存,包含了 PodOverhead 在内: +``` + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE + --------- ---- ------------ ---------- --------------- ------------- --- + default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m +``` + + +## 验证 Pod cgroup 限制 + + +在工作负载所运行的节点上检查 Pod 的内存 cgroups. 在接下来的例子中,将在该节点上使用具备 CRI 兼容的容器运行时命令行工具 [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md). +这是一个展示 PodOverhead 行为的进阶示例,用户并不需要直接在该节点上检查 cgroups. + +首先在特定的节点上确定该 Pod 的标识符:ying -- 在 {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} -- 在 {{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}} -- 在每一个 Node 的 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} -- 在任何使用特性门的自定义api服务器中 +```bash +# 在该 Pod 调度的节点上执行如下命令: +POD_ID="$(sudo crictl pods --name test-pod -q)" +``` + +可以依此判断该 Pod 的 cgroup 路径: -{{< note >}} -能够写入运行时类资源的用户能够对工作负载性能产生集群范围的影响。可以使用 Kubernetes 访问控制来限制对此功能的访问。 -有关详细信息,请参见[授权概述](/docs/reference/access-authn-authz/authorization/)。 -{{< /note >}} +```bash +# 在该 Pod 调度的节点上执行如下命令: +sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath +``` + +执行结果的 cgroup 路径中包含了该 Pod 的 `pause` 容器。Pod 级别的 cgroup 即上面的一个目录。 +``` + "cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a" +``` + + +在这个例子中,该 pod 的 cgroup 路径是 `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`。验证内存的 Pod 级别 cgroup 设置: + + +```bash +# 在该 Pod 调度的节点上执行这个命令。 +# 另外,修改 cgroup 的名称以匹配为该 pod 分配的 cgroup。 + cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes +``` + + +和预期的一样是 320 MiB +``` +335544320 +``` + + +### 可观察性 + + +在 [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) 中可以通过 `kube_pod_overhead` 指标来协助确定何时使用 PodOverhead 以及协助观察以一个既定开销运行的工作负载的稳定性。 +该特性在 kube-state-metrics 的 1.9 发行版本中不可用,不过预计将在后续版本中发布。在此之前,用户需要从源代码构建 kube-state-metrics. {{% /capture %}} {{% capture whatsnext %}} - +* [PodOverhead 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) {{% /capture %}} diff --git a/content/zh/docs/concepts/configuration/secret.md b/content/zh/docs/concepts/configuration/secret.md index e889c0ffccb68..f1f0b18809cc3 100644 --- a/content/zh/docs/concepts/configuration/secret.md +++ b/content/zh/docs/concepts/configuration/secret.md @@ -73,9 +73,9 @@ See the [Service Account](/docs/tasks/configure-pod-container/configure-service- information on how Service Accounts work. --> -如果需要,可以禁用或覆盖自动创建和使用API凭据。但是,如果您需要的只是安全地访问 apiserver,我们推荐这样的工作流程。 +如果需要,可以禁用或覆盖自动创建和使用 API 凭据。但是,如果您需要的只是安全地访问 apiserver,我们推荐这样的工作流程。 -参阅 [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) 文档获取关于 Service Account 如何工作的更多信息。 +参阅 [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) 文档获取关于 Service Account 如何工作的更多信息。 -对于某些情况,您可能希望改用 stringData 字段。 此字段允许您将非 base64 编码的字符串直接放入 Secret 中, +对于某些情况,您可能希望改用 stringData 字段。此字段允许您将非 base64 编码的字符串直接放入 Secret 中, 并且在创建或更新 Secret 时将为您编码该字符串。 下面的一个实践示例提供了一个参考,您正在部署使用密钥存储配置文件的应用程序,并希望在部署过程中填补齐配置文件的部分内容。 @@ -291,7 +291,7 @@ retrieving Secrets. 
For example, if you run the following command: --> 然后,您的部署工具可以在执行 `kubectl apply` 之前替换模板的 `{{username}}` 和 `{{password}}` 变量。 -stringData 是只写的便利字段。 检索 Secrets 时永远不会被输出。 例如,如果您运行以下命令: +stringData 是只写的便利字段。检索 Secrets 时永远不会被输出。例如,如果您运行以下命令: ```shell kubectl get secret mysecret -o yaml @@ -322,7 +322,7 @@ If a field is specified in both data and stringData, the value from stringData is used. For example, the following Secret definition: --> -如果在 data 和 stringData 中都指定了字段,则使用 stringData 中的值。 例如,以下是 Secret 定义: +如果在 data 和 stringData 中都指定了字段,则使用 stringData 中的值。例如,以下是 Secret 定义: ```yaml apiVersion: v1 @@ -374,13 +374,9 @@ the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if `-w` option is not available. --> -data和stringData的键必须由字母数字字符 '-', '_' 或者 '.' 组成。 +data 和 stringData 的键必须由字母数字字符 '-', '_' 或者 '.' 组成。 -** 编码注意:** 秘密数据的序列化 JSON 和 YAML 值被编码为base64字符串。 -换行符在这些字符串中无效,因此必须省略。 -在 Darwin / macOS 上使用 `base64` 实用程序时,用户应避免使用 `-b` 选项来分隔长行。 -相反,Linux用户 *应该* 在 `base64` 命令中添加选项 `-w 0`, 或者,如果`-w`选项不可用的情况下, -执行 `base64 | tr -d '\n'`。 +** 编码注意:** 秘密数据的序列化 JSON 和 YAML 值被编码为 base64 字符串。换行符在这些字符串中无效,因此必须省略。在 Darwin/macOS 上使用 `base64` 实用程序时,用户应避免使用 `-b` 选项来分隔长行。相反,Linux用户 *应该* 在 `base64` 命令中添加选项 `-w 0` ,或者,如果 `-w` 选项不可用的情况下,执行 `base64 | tr -d '\n'`。 #### 从生成器创建 Secret -Kubectl 从1.14版本开始支持 [使用 Kustomize 管理对象](/docs/tasks/manage-kubernetes-objects/kustomization/) +Kubectl 从 1.14 版本开始支持 [使用 Kustomize 管理对象](/docs/tasks/manage-kubernetes-objects/kustomization/) 使用此新功能,您还可以从生成器创建一个 Secret,然后将其应用于在 Apiserver 上创建对象。 -生成器应在目录内的“ kustomization.yaml”中指定。 +生成器应在目录内的 `kustomization.yaml` 中指定。 例如,从文件 `./username.txt` 和 `./password.txt` 生成一个 Secret。 @@ -476,8 +472,7 @@ The generated Secrets name has a suffix appended by hashing the contents. This e Secret is generated each time the contents is modified. --> -通过对内容进行序列化后,生成一个后缀作为 Secrets 的名称。 -这样可以确保每次修改内容时都会生成一个新的Secret。 +通过对内容进行序列化后,生成一个后缀作为 Secrets 的名称。这样可以确保每次修改内容时都会生成一个新的 Secret。 {{< /note >}} @@ -540,7 +535,7 @@ kubectl edit secrets mysecret This will open the default configured editor and allow for updating the base64 encoded secret values in the `data` field: --> -这将打开默认配置的编辑器,并允许更新 `data` 字段中的base64编码的 secret: +这将打开默认配置的编辑器,并允许更新 `data` 字段中的 base64 编码的 secret: ``` # Please edit the object below. Lines beginning with a '#' will be ignored, @@ -832,14 +827,12 @@ when new keys are projected to the Pod can be as long as kubelet sync period + c propagation delay, where cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero corespondingly). --> -当已经在 volume 中被消费的 secret 被更新时,被映射的 key 也将被更新。 -Kubelet 在周期性同步时检查被挂载的 secret 是不是最新的。 -但是,它正在使用其本地缓存来获取 Secret 的当前值。 +当已经在 volume 中被消费的 secret 被更新时,被映射的 key 也将被更新。Kubelet 在周期性同步时检查被挂载的 secret 是不是最新的。但是,它正在使用其本地缓存来获取 Secret 的当前值。 缓存的类型可以使用 (`ConfigMapAndSecretChangeDetectionStrategy` 中的 [KubeletConfiguration 结构](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)). 它可以通过基于 ttl 的 watch(默认)传播,也可以将所有请求直接重定向到直接kube-apiserver。 结果,从更新密钥到将新密钥投射到 Pod 的那一刻的总延迟可能与 kubelet 同步周期 + 缓存传播延迟一样长,其中缓存传播延迟取决于所选的缓存类型。 -(它等于观察传播延迟,缓存的ttl或相应为0) +(它等于观察传播延迟,缓存的 ttl 或相应为 0) {{< note >}} @@ -853,6 +846,53 @@ Secret updates. 
{{< /note >}} +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + + +Kubernetes 的 alpha 特性 _不可变的 Secret 和 ConfigMap_ 提供了一个设置各个 Secret 和 ConfigMap 为不可变的选项。 +对于大量使用 Secret 的集群(至少有成千上万各不相同的 Secret 供 Pod 挂载),禁止变更它们的数据有下列好处: + + +- 防止意外(或非预期的)更新导致应用程序中断 +- 通过将 Secret 标记为不可变来关闭 kube-apiserver 对其的监视,以显著地降低 kube-apiserver 的负载来提升集群性能。 + + +使用这个特性需要启用 `ImmutableEmphemeralVolumes` [特性开关](/docs/reference/command-line-tools-reference/feature-gates/) 并将 Secret 或 ConfigMap 的 `immutable` 字段设置为 `true`. 例如: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + ... +data: + ... +immutable: true +``` + + +{{< note >}} +一旦一个 Secret 或 ConfigMap 被标记为不可变,撤销此操作或者更改 `data` 字段的内容都是 _不_ 可能的。 +只能删除并重新创建这个 Secret. 现有的 Pod 将维持对已删除 Secret 的挂载点 - 建议重新创建这些 pod. +{{< /note >}} + -每个 secret 的大小限制为1MB。这是为了防止创建非常大的 secret 会耗尽 apiserver 和 kubelet 的内存。然而,创建许多较小的 secret 也可能耗尽内存。更全面得限制 secret 对内存使用的功能还在计划中。 +每个 secret 的大小限制为 1MB。这是为了防止创建非常大的 secret 会耗尽 apiserver 和 kubelet 的内存。然而,创建许多较小的 secret 也可能耗尽内存。更全面得限制 secret 对内存使用的功能还在计划中。 -Kubelet 仅支持从 API server 获取的 Pod 使用 secret。这包括使用 kubectl 创建的任何 pod,或间接通过 replication controller 创建的 pod。它不包括通过 kubelet `--manifest-url` 标志,其 `--config` 标志或其 REST API 创建的pod(这些不是创建 pod 的常用方法)。 +Kubelet 仅支持从 API server 获取的 Pod 使用 secret。这包括使用 kubectl 创建的任何 pod,或间接通过 replication controller 创建的 pod。它不包括通过 kubelet `--manifest-url` 标志,其 `--config` 标志或其 REST API 创建的 pod(这些不是创建 pod 的常用方法)。 ### Secret 与 Pod 生命周期的联系 -通过 API 创建 Pod 时,不会检查应用的 secret 是否存在。一旦 Pod 被调度,kubelet 就会尝试获取该 secret 的值。如果获取不到该 secret,或者暂时无法与 API server 建立连接,kubelet 将会定期重试。Kubelet 将会报告关于 pod 的事件,并解释它无法启动的原因。一旦获取到 secret,kubelet将创建并装载一个包含它的卷。在所有 pod 的卷被挂载之前,都不会启动 pod 的容器。 +通过 API 创建 Pod 时,不会检查应用的 secret 是否存在。一旦 Pod 被调度,kubelet 就会尝试获取该 secret 的值。如果获取不到该 secret,或者暂时无法与 API server 建立连接,kubelet 将会定期重试。Kubelet 将会报告关于 pod 的事件,并解释它无法启动的原因。一旦获取到 secret,kubelet 将创建并装载一个包含它的卷。在所有 pod 的卷被挂载之前,都不会启动 pod 的容器。 -现在我们可以创建一个使用 ssh 密钥引用 secret 的pod,并在一个卷中使用它: +现在我们可以创建一个使用 ssh 密钥引用 secret 的 pod,并在一个卷中使用它: ```yaml apiVersion: v1 @@ -1403,7 +1443,7 @@ privileged, system-level components. Secret 中的值对于不同的环境来说重要性可能不同,例如对于 Kubernetes 集群内部(例如 service account 令牌)和集群外部来说就不一样。即使一个应用程序可以理解其期望的与之交互的 secret 有多大的能力,但是同一命名空间中的其他应用程序却可能不这样认为。 -由于这些原因,在命名空间中 `watch` 和 `list` secret 的请求是非常强大的功能,应该避免这样的行为,因为列出 secret 可以让客户端检查所有 secret 是否在该命名空间中。在群集中`watch` 和 `list` 所有 secret 的能力应该只保留给最有特权的系统级组件。 +由于这些原因,在命名空间中 `watch` 和 `list` secret 的请求是非常强大的功能,应该避免这样的行为,因为列出 secret 可以让客户端检查所有 secret 是否在该命名空间中。在群集中 `watch` 和 `list` 所有 secret 的能力应该只保留给最有特权的系统级组件。 -您可以在PodSpec中为容器设定容忍标签。以下两个容忍标签都与上面的 `kubectl taint` 创建的污点“匹配”, -因此具有任一容忍标签的Pod都可以将其调度到“ node1”上: +您可以在 PodSpec 中为容器设定容忍标签。以下两个容忍标签都与上面的 `kubectl taint` 创建的污点“匹配”, +因此具有任一容忍标签的Pod都可以将其调度到 `node1` 上: ```yaml tolerations: @@ -109,7 +109,7 @@ will tolerate everything. 存在两种特殊情况: -* 如果一个 toleration 的 `key` 为空且 operator 为 `Exists` ,表示这个 toleration 与任意的 key 、 value 和 effect 都匹配,即这个 toleration 能容忍任意 taint。 +* 如果一个 toleration 的 `key` 为空且 operator 为 `Exists`,表示这个 toleration 与任意的 key 、value 和 effect 都匹配,即这个 toleration 能容忍任意 taint。 ```yaml tolerations: @@ -134,7 +134,7 @@ This is a "preference" or "soft" version of `NoSchedule` -- the system will *try pod that does not tolerate the taint on the node, but it is not required. The third kind of `effect` is `NoExecute`, described later. 
--> -上述例子使用到的 `effect` 的一个值 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。这是“优化”或“软”版本的 `NoSchedule` ——系统会 *尽量* 避免将 pod 调度到存在其不能容忍 taint 的节点上,但这不是强制的。`effect` 的值还可以设置为 `NoExecute` ,下文会详细描述这个值。 +上述例子使用到的 `effect` 的一个值 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。这是“优化”或“软”版本的 `NoSchedule` ——系统会 *尽量* 避免将 pod 调度到存在其不能容忍 taint 的节点上,但这不是强制的。`effect` 的值还可以设置为 `NoExecute`,下文会详细描述这个值。 -然后存在一个 pod,它有两个 toleration +然后存在一个 pod,它有两个 toleration: ```yaml tolerations: @@ -192,7 +192,7 @@ toleration matching the third taint. But it will be able to continue running if already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. --> -在这个例子中,上述 pod 不会被分配到上述节点,因为其没有 toleration 和第三个 taint 相匹配。但是如果在给节点添加 上述 taint 之前,该 pod 已经在上述节点运行,那么它还可以继续运行在该节点上,因为第三个 taint 是三个 taint 中唯一不能被这个 pod 容忍的。 +在这个例子中,上述 pod 不会被分配到上述节点,因为其没有 toleration 和第三个 taint 相匹配。但是如果在给节点添加上述 taint 之前,该 pod 已经在上述节点运行,那么它还可以继续运行在该节点上,因为第三个 taint 是三个 taint 中唯一不能被这个 pod 容忍的。 -通过 taint 和 toleration ,可以灵活地让 pod *避开* 某些节点或者将 pod 从某些节点驱逐。下面是几个使用例子: +通过 taint 和 toleration,可以灵活地让 pod *避开* 某些节点或者将 pod 从某些节点驱逐。下面是几个使用例子: * **专用节点**:如果您想将某些节点专门分配给特定的一组用户使用,您可以给这些节点添加一个 taint(即, - `kubectl taint nodes nodename dedicated=groupName:NoSchedule`),然后给这组用户的 pod 添加一个相对应的 toleration(通过编写一个自定义的[admission controller](/docs/admin/admission-controllers/),很容易就能做到)。拥有上述 toleration 的 pod 就能够被分配到上述专用节点,同时也能够被分配到集群中的其它节点。如果您希望这些 pod 只能被分配到上述专用节点,那么您还需要给这些专用节点另外添加一个和上述 taint 类似的 label (例如:`dedicated=groupName`),同时 还要在上述 admission controller 中给 pod 增加节点亲和性要求上述 pod 只能被分配到添加了 `dedicated=groupName` 标签的节点上。 + `kubectl taint nodes nodename dedicated=groupName:NoSchedule`),然后给这组用户的 pod 添加一个相对应的 toleration(通过编写一个自定义的 [admission controller](/docs/admin/admission-controllers/),很容易就能做到)。拥有上述 toleration 的 pod 就能够被分配到上述专用节点,同时也能够被分配到集群中的其它节点。如果您希望这些 pod 只能被分配到上述专用节点,那么您还需要给这些专用节点另外添加一个和上述 taint 类似的 label (例如:`dedicated=groupName`),同时 还要在上述 admission controller 中给 pod 增加节点亲和性要求上述 pod 只能被分配到添加了 `dedicated=groupName` 标签的节点上。 -* **基于 taint 的驱逐 (beta 特性)**: 这是在每个 pod 中配置的在节点出现问题时的驱逐行为,接下来的章节会描述这个特性 +* **基于 taint 的驱逐**: 这是在每个 pod 中配置的在节点出现问题时的驱逐行为,接下来的章节会描述这个特性 前文我们提到过 taint 的 effect 值 `NoExecute` ,它会影响已经在节点上运行的 pod - * 如果 pod 不能忍受effect 值为 `NoExecute` 的 taint,那么 pod 将马上被驱逐 - * 如果 pod 能够忍受effect 值为 `NoExecute` 的 taint,但是在 toleration 定义中没有指定 `tolerationSeconds`,则 pod 还会一直在这个节点上运行。 - * 如果 pod 能够忍受effect 值为 `NoExecute` 的 taint,而且指定了 `tolerationSeconds`,则 pod 还能在这个节点上继续运行这个指定的时间长度。 + * 如果 pod 不能忍受 effect 值为 `NoExecute` 的 taint,那么 pod 将马上被驱逐 + * 如果 pod 能够忍受 effect 值为 `NoExecute` 的 taint,但是在 toleration 定义中没有指定 `tolerationSeconds`,则 pod 还会一直在这个节点上运行。 + * 如果 pod 能够忍受 effect 值为 `NoExecute` 的 taint,而且指定了 `tolerationSeconds`,则 pod 还能在这个节点上继续运行这个指定的时间长度。 -在版本1.13中,`TaintBasedEvictions` 功能已升级为Beta,并且默认启用,因此污点会自动给节点添加这类 taint,上述基于节点状态 Ready 对 pod 进行驱逐的逻辑会被禁用。 +在节点被驱逐时,节点控制器或者 kubelet 会添加带有 `NoExecute` 效应的相关污点。如果异常状态恢复正常,kubelet 或节点控制器能够移除相关的污点。 + {{< note >}} -使用这个 beta 功能特性,结合 `tolerationSeconds` ,pod 就可以指定当节点出现一个或全部上述问题时还将在这个节点上运行多长的时间。 +使用这个功能特性,结合 `tolerationSeconds`,pod 就可以指定当节点出现一个或全部上述问题时还将在这个节点上运行多长的时间。 [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) 中的 pod 被创建时,针对以下 taint 自动添加的 `NoExecute` 的 toleration 将不会指定 `tolerationSeconds`: * `node.kubernetes.io/unreachable` * `node.kubernetes.io/not-ready` -这保证了出现上述问题时 DaemonSet 中的 pod 永远不会被驱逐,这和 `TaintBasedEvictions` 这个特性被禁用后的行为是一样的。 +这保证了出现上述问题时 DaemonSet 中的 pod 永远不会被驱逐。 -Node 生命周期控制器会自动创建与 Node 条件相对应的污点。 +Node 生命周期控制器会自动创建与 Node 条件相对应的带有 `NoSchedule` 效应的污点。 
同样,调度器不检查节点条件,而是检查节点污点。这确保了节点条件不会影响调度到节点上的内容。用户可以通过添加适当的 Pod 容忍度来选择忽略某些 Node 的问题(表示为 Node 的调度条件)。 -注意,`TaintNodesByCondition` 只会污染具有 `NoSchedule` 设定的节点。 `NoExecute` 效应由 `TaintBasedEviction` 控制, -`TaintBasedEviction` 是 Beta 版功能,自 Kubernetes 1.13 起默认启用。 + +自 Kubernetes 1.8 起, DaemonSet 控制器自动为所有守护进程添加如下 `NoSchedule` toleration 以防 DaemonSet 崩溃: * `node.kubernetes.io/memory-pressure` * `node.kubernetes.io/disk-pressure` diff --git a/content/zh/docs/concepts/containers/container-environment.md b/content/zh/docs/concepts/containers/container-environment.md new file mode 100644 index 0000000000000..26d599f9fd637 --- /dev/null +++ b/content/zh/docs/concepts/containers/container-environment.md @@ -0,0 +1,96 @@ +--- +title: 容器环境 +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + + +本页描述了在容器环境里容器可用的资源。 + +{{% /capture %}} + + +{{% capture body %}} + + +## 容器环境 + +Kubernetes 的容器环境给容器提供了几个重要的资源: + +* 文件系统,其中包含一个[镜像](/docs/concepts/containers/images/) 和一个或多个的[卷](/docs/concepts/storage/volumes/)。 +* 容器自身的信息。 +* 集群中其他对象的信息。 + + +### 容器信息 + +容器的 *hostname* 是它所运行在的 pod 的名称。它可以通过 `hostname` 命令或者调用 libc 中的 [`gethostname`](http://man7.org/linux/man-pages/man2/gethostname.2.html) 函数来获取。 + +Pod 名称和命名空间可以通过 [downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) 使用环境变量。 + +Pod 定义中的用户所定义的环境变量也可在容器中使用,就像在 Docker 镜像中静态指定的任何环境变量一样。 + + +### 集群信息 + +创建容器时正在运行的所有服务的列表都可用作该容器的环境变量。这些环境变量与 Docker 链接的语法匹配。 + +对于名为 *foo* 的服务,当映射到名为 *bar* 的容器时,以下变量是被定义了的: + +```shell +FOO_SERVICE_HOST= +FOO_SERVICE_PORT= +``` + + +Service 具有专用的 IP 地址。如果启用了 [DNS插件](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/),就可以在容器中通过 DNS 来访问。 + +{{% /capture %}} + +{{% capture whatsnext %}} + + +* 学习更多有关[容器生命周期钩子](/docs/concepts/containers/container-lifecycle-hooks/)的知识。 +* 动手获得经验[将处理程序附加到容器生命周期事件](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)。 + +{{% /capture %}} diff --git a/content/zh/docs/concepts/containers/runtime-class.md b/content/zh/docs/concepts/containers/runtime-class.md index 53bb567960b29..6229009612c9b 100644 --- a/content/zh/docs/concepts/containers/runtime-class.md +++ b/content/zh/docs/concepts/containers/runtime-class.md @@ -16,30 +16,21 @@ This page describes the RuntimeClass resource and runtime selection mechanism. --> 本页面描述了 RuntimeClass 资源和运行时的选择机制。 -{{< warning >}} RuntimeClass 特性在 v1.14 版本升级为 beta 特性时引入了不兼容的改变。 -如果你在 v1.14 以前的版本中使用 RuntimeClass,请查阅 -[Upgrading RuntimeClass from Alpha to Beta](#upgrading-runtimeclass-from-alpha-to-beta)。 -{{< /warning >}} +RuntimeClass is a feature for selecting the container runtime configuration. The container runtime +configuration is used to run a Pod's containers. +--> +RuntimeClass 是一个用于选择容器运行时配置的特性,容器运行时配置用于运行 Pod 中的容器。 {{% /capture %}} {{% capture body %}} -## Runtime Class - - -RuntimeClass 是用于选择容器运行时配置的特性,容器运行时配置用于运行 Pod 中的容器。 - + +## 动机 -### 设置 +## 设置 -#### 1. 在节点上配置 CRI 实现 +### 1. 在节点上配置 CRI 实现 RuntimeClass 假设集群中的节点配置是同构的 -(换言之,所有的节点在容器运行时方面的配置是相同的)。 -如果需要支持异构节点,配置方法请参阅下面的 [Scheduling](#scheduling)。 +heterogenous node configurations, see [Scheduling](#scheduling) below.--> +RuntimeClass 假设集群中的节点配置是同构的(换言之,所有的节点在容器运行时方面的配置是相同的)。 +如果需要支持异构节点,配置方法请参阅下面的 [调度](#scheduling)。 {{< /note >}} -#### 2. 创建相应的 RuntimeClass 资源 +### 2. 
创建相应的 RuntimeClass 资源 -### 使用说明 +## 使用说明 如果未指定 `runtimeClassName` ,则将使用默认的 RuntimeHandler,相当于禁用 RuntimeClass 功能特性。 + +### CRI 配置 -关于如何安装 CRI 运行时,请查阅[CRI installation](/docs/setup/production-environment/container-runtimes/)。 +关于如何安装 CRI 运行时,请查阅 [CRI 安装](/docs/setup/production-environment/container-runtimes/)。 #### dockershim @@ -229,7 +223,7 @@ handlers are configured under the [crio.runtime table](https://github.com/kubernetes-sigs/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table): --> 通过 cri-o 的 `/etc/crio/crio.conf` 配置文件来配置运行时 handler。 -handler 需要配置在[crio.runtime 表](https://github.com/kubernetes-sigs/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table) +handler 需要配置在 [crio.runtime 表](https://github.com/kubernetes-sigs/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table) 下方: ``` @@ -244,7 +238,10 @@ https://github.com/kubernetes-sigs/cri-o/blob/master/cmd/crio/config.go 更详细信息,请查阅 containerd 配置文档: https://github.com/kubernetes-sigs/cri-o/blob/master/cmd/crio/config.go -### Scheduling + +## 调度 {{< feature-state for_k8s_version="v1.16" state="beta" >}} @@ -252,11 +249,11 @@ https://github.com/kubernetes-sigs/cri-o/blob/master/cmd/crio/config.go As of Kubernetes v1.16, RuntimeClass includes support for heterogenous clusters through its `scheduling` fields. Through the use of these fields, you can ensure that pods running with this RuntimeClass are scheduled to nodes that support it. To use the scheduling support, you must have -the RuntimeClass [admission controller][] enabled (the default, as of 1.16). +the [RuntimeClass admission controller][] enabled (the default, as of 1.16). --> 在 Kubernetes v1.16 版本里,RuntimeClass 特性引入了 `scheduling` 字段来支持异构集群。 通过该字段,可以确保 pod 被调度到支持指定运行时的节点上。 -该调度支持,需要确保 RuntimeClass [admission controller][] 处于开启状态(1.16 版本默认开启)。 +该调度支持,需要确保 [RuntimeClass admission controller][] 处于开启状态(1.16 版本默认开启)。 +### Pod 开销 -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.18" state="beta" >}} -在 Kubernetes v1.16 版本中,RuntimeClass 开始支持 pod 的 overhead,作为 [`PodOverhead`](/docs/concepts/configuration/pod-overhead) -特性的一部分。 -若要使用 `PodOverhead` 特性,你需要确保 PodOverhead 特性开关处于开启状态(默认为关闭状态)。 +你可以指定与运行 Pod 相关的 _开销_ 资源。声明开销即允许集群(包括调度器)在决策 Pod 和资源时将其考虑在内。 +若要使用 Pod 开销特性,你必须确保 PodOverhead [特性开关](/docs/reference/command-line-tools-reference/feature-gates/) 处于开启状态(默认为启用状态)。 -Pod 的 overhead 在 RuntimeClass 的 `Overhead` 字段定义,该字段用于指定使用 RuntimeClass 特性时带来的 overhead。 +Pod 开销通过 RuntimeClass 的 `overhead` 字段定义。通过使用这些字段,你可以指定使用该 RuntimeClass 运行 Pod 时的开销并确保 Kubernetes 将这些开销计算在内。 -### Upgrading RuntimeClass from Alpha to Beta - - -RuntimeClass Beta 特性包含如下几个改变: - - -- `node.k8s.io` API 组和 `runtimeclasses.node.k8s.io` 资源已从 CRD 中迁移到内置的 API 中; -- `spec` 被放置到 RuntimeClass 中(例如,没有 RuntimeClassSpec 了); -- `runtimeHandler` 字段重命名为 `handler`; -- `handler` 字段需要在所有版本的 API 提供,这意味着 `runtimeHandler` 字段在 Alpha API 中也需要提供; -- `handler` 字段必须是一个合法的 DNS 标识([RFC 1123](https://tools.ietf.org/html/rfc1123)), - 这意味着不可以包含 `.` 字符。合法的 handler 必须满足如下规则:`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`。 - - -**Action Required:** RuntimeClass 特性从 alpha 版本升级到 beta 版本,需要做如下动作: +{{% /capture %}} +{{% capture whatsnext %}} -- RuntimeClass 资源必须在升级到 v1.14 *之后* 再创建,并且 CRD 资源 `runtimeclasses.node.k8s.io` 必须要手动删除: - ``` - kubectl delete customresourcedefinitions.apiextensions.k8s.io runtimeclasses.node.k8s.io - ``` -- RuntimeClasses 中未指定或为空的 `runtimeHandler` 和 使用包含 `.` 符号的 handler 将不再合法, - 必须迁移成合法的 handler 配置(见上)。 - -### Further Reading - - [RuntimeClass 
Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) - [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md) - Read about the [Pod Overhead](/docs/concepts/configuration/pod-overhead/) concept - [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) +--> +- [RuntimeClass 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md) +- [RuntimeClass 调度设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md) +- 阅读关于 [Pod 开销](/docs/concepts/configuration/pod-overhead/) 的概念 +- [PodOverhead 特性设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) {{% /capture %}} diff --git a/content/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md new file mode 100644 index 0000000000000..257406596c7b9 --- /dev/null +++ b/content/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -0,0 +1,356 @@ +--- +title: 设备插件 +description: 使用 Kubernetes 设备插件框架来实现适用于 GPU、NIC、FPGA、InfiniBand 以及类似的需要特定于供应商设置的资源的插件。 +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} +{{< feature-state for_k8s_version="v1.10" state="beta" >}} + + +Kubernetes 提供了一个[设备插件框架](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md),您可以用来将系统硬件资源发布到 {{< glossary_tooltip term_id="kubelet" >}}。 + +供应商可以实现设备插件,由您手动部署或作为 {{< glossary_tooltip term_id="daemonset" >}} 来部署,而不必定制 Kubernetes 本身的代码。目标设备包括 GPU、高性能 NIC、FPGA、InfiniBand 适配器以及其他类似的、可能需要特定于供应商的初始化和设置的计算资源。 + +{{% /capture %}} + +{{% capture body %}} + +## 注册设备插件 + + +kubelet 输出了一个 `Registration` 的 gRPC 服务: + +```gRPC +service Registration { + rpc Register(RegisterRequest) returns (Empty) {} +} +``` + + +设备插件可以通过此 gRPC 服务在 kubelet 进行注册。在注册期间,设备插件需要发送下面几样内容: + + * 设备插件的 Unix 套接字。 + * 设备插件的 API 版本。 + * `ResourceName` 是需要公布的。这里 `ResourceName` 需要遵循[扩展资源命名方案](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources),类似于 `vendor-domain/resourcetype`。(比如 NVIDIA GPU 就被公布为 `nvidia.com/gpu`。) + +成功注册后,设备插件就向 kubelet 发送他所管理的设备列表,然后 kubelet 负责将这些资源发布到 API 服务器,作为 kubelet 节点状态更新的一部分。 + +比如,设备插件在 kubelet 中注册了 `hardware-vendor.example/foo` 并报告了节点上的两个运行状况良好的设备后,节点状态将更新以通告该节点已安装2个 `Foo` 设备并且是可用的。 + + +然后用户需要去请求其他类型的资源的时候,就可以在[Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)规范请求这类设备,但是有以下的限制: + + * 扩展资源仅可作为整数资源使用,并且不能被过量使用 + * 设备不能在容器之间共享 + + + +假设 Kubernetes 集群正在运行一个设备插件,该插件在一些节点上公布的资源为 `hardware-vendor.example/foo`。 +下面就是一个 Pod 示例,请求此资源以运行某演示负载: + +```yaml +--- +apiVersion: v1 +kind: Pod +metadata: + name: demo-pod +spec: + containers: + - name: demo-container-1 + image: k8s.gcr.io/pause:2.0 + resources: + limits: + hardware-vendor.example/foo: 2 +# +# 这个 pod 需要两个 hardware-vendor.example/foo 设备 +# 而且只能够调度到满足需求的 node 上 +# +# 如果该节点中有2个以上的设备可用,其余的可供其他 pod 使用 +``` + + + +## 设备插件的实现 + +设备插件的常规工作流程包括以下几个步骤: + + * 初始化。在这个阶段,设备插件将执行供应商特定的初始化和设置,以确保设备处于就绪状态。 + * 插件使用主机路径 `/var/lib/kubelet/device-plugins/` 下的 Unix socket 启动一个 gRPC 服务,该服务实现以下接口: + + ```gRPC + service DevicePlugin { + // ListAndWatch returns a stream of List of Devices + // Whenever a Device state change or a Device disappears, ListAndWatch + // returns the new list + 
rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} + + // Allocate is called during container creation so that the Device + // Plugin can run device specific operations and instruct Kubelet + // of the steps to make the Device available in the container + rpc Allocate(AllocateRequest) returns (AllocateResponse) {} + } + ``` + + + + * 插件通过 Unix socket 在主机路径 `/var/lib/kubelet/device-plugins/kubelet.sock` 处向 kubelet 注册自身。 + * 成功注册自身后,设备插件将以服务模式运行,在此期间,它将持续监控设备运行状况,并在设备状态发生任何变化时向 kubelet 报告。它还负责响应 `Allocate` gRPC 请求。在`Allocate`期间,设备插件可能还会做一些设备特定的准备;例如 GPU 清理或 QRNG 初始化。如果操作成功,则设备插件将返回 `AllocateResponse`,其中包含用于访问被分配的设备容器运行时的配置。kubelet 将此信息传递到容器运行时。 + + +### 处理 kubelet 重启 + +设备插件应能监测到 kubelet 重启,并且向新的 kubelet 实例来重新注册自己。在当前实现中,当 kubelet 重启的时候,新的 kubelet 实例会删除 `/var/lib/kubelet/device-plugins` 下所有已经存在的 Unix sockets。设备插件需要能够监控到它的 Unix socket 被删除,并且当发生此类事件时重新注册自己。 + + +## 设备插件部署 + +你可以将你的设备插件作为节点操作系统的软件包来部署、作为 DaemonSet 来部署或者手动部署。 + +规范目录 `/var/lib/kubelet/device-plugins` 是需要特权访问的,所以设备插件必须要在被授权的安全的上下文中运行。如果你将设备插件部署为 DaemonSet,`/var/lib/kubelet/device-plugins` 目录必须要在插件的 [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) 中声明作为 {{< glossary_tooltip term_id="volume" >}} 被 mount 到插件中。 + +如果你选择 DaemonSet 方法,你可以通过 Kubernetes 进行以下操作:将设备插件的 Pod 放置在节点上,在出现故障后重新启动 daemon Pod,来进行自动进行升级。 + + +## API 兼容性 + +Kubernetes 设备插件支持还处于 beta 版本。所以在稳定版本出来之前 API 会以不兼容的方式进行更改。作为一个项目,Kubernetes 建议设备插件开发者: + +* 注意未来版本的更改 +* 支持多个版本的设备插件 API,以实现向后/向前兼容性。 + +如果你启用 DevicePlugins 功能,并在需要升级到 Kubernetes 版本来获得较新的设备插件 API 版本的节点上运行设备插件,请在升级这些节点之前先升级设备插件以支持这两个版本。采用该方法将确保升级期间设备分配的连续运行。 + + +## 监控设备插件资源 + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +为了监控设备插件提供的资源,监控代理程序需要能够发现节点上正在使用的设备,并获取元数据来描述哪个指标与容器相关联。设备监控代理暴露给 [Prometheus](https://prometheus.io/) 的指标应该遵循 [Kubernetes Instrumentation Guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/instrumentation.md),使用 `pod`、`namespace` 和 `container` 标签来标识容器。 + + +kubelet 提供了 gRPC 服务来使得正在使用中的设备被发现,并且还未这些设备提供了元数据: + +```gRPC +// PodResourcesLister is a service provided by the kubelet that provides information about the +// node resources consumed by pods and containers on the node +service PodResourcesLister { + rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {} +} +``` + + +gRPC 服务通过 `/var/lib/kubelet/pod-resources/kubelet.sock` 的 UNIX 套接字来提供服务。设备插件资源的监控代理程序可以部署为守护进程或者 DaemonSet。规范的路径 `/var/lib/kubelet/pod-resources` 需要特权来进入,所以监控代理程序必须要在获得授权的安全的上下文中运行。如果设备监控代理以 DaemonSet 形式运行,必须要在插件的 [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) 中声明将 `/var/lib/kubelet/pod-resources` 目录以 {{< glossary_tooltip term_id="volume" >}} 形式被 mount 到容器中。 + +对“PodResources 服务”的支持要求启用 `KubeletPodResources` [特性门控](/docs/reference/command-line-tools-reference/feature-gates/)。从 Kubernetes 1.15 开始默认启用。 + + +## 设备插件与拓扑管理器的集成 + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + +拓扑管理器是 Kubelet 的一个组件,它允许以拓扑对齐方式来调度资源。为了做到这一点,设备插件 API 进行了扩展来包括一个 `TopologyInfo` 结构体。 + +```gRPC +message TopologyInfo { + repeated NUMANode nodes = 1; +} + +message NUMANode { + int64 ID = 1; +} +``` + + +设备插件希望拓扑管理器可以将填充的 TopologyInfo 结构体作为设备注册的一部分以及设备 ID 和设备的运行状况发送回去。然后设备管理器将使用此信息来咨询拓扑管理器并做出资源分配决策。 + +`TopologyInfo` 支持定义 `nodes` 字段,允许为 `nil`(默认)或者是一个 NUMA nodes 的列表。这样就可以使设备插件可以跨越 NUMA nodes 去发布。 + +下面是一个由设备插件为设备填充 `TopologyInfo` 结构体的示例: + +``` +pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: 
[]*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}} +``` + + +## 设备插件示例 {#examples} + +下面是一些设备插件实现的示例: + +* [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin) +* [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) 支持 Intel GPU、FPGA 和 QuickAssist 设备 +* [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) 用于硬件辅助的虚拟化 +* The [NVIDIA GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin) + * 需要 [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) 2.0,允许运行 Docker 容器的时候开启 GPU。 +* [NVIDIA GPU device plugin for Container-Optimized OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu) +* [RDMA device plugin](https://github.com/hustcat/k8s-rdma-device-plugin) +* [Solarflare device plugin](https://github.com/vikaschoudhary16/sfc-device-plugin) +* [SR-IOV Network device plugin](https://github.com/intel/sriov-network-device-plugin) +* [Xilinx FPGA device plugins](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin/trunk) + +{{% /capture %}} +{{% capture whatsnext %}} + + +* 查看 [调度 GPU 资源](/docs/tasks/manage-gpus/scheduling-gpus/) 来学习使用设备插件 +* 查看在 node 上如何[广告扩展资源](/docs/tasks/administer-cluster/extended-resource-node/) +* 阅读如何在 Kubernetes 中如何使用 [TLS 入口的硬件加速](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) +* 学习 [Topology Manager] (/docs/tasks/adminster-cluster/topology-manager/) + +{{% /capture %}} diff --git a/content/zh/docs/concepts/extend-kubernetes/extend-cluster.md b/content/zh/docs/concepts/extend-kubernetes/extend-cluster.md index c3335958cfaa9..ae231de49d29a 100644 --- a/content/zh/docs/concepts/extend-kubernetes/extend-cluster.md +++ b/content/zh/docs/concepts/extend-kubernetes/extend-cluster.md @@ -163,20 +163,17 @@ This diagram shows the extension points in a Kubernetes system. -1. 用户通常使用 `kubectl` 与 Kubernetes API 进行交互。[kubectl 插件](/docs/tasks/extend-kubectl/kubectl-plugins/)扩展了 kubectl 二进制程序。它们只影响个人用户的本地环境,因此不能执行站点范围的策略。 -2. apiserver 处理所有请求。apiserver 中的几种类型的扩展点允许对请求进行身份认证或根据其内容对其进行阻止、编辑内容以及处理删除操作。这些内容在[API 访问扩展](/docs/concepts/overview/extending#api-access-extensions)小节中描述。 - -3. apiserver 提供各种 *资源* 。 *内置的资源种类* ,如 `pods`,由 Kubernetes 项目定义,不能更改。您还可以添加您自己定义的资源或其他项目已定义的资源,称为 自定义资源,如[自定义资源](/docs/concepts/overview/extending#user-defined-types)部分所述。自定义资源通常与 API 访问扩展一起使用。 -4. Kubernetes 调度器决定将 Pod 放置到哪个节点。有几种方法可以扩展调度器。这些内容在 [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) 小节中描述。 - + +1. 用户通常使用 `kubectl` 与 Kubernetes API 进行交互。[kubectl 插件](/docs/tasks/extend-kubectl/kubectl-plugins/)扩展了 kubectl 二进制程序。它们只影响个人用户的本地环境,因此不能执行站点范围的策略。 +2. apiserver 处理所有请求。apiserver 中的几种类型的扩展点允许对请求进行身份认证或根据其内容对其进行阻止、编辑内容以及处理删除操作。这些内容在[API 访问扩展](/docs/concepts/overview/extending#api-access-extensions)小节中描述。 +3. apiserver 提供各种 *资源* 。 *内置的资源种类* ,如 `pods`,由 Kubernetes 项目定义,不能更改。您还可以添加您自己定义的资源或其他项目已定义的资源,称为 自定义资源,如[自定义资源](/docs/concepts/overview/extending#user-defined-types)部分所述。自定义资源通常与 API 访问扩展一起使用。 +4. Kubernetes 调度器决定将 Pod 放置到哪个节点。有几种方法可以扩展调度器。这些内容在 [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) 小节中描述。 5. Kubernetes 的大部分行为都是由称为控制器的程序实现的,这些程序是 API-Server 的客户端。控制器通常与自定义资源一起使用。 6. kubelet 在主机上运行,并帮助 pod 看起来就像在集群网络上拥有自己的 IP 的虚拟服务器。[网络插件](/docs/concepts/overview/extending#network-plugins)让您可以实现不同的 pod 网络。 7. 
kubelet 也挂载和卸载容器的卷。新的存储类型可以通过[存储插件](/docs/concepts/overview/extending#storage-plugins)支持。 @@ -184,6 +181,7 @@ This diagram shows the extension points in a Kubernetes system. + 如果您不确定从哪里开始扩展,此流程图可以提供帮助。请注意,某些解决方案可能涉及多种类型的扩展。 diff --git a/content/zh/docs/concepts/overview/components.md b/content/zh/docs/concepts/overview/components.md index 495982a463e16..8e2ad9dc3ec91 100644 --- a/content/zh/docs/concepts/overview/components.md +++ b/content/zh/docs/concepts/overview/components.md @@ -19,33 +19,47 @@ card: --- --> -{{% capture overview %}} -本文档概述了交付正常运行的 Kubernetes 集群所需的各种二进制组件。 +{{% capture overview %}} +当你部署完 Kubernetes, 即拥有了一个完整的集群。 +{{< glossary_definition term_id="cluster" length="all" prepend="一个 Kubernetes 集群包含">}} + +本文档概述了交付正常运行的 Kubernetes 集群所需的各种组件。 + +这张图表展示了包含所有相互关联组件的 Kubernetes 集群。 + +![Components of Kubernetes](/images/docs/components-of-kubernetes.png) + {{% /capture %}} {{% capture body %}} - -## Master 组件 +## 控制平面组件(Control Plane Components) - -Master 组件提供集群的控制平面。Master 组件对集群进行全局决策(例如,调度),并检测和响应集群事件(例如,当不满足部署的 `replicas` 字段时,启动新的 {{< glossary_tooltip text="pod" term_id="pod">}})。 + +控制平面的组件对集群做出全局决策(比如调度),以及检测和响应集群事件(例如,当不满足部署的 `replicas` 字段时,启动新的 {{< glossary_tooltip text="pod" term_id="pod">}})。 - -Master 组件可以在集群中的任何节点上运行。然而,为了简单起见,安装脚本通常会启动同一个计算机上所有 Master 组件,并且不会在计算机上运行用户容器。请参阅[构建高可用性集群](/docs/admin/high-availability/)示例对于多主机 VM 的安装。 + --> +控制平面组件可以在集群中的任何节点上运行。然而,为了简单起见,设置脚本通常会在同一个计算机上启动所有控制平面组件,并且不会在此计算机上运行用户容器。请参阅[构建高可用性集群](/docs/admin/high-availability/)中对于多主机 VM 的设置示例。 ### kube-apiserver diff --git a/content/zh/docs/concepts/overview/what-is-kubernetes.md b/content/zh/docs/concepts/overview/what-is-kubernetes.md index 24fef5243b994..427073cd097b6 100644 --- a/content/zh/docs/concepts/overview/what-is-kubernetes.md +++ b/content/zh/docs/concepts/overview/what-is-kubernetes.md @@ -111,7 +111,7 @@ Containers are becoming popular because they have many benefits. Some of the con -#### 为什么需要 Kubernetes,它能做什么? +## 为什么需要 Kubernetes,它能做什么? * Kubernetes 不限制支持的应用程序类型。Kubernetes 旨在支持极其多种多样的工作负载,包括无状态、有状态和数据处理工作负载。如果应用程序可以在容器中运行,那么它应该可以在 Kubernetes 上很好地运行。 * Kubernetes 不部署源代码,也不构建您的应用程序。持续集成(CI)、交付和部署(CI/CD)工作流取决于组织的文化和偏好以及技术要求。 -* Kubernetes 不提供应用程序级别的服务作为内置服务,例如中间件(例如,消息中间件)、数据处理框架(例如,Spark)、数据库(例如,mysql)、缓存、集群存储系统(例如,Ceph)。这样的组件可以在 Kubernetes 上运行,并且/或者可以由运行在 Kubernetes 上的应用程序通过可移植机制(例如,开放服务代理)来访问。 +* Kubernetes 不提供应用程序级别的服务作为内置服务,例如中间件(例如,消息中间件)、数据处理框架(例如,Spark)、数据库(例如,mysql)、缓存、集群存储系统(例如,Ceph)。这样的组件可以在 Kubernetes 上运行,并且/或者可以由运行在 Kubernetes 上的应用程序通过可移植机制(例如,[开放服务代理](https://openservicebrokerapi.org/))来访问。 -Kubernetes REST API 中的所有对象都由名称和 UID 明确标识。 +集群中的每一个对象都一个[_名称_](#名称) 来标识在同类资源中的唯一性。 + +每个 Kubernetes 对象也有一个[_UID_](#uids) 来标识在整个集群中的唯一性。 + +比如,在同一个[namespace](/docs/concepts/overview/working-with-objects/namespaces/)中只能命名一个名为 `myapp-1234` 的 Pod, 但是可以命名一个 Pod 和一个 Deployment 同为 `myapp-1234`. 对于非唯一的用户提供的属性,Kubernetes 提供了[标签](/docs/user-guide/labels)和[注释](/docs/concepts/overview/working-with-objects/annotations/)。 @@ -28,7 +42,7 @@ See the [identifiers design doc](https://git.k8s.io/community/contributors/desig {{% capture body %}} ## 名称 @@ -36,16 +50,70 @@ See the [identifiers design doc](https://git.k8s.io/community/contributors/desig {{< glossary_definition term_id="name" length="all" >}} + +以下是比较常见的三种资源命名约束。 + + -按照惯例,Kubernetes 资源的名称最大长度应为 253 个字符,由小写字母、数字、`-`和 `.` 组成,但某些资源有更具体的限制。 +### DNS 子域名 + +某些资源类型需要一个 name 来作为一个 DNS 子域名,见定义 [RFC 1123](https://tools.ietf.org/html/rfc1123)。也就是命名必须满足如下规则: + +- 不能超过253个字符 +- 只能包含字母数字,以及'-' 和 '.' 
+- 须以字母数字开头 +- 须以字母数字结尾 -例如,下面是一个配置文件,Pod 名为 `nginx-demo`,容器名为 `nginx`: +### DNS 标签名称 + +某些资源类型需要其名称遵循 DNS 标签的标准,见[RFC 1123](https://tools.ietf.org/html/rfc1123)。也就是命名必须满足如下规则: + +- 最多63个字符 +- 只能包含字母数字,以及'-' +- 须以字母数字开头 +- 须以字母数字结尾 + + + +### Path 部分名称 + +一些用与 Path 部分的资源类型要求名称能被安全的 encode。换句话说,其名称不能含有这些字符 "."、".."、"/"或"%"。 + + + +下面是一个名为`nginx-demo`的 Pod 的配置清单: ```yaml apiVersion: v1 @@ -55,13 +123,39 @@ metadata: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 ``` +{{< note >}} + + +某些资源类型可能有其相应的附加命名约束。 + +{{< /note >}} + ## UIDs {{< glossary_definition term_id="uid" length="all" >}} + +Kubernetes UIDs 是通用的唯一标识符 (也叫 UUIDs). +UUIDs 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667. + +{{% /capture %}} + +{{% capture whatsnext %}} + +* 阅读关于 Kubernetes [labels](/docs/concepts/overview/working-with-objects/labels/)。 +* 更多参见 [Kubernetes 标识符和名称设计文档](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md). + {{% /capture %}} diff --git a/content/zh/docs/concepts/overview/working-with-objects/object-management.md b/content/zh/docs/concepts/overview/working-with-objects/object-management.md index 27ab353f0eb88..a6225ff4c823d 100644 --- a/content/zh/docs/concepts/overview/working-with-objects/object-management.md +++ b/content/zh/docs/concepts/overview/working-with-objects/object-management.md @@ -138,7 +138,7 @@ types whose specs are updated independently of the configuration file. Services of type `LoadBalancer`, for example, have their `externalIPs` field updated independently from the configuration by the cluster. --> -`replace` 命令式命令将现有规范替换为新提供的规范,并删除对配置文件中缺少的对象的所有更改。此方法不应与规范独立于配置文件进行更新的资源类型一起使用。比如类型为 `LoadBalancer` 的服务,它的 `externalIPs` 字段就是独立于集群的配置进行更新。 +`replace` 命令式命令将现有规范替换为新提供的规范,并删除对配置文件中缺少的对象的所有更改。此方法不应与规范独立于配置文件进行更新的资源类型一起使用。比如类型为 `LoadBalancer` 的服务,它的 `externalIPs` 字段就是独立于集群配置进行更新。 {{< /warning >}} -创建在配置文件中定义的对象: +创建配置文件中定义的对象: ```sh kubectl create -f nginx.yaml @@ -158,7 +158,7 @@ kubectl create -f nginx.yaml -删除在两个配置文件中定义的对象: +删除两个配置文件中定义的对象: ```sh kubectl delete -f nginx.yaml -f redis.yaml @@ -168,7 +168,7 @@ kubectl delete -f nginx.yaml -f redis.yaml Update the objects defined in a configuration file by overwriting the live configuration: --> -通过覆盖实时配置来更新配置文件中定义的对象: +通过覆盖活动配置来更新配置文件中定义的对象: ```sh kubectl replace -f nginx.yaml @@ -203,7 +203,7 @@ Disadvantages compared to imperative commands: - Object configuration requires the additional step of writing a YAML file. 
--> - 对象配置需要对对象架构有基本的了解。 -- 对象配置需要额外的写 YAML 文件的步骤。 +- 对象配置需要额外的步骤来编写 YAML 文件。 -- 命令式对象配置针对文件而不是目录上效果最佳。 -- 对活动对象的更新必须反映在配置文件中,否则将在下一次替换是丢失。 +- 命令式对象配置更适合文件,而非目录。 +- 对活动对象的更新必须反映在配置文件中,否则会在下一次替换时丢失。 -使用声明式对象配置时,用户对本地存储的对象配置文件进行操作,但是用户未定义要对该文件执行的操作。会自动通过 `kubectl` 按对象检测来创建、更新和删除对象。这使得可以在目录上工作,其中可能需要对不同的对象执行不同的操作。 +使用声明式对象配置时,用户对本地存储的对象配置文件进行操作,但是用户未定义要对该文件执行的操作。`kubectl` 会自动检测每个文件的创建、更新和删除操作。这使得配置可以在目录上工作,根据目录中配置文件对不同的对象执行不同的操作。 {{< note >}} -- 即使未将对活动对象所做的更改未合并回到配置文件中,也将保留这些更改。 -- 声明性对象配置更好地支持对目录进行操作并自动检测每个对象的操作类型(创建,修补,删除)。 +- 对活动对象所做的更改即使未合并到配置文件中,也会被保留下来。 +- 声明性对象配置更好地支持对目录进行操作并自动检测每个文件的操作类型(创建,修补,删除)。 -- 声明式对象配置难于调试并且出现异常时难以理解。 -- 使用差异的部分更新会创建复杂的合并和补丁操作。 +- 声明式对象配置难于调试并且出现异常时结果难以理解。 +- 使用 diff 产生的部分更新会创建复杂的合并和补丁操作。 {{% /capture %}} diff --git a/content/zh/docs/concepts/policy/limit-range.md b/content/zh/docs/concepts/policy/limit-range.md new file mode 100644 index 0000000000000..aa217f33117d7 --- /dev/null +++ b/content/zh/docs/concepts/policy/limit-range.md @@ -0,0 +1,148 @@ +--- +title: 限制范围 +content_template: templates/concept +weight: 10 +--- + +{{% capture overview %}} + + + +默认情况下, Kubernetes 集群上的容器运行使用的[计算资源](/docs/user-guide/compute-resources) 没有限制。 +使用资源配额,集群管理员可以以命名空间为单位,限制其资源的使用与创建。 +在命名空间中,一个 Pod 或 Container 最多能够使用命名空间的资源配额所定义的 CPU 和内存用量。有人担心,一个 Pod 或 Container 会垄断所有可用的资源。LimitRange 是在命名空间内限制资源分配(给多个 Pod 或 Container)的策略对象。 + +{{% /capture %}} + + +{{% capture body %}} + + + +一个 _LimitRange_ 对象提供的限制能够做到: + +- 在一个命名空间中实施对每个 Pod 或 Container 最小和最大的资源使用量的限制。 +- 在一个命名空间中实施对每个 PersistentVolumeClaim 能申请的最小和最大的存储空间大小的限制。 +- 在一个命名空间中实施对一种资源的申请值和限制值的比值的控制。 +- 设置一个命名空间中对计算资源的默认申请/限制值,并且自动的在运行时注入到多个 Container 中。 + + + +## 启用 LimitRange + + + +对 LimitRange 的支持默认在多数 Kubernetes 发行版中启用。当 apiserver 的 `--enable-admission-plugins` 标志的参数包含 `LimitRanger` 准入控制器时即启用。 + + + +当一个命名空间中有 LimitRange 时,实施该 LimitRange 所定义的限制。 + + + +LimitRange 的名称必须是合法的 [DNS 子域名](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 + + + +### 限制范围总览 + + + +- 管理员在一个命名空间内创建一个 `LimitRange` 对象。 +- 用户在命名空间内创建 Pod ,Container 和 PersistentVolumeClaim 等资源。 +- `LimitRanger` 准入控制器对所有没有设置计算资源需求的 Pod 和 Container 设置默认值与限制值,并跟踪其使用量以保证没有超出命名空间中存在的任意 LimitRange 对象中的最小、最大资源使用量以及使用量比值。 +- 若创建或更新资源(Pod, Container, PersistentVolumeClaim)违反了 LimitRange 的约束,向 API 服务器的请求会失败,并返回 HTTP 状态码 `403 FORBIDDEN` 与描述哪一项约束被违反的消息。 +- 若命名空间中的 LimitRange 启用了对 `cpu` 和 `memory` 的限制,用户必须指定这些值的需求使用量与限制使用量。否则,系统将会拒绝创建 Pod。 +- LimitRange 的验证仅在 Pod 准入阶段进行,不对正在运行的 Pod 进行验证。 + + + +能够使用限制范围创建策略的例子有: + +- 在一个有两个节点,8 GiB 内存与16个核的集群中,限制一个命名空间的 Pod 申请 100m 单位,最大 500m 单位的 CPU,以及申请 200Mi,最大 600Mi 的内存。 +- 为 spec 中没有 cpu 和内存需求值的 Container 定义默认 CPU 限制值与需求值 150m,内存默认需求值 300Mi。 + + + +在命名空间的总限制值小于 Pod 或 Container 的限制值的总和的情况下,可能会产生资源竞争。在这种情况下,将不会创建 Container 或 Pod。 + + + +竞争和对 LimitRange 的改变都不会影响任何已经创建了的资源。 + + + +## 示例 + + + +- 查看[如何配置每个命名空间最小和最大的 CPU 约束](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)。 +- 查看[如何配置每个命名空间最小和最大的内存约束](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)。 +- 查看[如何配置每个命名空间默认的 CPU 申请值和限制值](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)。 +- 查看[如何配置每个命名空间默认的内存申请值和限制值](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)。 +- 查看[如何配置每个命名空间最小和最大存储使用量](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage)。 +- 查看[配置每个命名空间的配额的详细例子](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)。 + +{{% /capture %}} + +{{% capture whatsnext %}} + + + +查看 [LimitRanger 
设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md)获取更多信息。 + +{{% /capture %}} diff --git a/content/zh/docs/concepts/scheduling-eviction/_index.md b/content/zh/docs/concepts/scheduling-eviction/_index.md new file mode 100644 index 0000000000000..742a054e4071b --- /dev/null +++ b/content/zh/docs/concepts/scheduling-eviction/_index.md @@ -0,0 +1,4 @@ +--- +title: "调度和驱逐(Scheduling and Eviction)" +weight: 90 +--- diff --git a/content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md b/content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md new file mode 100644 index 0000000000000..cc1bbda060e26 --- /dev/null +++ b/content/zh/docs/concepts/scheduling-eviction/kube-scheduler.md @@ -0,0 +1,169 @@ +--- +title: Kubernetes 调度器 +content_template: templates/concept +weight: 50 +--- + + +{{% capture overview %}} + + +在 Kubernetes 中,_调度_ 是指将 {{< glossary_tooltip text="Pod" term_id="pod" >}} 放置到合适的 +{{< glossary_tooltip text="Node" term_id="node" >}} 上,然后对应 Node 上的 {{< glossary_tooltip term_id="kubelet" >}} 才能够运行这些 pod。 + +{{% /capture %}} + +{{% capture body %}} + +## 调度概览 {#scheduling} + + +调度器通过 kubernetes 的 watch 机制来发现集群中新创建且尚未被调度到 Node 上的 Pod。调度器会将发现的每一个未调度的 Pod 调度到一个合适的 Node 上来运行。调度器会依据下文的调度原则来做出调度选择。 + + +如果你想要理解 Pod 为什么会被调度到特定的 Node 上,或者你想要尝试实现一个自定义的调度器,这篇文章将帮助你了解调度。 + + +## kube-scheduler + + +[kube-scheduler](/zh/docs/reference/command-line-tools-reference/kube-scheduler/) 是 Kubernetes 集群的默认调度器,并且是集群 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 的一部分。如果你真的希望或者有这方面的需求,kube-scheduler 在设计上是允许你自己写一个调度组件并替换原有的 kube-scheduler。 + + +对每一个新创建的 Pod 或者是未被调度的 Pod,kube-scheduler 会选择一个最优的 Node 去运行这个 Pod。然而,Pod 内的每一个容器对资源都有不同的需求,而且 Pod 本身也有不同的资源需求。因此,Pod 在被调度到 Node 上之前,根据这些特定的资源调度需求,需要对集群中的 Node 进行一次过滤。 + + +在一个集群中,满足一个 Pod 调度请求的所有 Node 称之为 _可调度节点_。如果没有任何一个 Node 能满足 Pod 的资源请求,那么这个 Pod 将一直停留在未调度状态直到调度器能够找到合适的 Node。 + + +调度器先在集群中找到一个 Pod 的所有可调度节点,然后根据一系列函数对这些可调度节点打分,然后选出其中得分最高的 Node 来运行 Pod。之后,调度器将这个调度决定通知给 kube-apiserver,这个过程叫做 _绑定_。 + + +在做调度决定时需要考虑的因素包括:单独和整体的资源请求、硬件/软件/策略限制、亲和以及反亲和要求、数据局域性、负载间的干扰等等。 + + +## kube-scheduler 调度流程 {#kube-scheduler-implementation} + + +kube-scheduler 给一个 pod 做调度选择包含两个步骤: + +1. 过滤 +2. 打分 + + +过滤阶段会将所有满足 Pod 调度需求的 Node 选出来。例如,PodFitsResources 过滤函数会检查候选 Node 的可用资源能否满足 Pod 的资源请求。在过滤之后,得出一个 Node 列表,里面包含了所有可调度节点;通常情况下,这个 Node 列表包含不止一个 Node。如果这个列表是空的,代表这个 Pod 不可调度。 + + +在打分阶段,调度器会为 Pod 从所有可调度节点中选取一个最合适的 Node。根据当前启用的打分规则,调度器会给每一个可调度节点进行打分。 + + +最后,kube-scheduler 会将 Pod 调度到得分最高的 Node 上。如果存在多个得分最高的 Node,kube-scheduler 会从中随机选取一个。 + + +支持以下两种方式配置调度器的过滤和打分行为: + + +1. [调度策略](/docs/reference/scheduling/policies) 允许你配置过滤的 _谓词(Predicates)_ 和打分的 _优先级(Priorities)_ 。 +2. 
[调度配置](/docs/reference/scheduling/profiles) 允许你配置实现不同调度阶段的插件,包括:`QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit` 等等。你也可以配置 kube-scheduler 运行不同的配置文件。 + +{{% /capture %}} +{{% capture whatsnext %}} + + + +* 阅读关于 [调度器性能调优](/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) +* 阅读关于 [Pod 拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +* 阅读关于 kube-scheduler 的 [参考文档](/zh/docs/reference/command-line-tools-reference/kube-scheduler/) +* 了解关于 [配置多个调度器](/zh/docs/tasks/administer-cluster/configure-multiple-schedulers/) 的方式 +* 了解关于 [拓扑结构管理策略](/zh/docs/tasks/administer-cluster/topology-manager/) +* 了解关于 [Pod 额外开销](/zh/docs/concepts/configuration/pod-overhead/) +{{% /capture %}} diff --git a/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md new file mode 100644 index 0000000000000..ee0b285712c29 --- /dev/null +++ b/content/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md @@ -0,0 +1,276 @@ +--- +title: 调度器性能调优 +content_template: templates/concept +weight: 70 +--- + + +{{% capture overview %}} + +{{< feature-state for_k8s_version="1.14" state="beta" >}} + + +作为 kubernetes 集群的默认调度器,[kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) 主要负责将 Pod 调度到集群的 Node 上。 + + +在一个集群中,满足一个 Pod 调度请求的所有 Node 称之为 _可调度_ Node。调度器先在集群中找到一个 Pod 的可调度 Node,然后根据一系列函数对这些可调度 Node打分,之后选出其中得分最高的 Node 来运行 Pod。最后,调度器将这个调度决定告知 kube-apiserver,这个过程叫做 _绑定_。 + + +这篇文章将会介绍一些在大规模 Kubernetes 集群下调度器性能优化的方式。 + +{{% /capture %}} + +{{% capture body %}} + + +在大规模集群中,你可以调节调度器的表现来平衡调度的延迟(新 Pod 快速就位)和精度(调度器很少做出糟糕的放置决策)。 + +你可以通过设置 kube-scheduler 的 `percentageOfNodesToScore` 来配置这个调优设置。这个 KubeSchedulerConfiguration 设置决定了调度集群中节点的阈值。 + + +### 设置阈值 + + +`percentageOfNodesToScore` 选项接受从 0 到 100 之间的整数值。0 值比较特殊,表示 kube-scheduler 应该使用其编译后的默认值。 +如果你设置 `percentageOfNodesToScore` 的值超过了 100,kube-scheduler 的表现等价于设置值为 100。 + + +要修改这个值,编辑 kube-scheduler 的配置文件(通常是 `/etc/kubernetes/config/kube-scheduler.yaml`),然后重启调度器。 + + +修改完成后,你可以执行 +```bash +kubectl get componentstatuses +``` + + +来检查该 kube-scheduler 组件是否健康。输出类似如下: +``` +NAME STATUS MESSAGE ERROR +controller-manager Healthy ok +scheduler Healthy ok +... +``` + + +## 节点打分阈值 {#percentage-of-nodes-to-score} + + +要提升调度性能,kube-scheduler 可以在找到足够的可调度节点之后停止查找。在大规模集群中,比起考虑每个节点的简单方法相比可以节省时间。 + + +你可以使用整个集群节点总数的百分比作为阈值来指定需要多少节点就足够。 kube-scheduler 会将它转换为节点数的整数值。在调度期间,如果 +kube-scheduler 已确认的可调度节点数足以超过了配置的百分比数量,kube-scheduler 将停止继续查找可调度节点并继续进行 [打分阶段](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation)。 + + +[调度器如何遍历节点](#how-the-scheduler-iterates-over-nodes) 详细介绍了这个过程。 + + +### 默认阈值 + + +如果你不指定阈值,Kubernetes 使用线性公式计算出一个比例,在 100-node 集群下取 50%,在 5000-node 的集群下取 10%。 +这个自动设置的参数的最低值是 5%。 + + +这意味着,调度器至少会对集群中 5% 的节点进行打分,除非用户将该参数设置的低于 5。 + + +如果你想让调度器对集群内所有节点进行打分,则将 `percentageOfNodesToScore` 设置为 100。 + + +## 示例 + + +下面就是一个将 `percentageOfNodesToScore` 参数设置为 50% 的例子。 + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1alpha1 +kind: KubeSchedulerConfiguration +algorithmSource: + provider: DefaultProvider + +... 
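+# (假设性示例)若集群共有 500 个节点,下面的设置意味着
+# 调度器在找到约 250 个可行节点后即停止过滤,进入打分阶段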
+ +percentageOfNodesToScore: 50 +``` + + +### 调节 percentageOfNodesToScore 参数 + + +`percentageOfNodesToScore` 的值必须在 1 到 100 之间,而且其默认值是通过集群的规模计算得来的。 +另外,还有一个 50 个 Node 的最小值是硬编码在程序中。 + + +{{< note >}} +当集群中的可调度节点少于 50 个时,调度器仍然会去检查所有的 Node,因为可调度节点太少,不足以停止调度器最初的过滤选择。 + +同理,在小规模集群中,如果你将 `percentageOfNodesToScore` 设置为一个较低的值,则没有或者只有很小的效果。 + +如果集群只有几百个节点或者更少,请保持这个配置的默认值。改变基本不会对调度器的性能有明显的提升。 + +{{< /note >}} + + +值得注意的是,该参数设置后可能会导致只有集群中少数节点被选为可调度节点,很多 node 都没有进入到打分阶段。这样就会造成一种后果,一个本来可以在打分阶段得分很高的 Node 甚至都不能进入打分阶段。 + +由于这个原因,这个参数不应该被设置成一个很低的值。通常的做法是不会将这个参数的值设置的低于 10。很低的参数值一般在调度器的吞吐量很高且对 node 的打分不重要的情况下才使用。换句话说,只有当你更倾向于在可调度节点中任意选择一个 Node 来运行这个 Pod 时,才使用很低的参数设置。 + + +### 调度器做调度选择的时候如何覆盖所有的 Node + + +如果你想要理解这一个特性的内部细节,那么请仔细阅读这一章节。 + + +在将 Pod 调度到 Node 上时,为了让集群中所有 Node 都有公平的机会去运行这些 Pod,调度器将会以轮询的方式覆盖全部的 Node。你可以将 Node 列表想象成一个数组。调度器从数组的头部开始筛选可调度节点,依次向后直到可调度节点的数量达到 `percentageOfNodesToScore` 参数的要求。在对下一个 Pod 进行调度的时候,前一个 Pod 调度筛选停止的 Node 列表的位置,将会来作为这次调度筛选 Node 开始的位置。 + + +如果集群中的 Node 在多个区域,那么调度器将从不同的区域中轮询 Node,来确保不同区域的 Node 接受可调度性检查。如下例,考虑两个区域中的六个节点: + +``` +Zone 1: Node 1, Node 2, Node 3, Node 4 +Zone 2: Node 5, Node 6 +``` + + +调度器将会按照如下的顺序去评估 Node 的可调度性: + +``` +Node 1, Node 5, Node 2, Node 6, Node 3, Node 4 +``` + + +在评估完所有 Node 后,将会返回到 Node 1,从头开始。 + +{{% /capture %}} diff --git a/content/zh/docs/concepts/configuration/scheduling-framework.md b/content/zh/docs/concepts/scheduling-eviction/scheduling-framework.md similarity index 50% rename from content/zh/docs/concepts/configuration/scheduling-framework.md rename to content/zh/docs/concepts/scheduling-eviction/scheduling-framework.md index 40ad637ed456c..bebea8d02e294 100644 --- a/content/zh/docs/concepts/configuration/scheduling-framework.md +++ b/content/zh/docs/concepts/scheduling-eviction/scheduling-framework.md @@ -3,7 +3,7 @@ reviewers: - ahg-g title: 调度框架 content_template: templates/concept -weight: 70 +weight: 60 --- @@ -21,7 +21,7 @@ weight: 70 {{< feature-state for_k8s_version="1.15" state="alpha" >}} -调度框架是 Kubernetes Scheduler 的一种新的可插入架构,可以简化调度器的自定义。它向现有的调度器增加了一组新的“插件” API。插件被编译到调度器程序中。这些 API 允许大多数调度功能以插件的形式实现,同时使调度“核心”保持简单且可维护。请参考[调度框架的设计提案][kep]获取框架设计的更多技术信息。 +调度框架是 Kubernetes Scheduler 的一种可插入架构,可以简化调度器的自定义。它向现有的调度器增加了一组新的“插件” API。插件被编译到调度器程序中。这些 API 允许大多数调度功能以插件的形式实现,同时使调度“核心”保持简单且可维护。请参考[调度框架的设计提案][kep]获取框架设计的更多技术信息。 [kep]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md @@ -57,7 +57,7 @@ Each attempt to schedule one Pod is split into two phases, the **scheduling cycle** and the **binding cycle**. 
--> -每次调度一个 pod 的尝试都分为两个阶段,即**调度周期**和**绑定周期**。 +每次调度一个 Pod 的尝试都分为两个阶段,即 **调度周期** 和 **绑定周期**。 -调度周期为 pod 选择一个节点,绑定周期将该决策应用于集群。调度周期和绑定周期一起被称为“调度上下文”。 +调度周期为 Pod 选择一个节点,绑定周期将该决策应用于集群。调度周期和绑定周期一起被称为“调度上下文”。 -如果确定 pod 不可调度或者存在内部错误,则可以终止调度周期或绑定周期。Pod 将返回队列并重试。 +如果确定 Pod 不可调度或者存在内部错误,则可以终止调度周期或绑定周期。Pod 将返回队列并重试。 -下图显示了一个 pod 的调度上下文以及调度框架公开的扩展点。在此图片中,“过滤器”等同于“断言”,“评分”相当于“优先级函数”。 +下图显示了一个 Pod 的调度上下文以及调度框架公开的扩展点。在此图片中,“过滤器”等同于“断言”,“评分”相当于“优先级函数”。 - {{< figure src="/images/docs/scheduling-framework-extensions.png" title="调度框架扩展点" >}} + - -### 队列排序 +### 队列排序 {#queue-sort} -队列排序插件用于对调度队列中的 pod 进行排序。队列排序插件本质上将提供 "less(Pod1, Pod2)" 函数。一次只能启动一个队列插件。 +队列排序插件用于对调度队列中的 Pod 进行排序。队列排序插件本质上提供 "less(Pod1, Pod2)" 函数。一次只能启动一个队列插件。 -### 前置过滤 +### 前置过滤 {#pre-filter} -前置过滤插件用于预处理 pod 的相关信息,或者检查集群或 pod 必须满足的某些条件。如果前置过滤插件返回错误,则调度周期将终止。 +前置过滤插件用于预处理 Pod 的相关信息,或者检查集群或 Pod 必须满足的某些条件。如果 PreFilter 插件返回错误,则调度周期将终止。 -过滤插件用于过滤出不能运行该 pod 的节点。对于每个节点,调度器将按照其配置顺序调用这些过滤插件。如果任何过滤插件将节点标记为不可行,则不会为该节点调用剩下的过滤插件。节点可以被同时进行评估。 - - - -### 后置过滤 - - - -后置过滤是一个信息性的扩展点。通过过滤阶段的节点列表将调用这些后置过滤。插件将使用这些数据来更新内部的状态或者生成日志/指标。 - - +过滤插件用于过滤出不能运行该 Pod 的节点。对于每个节点,调度器将按照其配置顺序调用这些过滤插件。如果任何过滤插件将节点标记为不可行,则不会为该节点调用剩下的过滤插件。节点可以被同时进行评估。 -**注意:**希望执行“预评分”工作的插件应该使用后置过滤扩展点。 + +### 前置评分 {#pre-score} - + +前置评分插件用于执行 “前置评分” 工作,即生成一个可共享状态供评分插件使用。如果 PreScore 插件返回错误,则调度周期将终止。 -### 评分 + +### 评分 {#scoring} -评分插件用于对通过过滤阶段的节点进行排名。调度器将为每个节点调用每个评分插件。将有一个定义明确的整数范围,代表最小和最大分数。在[标准化评分](#标准化评分)阶段之后,调度器将根据配置的插件权重合并所有插件的节点分数。 +评分插件用于对通过过滤阶段的节点进行排名。调度器将为每个节点调用每个评分插件。将有一个定义明确的整数范围,代表最小和最大分数。在[标准化评分](#normalize-scoring)阶段之后,调度器将根据配置的插件权重合并所有插件的节点分数。 -### 标准化评分 +### 标准化评分 {#normalize-scoring} -标准化评分插件用于在调度器计算节点的排名之前修改分数。在此扩展点注册的插件将使用同一插件的[评分](#评分)结果被调用。每个插件在每个调度周期调用一次。 +标准化评分插件用于在调度器计算节点的排名之前修改分数。在此扩展点注册的插件将使用同一插件的[评分](#scoring) 结果被调用。每个插件在每个调度周期调用一次。 -如果任何标准化评分插件返回错误,则调度阶段将终止。 +如果任何 NormalizeScore 插件返回错误,则调度阶段将终止。 - -**注意:**希望执行“预保留”工作的插件应该使用标准化评分扩展点。 +{{< note >}} +希望执行“预保留”工作的插件应该使用 NormalizeScore 扩展点。 +{{< /note >}} -保留是一个信息性的扩展点。管理运行时状态的插件(也成为“有状态插件”)应该使用此扩展点,以便调度器在节点给指定 pod 预留了资源时能够通知该插件。这是在调度器真正将 pod 绑定到节点之前发生的,并且它存在是为了防止在调度器等待绑定成功时发生竞争情况。 +保留是一个信息性的扩展点。管理运行时状态的插件(也成为“有状态插件”)应该使用此扩展点,以便调度器在节点给指定 Pod 预留了资源时能够通知该插件。这是在调度器真正将 Pod 绑定到节点之前发生的,并且它存在是为了防止在调度器等待绑定成功时发生竞争情况。 - -这个是调度周期的最后一步。一旦 pod 处于保留状态,它将在绑定周期结束时触发[不保留](#不保留)插件(失败时)或 -[绑定后](#绑定后)插件(成功时)。 - - -*注意:此概念曾被称为“假设”。* +这个是调度周期的最后一步。一旦 Pod 处于保留状态,它将在绑定周期结束时触发[不保留](#不保留) 插件(失败时)或 +[绑定后](#post-bind) 插件(成功时)。 -允许插件用于防止或延迟 pod 的绑定。一个允许插件可以做以下三件事之一。 +_Permit_ 插件在每个 Pod 调度周期的最后调用,用于防止或延迟 Pod 的绑定。一个允许插件可以做以下三件事之一: 1. **批准** \ - 一旦所有允许插件批准 pod 后,该 pod 将被发送以进行绑定。 + 一旦所有 Permit 插件批准 Pod 后,该 Pod 将被发送以进行绑定。 1. **拒绝** \ - 如果任何允许插件拒绝 pod,则该 pod 将被返回到调度队列。这将触发[不保留](#不保留)插件。 + 如果任何 Permit 插件拒绝 Pod,则该 Pod 将被返回到调度队列。这将触发[不保留](#不保留) 插件。 1. 
**等待**(带有超时) \ - 如果一个允许插件返回“等待”结果,则 pod 将保持在允许阶段,直到插件批准它。如果超时发生,**等待**变成**拒绝**,并且 pod 将返回调度队列,从而触发[不保留](#不保留)插件。 - - + 如果一个 Permit 插件返回 “等待” 结果,则 Pod 将保持在一个内部的 “等待中” 的 Pod 列表,同时该 Pod 的绑定周期启动时即直接阻塞直到得到[批准](#frameworkhandle)。如果超时发生,**等待** 变成 **拒绝**,并且 Pod 将返回调度队列,从而触发[不保留](#不保留) 插件。 -**批准 pod 绑定** - - -尽管任何插件可以从缓存中访问“等待”状态的 pod 列表并批准它们。我们希望只有允许插件可以批准处于“等待”状态的 预留 pod 的绑定。一旦 pod 被批准了,它将发送到预绑定阶段。 + +{{< note >}} +尽管任何插件可以访问 “等待中” 状态的 Pod 列表并批准它们 (查看 [`FrameworkHandle`](#frameworkhandle))。我们希望只有允许插件可以批准处于 “等待中” 状态的 预留 Pod 的绑定。一旦 Pod 被批准了,它将发送到[预绑定](#pre-bind) 阶段。 +{{< /note >}} -### 预绑定 +### 预绑定 {#pre-bind} -预绑定插件用于执行 pod 绑定前所需的任何工作。例如,一个预绑定插件可能需要提供网络卷并且在允许 pod 运行在该节点之前将其挂载到目标节点上。 +预绑定插件用于执行 Pod 绑定前所需的任何工作。例如,一个预绑定插件可能需要提供网络卷并且在允许 Pod 运行在该节点之前将其挂载到目标节点上。 -如果任何预绑定插件返回错误,则 pod 将被[拒绝](#不保留)并且返回到调度队列中。 +如果任何 PreBind 插件返回错误,则 Pod 将被[拒绝](#不保留) 并且返回到调度队列中。 -绑定插件用于将 pod 绑定到节点上。直到所有的预绑定插件都完成,绑定插件才会被调用。每个绑定插件按照配置顺序被调用。绑定插件可以选择是否处理指定的 pod。如果绑定插件选择处理 pod,**剩余的绑定插件将被跳过**。 +绑定插件用于将 Pod 绑定到节点上。直到所有的 PreBind 插件都完成,绑定插件才会被调用。每个绑定插件按照配置顺序被调用。绑定插件可以选择是否处理指定的 Pod。如果绑定插件选择处理 Pod,**剩余的绑定插件将被跳过**。 -### 绑定后 +### 绑定后 {#post-bind} -这是个信息性的扩展点。绑定后插件在 pod 成功绑定后被调用。这是绑定周期的结尾,可用于清理相关的资源。 +这是个信息性的扩展点。绑定后插件在 Pod 成功绑定后被调用。这是绑定周期的结尾,可用于清理相关的资源。 -这是个信息性的扩展点。如果 pod 被保留,然后在后面的阶段中被拒绝,则不保留插件将被通知。不保留插件应该清楚保留 pod 的相关状态。 +这是个信息性的扩展点。如果 Pod 被保留,然后在后面的阶段中被拒绝,则不保留插件将被通知。不保留插件应该清楚保留 Pod 的相关状态。 - -可以在调度器配置中启用插件。另外,默认的插件可以在配置中禁用。在 1.15 版本,调度框架没有默认的插件。 - - - -调度器配置也可以包含插件的配置。这些配置在调度器初始化插件时传给插件。配置是一个任意值。接收插件应该解码并处理配置信息。 - - - -下面的例子显示一个调度器配置,该配置在 `reserve` 和 `preBind` 扩展点启用了一些插件并且禁用了一个插件。它还提供了 `foo` 插件的配置。 - -```yaml -apiVersion: kubescheduler.config.k8s.io/v1alpha1 -kind: KubeSchedulerConfiguration - -... - -plugins: - reserve: - enabled: - - name: foo - - name: bar - disabled: - - name: baz - preBind: - enabled: - - name: foo - disabled: - - name: baz - -pluginConfig: -- name: foo - args: > - Arbitrary set of args to plugin foo -``` - - - -当配置省略扩展点时,将使用该扩展点的默认插件。当存在扩展掉并且配置为 `enabled`,则 `enabled` 的插件将和默认插件一同调用。首先调用默认插件,然后以配置中指定的顺序来调用其他已启用的插件。如果希望以不同的顺序来调用默认插件,默认插件必须 `disabled`,然后以期望的顺序 `enabled`。 - - - -假设在 `reserve` 扩展点有一个默认的 `foo` 插件,且添加 `bar` 插件并且希望在 `foo` 插件之前执行,我们应该禁用 `foo` 并且按顺序启用 `bar` 和 `foo`。下面的例子显示了实现此目的的配置: - -```yaml -apiVersion: kubescheduler.config.k8s.io/v1alpha1 -kind: KubeSchedulerConfiguration - -... 
- -plugins: - reserve: - enabled: - - name: bar - - name: foo - disabled: - - name: foo -``` + +你可以在调度器配置中启用或禁用插件。如果你在使用 Kubernetes v1.18 或更高版本,大部分调度[插件](/docs/reference/scheduling/profiles/#scheduling-plugins) 都在使用中且默认启用。 + + +除了默认的插件,你同样可以实现自己的调度插件并且将他们与默认插件一起配置。你可以访问 [调度插件](https://github.com/kubernetes-sigs/scheduler-plugins) 了解更多详情。 + + +如果你正在使用 Kubernetes v1.18 或更高版本,你可以将一组插件设置为一个调度器配置文件,然后定义不同的配置文件来满足各类工作负载。 +了解更多关于 [多配置文件](/docs/reference/scheduling/profiles/#multiple-profiles)。 {{% /capture %}} diff --git a/content/zh/docs/concepts/scheduling/kube-scheduler.md b/content/zh/docs/concepts/scheduling/kube-scheduler.md deleted file mode 100644 index 5464e1a71bf50..0000000000000 --- a/content/zh/docs/concepts/scheduling/kube-scheduler.md +++ /dev/null @@ -1,319 +0,0 @@ ---- -title: Kubernetes 调度器 -content_template: templates/concept -weight: 60 ---- - - -{{% capture overview %}} - - -在 Kubernetes 中,_调度_ 是指将 {{< glossary_tooltip text="Pod" term_id="pod" >}} 放置到合适的 -{{< glossary_tooltip text="Node" term_id="node" >}} 上,然后对应 Node 上的 {{< glossary_tooltip term_id="kubelet" >}} 才能够运行这些 pod。 - -{{% /capture %}} - -{{% capture body %}} - -## 调度概览 {#scheduling} - - -调度器通过 kubernetes 的 watch 机制来发现集群中新创建且尚未被调度到 Node 上的 Pod。调度器会将发现的每一个未调度的 Pod 调度到一个合适的 Node 上来运行。调度器会依据下文的调度原则来做出调度选择。 - - -如果你想要理解 Pod 为什么会被调度到特定的 Node 上,或者你想要尝试实现一个自定义的调度器,这篇文章将帮助你了解调度。 - - -## kube-scheduler - - -[kube-scheduler](/zh/docs/reference/command-line-tools-reference/kube-scheduler/) 是 Kubernetes 集群的默认调度器,并且是集群 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 的一部分。如果你真的希望或者有这方面的需求,kube-scheduler 在设计上是允许你自己写一个调度组件并替换原有的 kube-scheduler。 - - -对每一个新创建的 Pod 或者是未被调度的 Pod,kube-scheduler 会选择一个最优的 Node 去运行这个 Pod。然而,Pod 内的每一个容器对资源都有不同的需求,而且 Pod 本身也有不同的资源需求。因此,Pod 在被调度到 Node 上之前,根据这些特定的资源调度需求,需要对集群中的 Node 进行一次过滤。 - - -在一个集群中,满足一个 Pod 调度请求的所有 Node 称之为 _可调度节点_。如果没有任何一个 Node 能满足 Pod 的资源请求,那么这个 Pod 将一直停留在未调度状态直到调度器能够找到合适的 Node。 - - -调度器先在集群中找到一个 Pod 的所有可调度节点,然后根据一系列函数对这些可调度节点打分,然后选出其中得分最高的 Node 来运行 Pod。之后,调度器将这个调度决定通知给 kube-apiserver,这个过程叫做 _绑定_。 - - -在做调度决定时需要考虑的因素包括:单独和整体的资源请求、硬件/软件/策略限制、亲和以及反亲和要求、数据局域性、负载间的干扰等等。 - - -## kube-scheduler 调度流程 {#kube-scheduler-implementation} - - -kube-scheduler 给一个 pod 做调度选择包含两个步骤: - -1. 过滤 - -2. 
打分 - - -过滤阶段会将所有满足 Pod 调度需求的 Node 选出来。例如,PodFitsResources 过滤函数会检查候选 Node 的可用资源能否满足 Pod 的资源请求。在过滤之后,得出一个 Node 列表,里面包含了所有可调度节点;通常情况下,这个 Node 列表包含不止一个 Node。如果这个列表是空的,代表这个 Pod 不可调度。 - - -在打分阶段,调度器会为 Pod 从所有可调度节点中选取一个最合适的 Node。根据当前启用的打分规则,调度器会给每一个可调度节点进行打分。 - - -最后,kube-scheduler 会将 Pod 调度到得分最高的 Node 上。如果存在多个得分最高的 Node,kube-scheduler 会从中随机选取一个。 - - -### 默认策略 - - -kube-scheduler 有一系列的默认调度策略。 - - -### 过滤策略 - -- `PodFitsHostPorts`:如果 Pod 中定义了 hostPort 属性,那么需要先检查这个指定端口是否 - 已经被 Node 上其他服务占用了。 - -- `PodFitsHost`:若 pod 对象拥有 hostname 属性,则检查 Node 名称字符串与此属性是否匹配。 - -- `PodFitsResources`:检查 Node 上是否有足够的资源(如,cpu 和内存)来满足 pod 的资源请求。 - -- `PodMatchNodeSelector`:检查 Node 的 {{< glossary_tooltip text="标签" term_id="label" >}} 是否能匹配 - Pod 属性上 Node 的 {{< glossary_tooltip text="标签" term_id="label" >}} 值。 - -- `NoVolumeZoneConflict`:检测 pod 请求的 {{< glossary_tooltip text="Volumes" term_id="volume" >}} 在 - Node 上是否可用,因为某些存储卷存在区域调度约束。 - -- `NoDiskConflict`:检查 Pod 对象请求的存储卷在 Node 上是否可用,若不存在冲突则通过检查。 - -- `MaxCSIVolumeCount`:检查 Node 上已经挂载的 {{< glossary_tooltip text="CSI" term_id="csi" >}} - 存储卷数量是否超过了指定的最大值。 - -- `CheckNodeMemoryPressure`:如果 Node 上报了内存资源压力过大,而且没有配置异常,那么 Pod 将不会被调度到这个 Node 上。 - -- `CheckNodePIDPressure`:如果 Node 上报了 PID 资源压力过大,而且没有配置异常,那么 Pod 将不会被调度到这个 Node 上。 - -- `CheckNodeDiskPressure`:如果 Node 上报了磁盘资源压力过大(文件系统满了或者将近满了), - 而且配置没有异常,那么 Pod 将不会被调度到这个 Node 上。 - -- `CheckNodeCondition`:Node 可以上报其自身的状态,如磁盘、网络不可用,表明 kubelet 未准备好运行 pod。 - 如果 Node 被设置成这种状态,那么 pod 将不会被调度到这个 Node 上。 - -- `PodToleratesNodeTaints`:检查 pod 属性上的 {{< glossary_tooltip text="tolerations" term_id="toleration" >}} 能否容忍 - Node 的 {{< glossary_tooltip text="taints" term_id="taint" >}}。 - -- `CheckVolumeBinding`:检查 Node 上已经绑定的和未绑定的 {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}} - 能否满足 Pod 对象的存储卷需求。 - - -### 打分策略 - -- `SelectorSpreadPriority`:尽量将归属于同一个 {{< glossary_tooltip text="Service" term_id="service" >}}、{{< glossary_tooltip term_id="statefulset" >}} 或 {{< glossary_tooltip term_id="replica-set" >}} 的 Pod 资源分散到不同的 Node 上。 - -- `InterPodAffinityPriority`:遍历 Pod 对象的亲和性条目,并将那些能够匹配到给定 Node 的条目的权重相加,结果值越大的 Node 得分越高。 - -- `LeastRequestedPriority`:空闲资源比例越高的 Node 得分越高。换句话说,Node 上的 Pod 越多,并且资源被占用的越多,那么这个 Node 的得分就会越少。 - -- `MostRequestedPriority`:空闲资源比例越低的 Node 得分越高。这个调度策略将会把你所有的工作负载(Pod)调度到尽量少的 Node 上。 - -- `RequestedToCapacityRatioPriority`:为 Node 上每个资源占用比例设定得分值,给资源打分函数在打分时使用。 - -- `BalancedResourceAllocation`:优选那些使得资源利用率更为均衡的节点。 - -- `NodePreferAvoidPodsPriority`:这个策略将根据 Node 的注解信息中是否含有 `scheduler.alpha.kubernetes.io/preferAvoidPods` 来 - 计算其优先级。使用这个策略可以将两个不同 Pod 运行在不同的 Node 上。 - -- `NodeAffinityPriority`:基于 Pod 属性中 PreferredDuringSchedulingIgnoredDuringExecution 来进行 Node 亲和性调度。你可以通过这篇文章 - [Pods 到 Nodes 的分派](/zh/docs/concepts/configuration/assign-pod-node/) 来了解到更详细的内容。 - -- `TaintTolerationPriority`:基于 Pod 中对每个 Node 上污点容忍程度进行优先级评估,这个策略能够调整待选 Node 的排名。 - -- `ImageLocalityPriority`:Node 上已经拥有 Pod 需要的 {{< glossary_tooltip text="容器镜像" term_id="image" >}} 的 Node 会有较高的优先级。 - -- `ServiceSpreadingPriority`:这个调度策略的主要目的是确保将归属于同一个 Service 的 Pod 调度到不同的 Node 上。如果 Node 上 - 没有归属于同一个 Service 的 Pod,这个策略更倾向于将 Pod 调度到这类 Node 上。最终的目的:即使在一个 Node 宕机之后 Service 也具有很强容灾能力。 - -- `CalculateAntiAffinityPriorityMap`:这个策略主要是用来实现[pod反亲和] - (/zh/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)。 - -- `EqualPriorityMap`:将所有的 Node 设置成相同的权重为 1。 - -{{% /capture %}} -{{% capture whatsnext %}} -* 阅读关于 [调度器性能调优](/zh/docs/concepts/scheduling/scheduler-perf-tuning/) -* 阅读关于 [Pod 
拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/) -* 阅读关于 kube-scheduler 的 [参考文档](/zh/docs/reference/command-line-tools-reference/kube-scheduler/) -* 了解关于 [配置多个调度器](/zh/docs/tasks/administer-cluster/configure-multiple-schedulers/) 的方式 -* 了解关于 [拓扑结构管理策略](/zh/docs/tasks/administer-cluster/topology-manager/) -* 了解关于 [Pod 额外开销](/zh/docs/concepts/configuration/pod-overhead/) -{{% /capture %}} diff --git a/content/zh/docs/concepts/scheduling/scheduler-perf-tuning.md b/content/zh/docs/concepts/scheduling/scheduler-perf-tuning.md deleted file mode 100644 index ccdb76fcc4587..0000000000000 --- a/content/zh/docs/concepts/scheduling/scheduler-perf-tuning.md +++ /dev/null @@ -1,185 +0,0 @@ ---- -title: 调度器性能调优 -content_template: templates/concept -weight: 70 ---- - - -{{% capture overview %}} - -{{< feature-state for_k8s_version="1.14" state="beta" >}} - - -作为 kubernetes 集群的默认调度器,kube-scheduler 主要负责将 Pod 调度到集群的 Node 上。 - - -在一个集群中,满足一个 Pod 调度请求的所有 Node 称之为 _可调度_ Node。调度器先在集群中找到一个 Pod 的可调度 Node,然后根据一系列函数对这些可调度 Node打分,之后选出其中得分最高的 Node 来运行 Pod。最后,调度器将这个调度决定告知 kube-apiserver,这个过程叫做 _绑定_。 - - -这篇文章将会介绍一些在大规模 Kubernetes 集群下调度器性能优化的方式。 - -{{% /capture %}} - -{{% capture body %}} - - -## 设置打分阶段 Node 数量占集群总规模的百分比 - - -在 Kubernetes 1.12 版本之前,kube-scheduler 会检查集群中所有节点的可调度性,并且给可调度节点打分。Kubernetes 1.12 版本添加了一个新的功能,允许调度器在找到一定数量的可调度节点之后就停止继续寻找可调度节点。该功能能提高调度器在大规模集群下的调度性能。这个数值是集群规模的百分比。这个百分比通过 `percentageOfNodesToScore` 参数来进行配置。其值的范围在 1 到 100 之间,最大值就是 100%。如果设置为 0 就代表没有提供这个参数配置。Kubernetes 1.14 版本又加入了一个特性,在该参数没有被用户配置的情况下,调度器会根据集群的规模自动设置一个集群比例,然后通过这个比例筛选一定数量的可调度节点进入打分阶段。该特性使用线性公式计算出集群比例,如在 100-node 的集群下取 50%。在 5000-node 的集群下取 10%。这个自动设置的参数的最低值是 5%。换句话说,调度器至少会对集群中 5% 的节点进行打分,除非用户将该参数设置的低于 5。 - - -下面就是一个将 `percentageOfNodesToScore` 参数设置为 50% 的例子。 - -```yaml -apiVersion: kubescheduler.config.k8s.io/v1alpha1 -kind: KubeSchedulerConfiguration -algorithmSource: - provider: DefaultProvider - -... 
- -percentageOfNodesToScore: 50 -``` - - -{{< note >}} 当集群中的可调度节点少于 50 个时,调度器仍然会去检查所有的 Node,因为可调度节点太少,不足以停止调度器最初的过滤选择。{{< /note >}} - - -**如果想要关闭这个功能**,你可以将 `percentageOfNodesToScore` 值设置成 100。 - - -### 调节 percentageOfNodesToScore 参数 - - -`percentageOfNodesToScore` 的值必须在 1 到 100 之间,而且其默认值是通过集群的规模计算得来的。另外,还有一个 50 个 Node 的数值是硬编码在程序里面的。设置这个值的作用在于:当集群的规模是数百个 Node 并且 `percentageOfNodesToScore` 参数设置的过低的时候,调度器筛选到的可调度节点数目基本不会受到该参数影响。当集群规模较小时,这个设置将导致调度器性能提升并不明显。然而在一个超过 1000 个 Node 的集群中,将调优参数设置为一个较低的值可以很明显的提升调度器性能。 - - -值得注意的是,该参数设置后可能会导致只有集群中少数节点被选为可调度节点,很多 node 都没有进入到打分阶段。这样就会造成一种后果,一个本来可以在打分阶段得分很高的 Node 甚至都不能进入打分阶段。由于这个原因,这个参数不应该被设置成一个很低的值。通常的做法是不会将这个参数的值设置的低于 10。很低的参数值一般在调度器的吞吐量很高且对 node 的打分不重要的情况下才使用。换句话说,只有当你更倾向于在可调度节点中任意选择一个 Node 来运行这个 Pod 时,才使用很低的参数设置。 - - -如果你的集群规模只有数百个节点或者更少,我们并不推荐你将这个参数设置得比默认值更低。因为这种情况下不太可能有效的提高调度器性能。 - - -### 调度器做调度选择的时候如何覆盖所有的 Node - - -如果你想要理解这一个特性的内部细节,那么请仔细阅读这一章节。 - - -在将 Pod 调度到 Node 上时,为了让集群中所有 Node 都有公平的机会去运行这些 Pod,调度器将会以轮询的方式覆盖全部的 Node。你可以将 Node 列表想象成一个数组。调度器从数组的头部开始筛选可调度节点,依次向后直到可调度节点的数量达到 `percentageOfNodesToScore` 参数的要求。在对下一个 Pod 进行调度的时候,前一个 Pod 调度筛选停止的 Node 列表的位置,将会来作为这次调度筛选 Node 开始的位置。 - - -如果集群中的 Node 在多个区域,那么调度器将从不同的区域中轮询 Node,来确保不同区域的 Node 接受可调度性检查。如下例,考虑两个区域中的六个节点: - -``` -Zone 1: Node 1, Node 2, Node 3, Node 4 -Zone 2: Node 5, Node 6 -``` - - -调度器将会按照如下的顺序去评估 Node 的可调度性: - -``` -Node 1, Node 5, Node 2, Node 6, Node 3, Node 4 -``` - - -在评估完所有 Node 后,将会返回到 Node 1,从头开始。 - -{{% /capture %}} diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index b1d8becc60ce9..0ccc19d65bed5 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -26,7 +26,7 @@ with [volumes](/docs/concepts/storage/volumes/) and ## 介绍 -`StorageClass` 为管理员提供了描述存储 `"类"` 的方法。 -不同的`类型`可能会映射到不同的服务质量等级或备份策略,或是由集群管理员制定的任意策略。 -Kubernetes 本身并不清楚各种`类`代表的什么。这个`类`的概念在其他存储系统中有时被称为"配置文件"。 +StorageClass 为管理员提供了描述存储 "类" 的方法。 +不同的类型可能会映射到不同的服务质量等级或备份策略,或是由集群管理员制定的任意策略。 +Kubernetes 本身并不清楚各种类代表的什么。这个类的概念在其他存储系统中有时被称为 "配置文件"。 ## StorageClass 资源 -每个 `StorageClass` 都包含 `provisioner`、`parameters` 和 `reclaimPolicy` 字段, -这些字段会在`StorageClass`需要动态分配 `PersistentVolume` 时会使用到。 +每个 StorageClass 都包含 `provisioner`、`parameters` 和 `reclaimPolicy` 字段, +这些字段会在 StorageClass 需要动态分配 `PersistentVolume` 时会使用到。 -`StorageClass` 对象的命名很重要,用户使用这个命名来请求生成一个特定的类。 -当创建 `StorageClass` 对象时,管理员设置 StorageClass 对象的命名和其他参数,一旦创建了对象就不能再对其更新。 +StorageClass 对象的命名很重要,用户使用这个命名来请求生成一个特定的类。 +当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数,一旦创建了对象就不能再对其更新。 -管理员可以为没有申请绑定到特定 `StorageClass` 的 PVC 指定一个默认的存储`类` : -更多详情请参阅 [`PersistentVolumeClaim` 章节](/docs/concepts/storage/persistent-volumes/#class-1)。 +管理员可以为没有申请绑定到特定 StorageClass 的 PVC 指定一个默认的存储类 : +更多详情请参阅 [PersistentVolumeClaim 章节](/docs/concepts/storage/persistent-volumes/#class-1)。 ```yaml apiVersion: storage.k8s.io/v1 @@ -87,38 +87,38 @@ volumeBindingMode: Immediate ### 存储分配器 -`StorageClass` 有一个分配器,用来决定使用哪个`卷插件`分配`PV`。该字段必须指定。 +每个 StorageClass 都有一个分配器,用来决定使用哪个卷插件分配 PV。该字段必须指定。 -| 卷插件 | 内置分配器 | 配置例子 | -| :--- | :---: | :---: | -| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) | -| AzureFile | ✓ | [Azure File](#azure-file) | -| AzureDisk | ✓ | [Azure Disk](#azure-disk) | -| CephFS | - | - | -| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder)| -| FC | - | - | -| FlexVolume | - | - | -| Flocker | ✓ | - | -| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) | -| Glusterfs | ✓ | [Glusterfs](#glusterfs) | -| iSCSI | - | - | -| Quobyte | ✓ | [Quobyte](#quobyte) | -| NFS | - | - 
| -| RBD | ✓ | [Ceph RBD](#ceph-rbd) | -| VsphereVolume | ✓ | [vSphere](#vsphere) | -| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) | -| ScaleIO | ✓ | [ScaleIO](#scaleio) | -| StorageOS | ✓ | [StorageOS](#storageos) | -| Local | - | [Local](#local) | +| 卷插件 | 内置分配器 | 配置例子 | +|:---------------------|:----------:|:-------------------------------------:| +| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) | +| AzureFile | ✓ | [Azure File](#azure-file) | +| AzureDisk | ✓ | [Azure Disk](#azure-disk) | +| CephFS | - | - | +| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder) | +| FC | - | - | +| FlexVolume | - | - | +| Flocker | ✓ | - | +| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) | +| Glusterfs | ✓ | [Glusterfs](#glusterfs) | +| iSCSI | - | - | +| Quobyte | ✓ | [Quobyte](#quobyte) | +| NFS | - | - | +| RBD | ✓ | [Ceph RBD](#ceph-rbd) | +| VsphereVolume | ✓ | [vSphere](#vsphere) | +| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) | +| ScaleIO | ✓ | [ScaleIO](#scaleio) | +| StorageOS | ✓ | [StorageOS](#storageos) | +| Local | - | [Local](#local) | -您不限于指定此处列出的"内置"分配器(其名称前缀为 kubernetes.io 并打包在 Kubernetes 中)。 +您不限于指定此处列出的 "内置" 分配器(其名称前缀为 "kubernetes.io" 并打包在 Kubernetes 中)。 您还可以运行和指定外部分配器,这些独立的程序遵循由 Kubernetes 定义的 [规范](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md)。 外部供应商的作者完全可以自由决定他们的代码保存于何处、打包方式、运行方式、使用的插件(包括 Flex)等。 代码仓库 [kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner) @@ -152,20 +152,20 @@ vendors provide their own external provisioner. ### 回收策略 -由 `StorageClass` 动态创建的持久化卷会在的 `reclaimPolicy` 字段中指定回收策略,可以是 -`Delete` 或者 `Retain`。如果 `StorageClass` 对象被创建时没有指定 `reclaimPolicy` ,它将默认为 `Delete`。 +由 StorageClass 动态创建的 PersistentVolume 会在类的 `reclaimPolicy` 字段中指定回收策略,可以是 +`Delete` 或者 `Retain`。如果 StorageClass 对象被创建时没有指定 `reclaimPolicy`,它将默认为 `Delete`。 -通过 `StorageClass` 手动创建并管理的 Persistent Volume 会使用它们被创建时指定的回收政策。 +通过 StorageClass 手动创建并管理的 PersistentVolume 会使用它们被创建时指定的回收政策。 -永久卷可以配置为可扩展。将此功能设置为 `true` 时,允许用户通过编辑相应的PVC对象来调整卷大小。 +PersistentVolume 可以配置为可扩展。将此功能设置为 `true` 时,允许用户通过编辑相应的 PVC 对象来调整卷大小。 -当基础存储类的 `allowVolumeExpansion` 字段设置为true时,以下类型的卷支持卷扩展。 +当基础存储类的 `allowVolumeExpansion` 字段设置为 true 时,以下类型的卷支持卷扩展。 -* gcePersistentDisk -* awsElasticBlockStore -* Cinder -* glusterfs -* rbd -* Azure File -* Azure Disk -* Portworx -* FlexVolumes -* CSI {{< feature-state for_k8s_version="v1.14" state="alpha" >}} +{{< table caption = "Table of Volume types and the version of Kubernetes they require" >}} -{{< note >}} + +| 卷类型 | Kubernetes 版本要求 | +|:---------------------|:--------------------------| +| gcePersistentDisk | 1.11 | +| awsElasticBlockStore | 1.11 | +| Cinder | 1.11 | +| glusterfs | 1.11 | +| rbd | 1.11 | +| Azure File | 1.11 | +| Azure Disk | 1.11 | +| Portworx | 1.11 | +| FlexVolume | 1.13 | +| CSI | 1.14 (alpha), 1.16 (beta) | + +{{< /table >}} +{{< note >}} -此功能不能用于缩小卷。 - +此功能仅可用于扩容卷,不能用于缩小卷。 {{< /note >}} ### 挂载选项 -由 `StorageClass` 动态创建的 Persistent Volume 将使用`类`中 `mountOption` 字段指定的挂载选项。 +由 StorageClass 动态创建的 PersistentVolume 将使用类中 `mountOptions` 字段指定的挂载选项。 如果卷插件不支持挂载选项,却指定了该选项,则分配操作会失败。 -挂载选项在 `StorageClass` 和持久卷上都不会做验证,所以如果挂载选项无效,那么这个 PV 就会失败。 +挂载选项在 StorageClass 和 PV 上都不会做验证,所以如果挂载选项无效,那么这个 PV 就会失败。 ### 卷绑定模式 -{{< feature-state for_k8s_version="v1.12" state="beta" >}} - - -**注意:** 这个功能特性需要启用 `VolumeScheduling` 参数才能使用。 - 以下插件支持预创建绑定 PersistentVolume 的 `WaitForFirstConsumer` 模式: -* All of the above +* 上述全部 * [Local](#local) -{{< feature-state state="beta" 
for_k8s_version="1.14" >}} +{{< feature-state state="beta" for_k8s_version="1.17" >}} -动态配置和预先创建的PVs也支持 [CSI卷](/docs/concepts/storage/volumes/#csi), -但是您需要查看特定CSI驱动程序的文档以查看其支持的拓扑密钥和例子。 必须启用 `CSINodeInfo` 特性。 +动态配置和预先创建的 PV 也支持 [CSI卷](/docs/concepts/storage/volumes/#csi), +但是您需要查看特定 CSI 驱动程序的文档以查看其支持的拓扑键名和例子。 -**注意:** 这个特性需要开启 `VolumeScheduling` 特性开关。 - -这个例子描述了如何将分配卷限的拓扑限制在特定的区域,在使用时应该根据插件支持情况替换 `zone` 和 `zones` 参数。 +这个例子描述了如何将分配卷的拓扑限制在特定的区域,在使用时应该根据插件支持情况替换 `zone` 和 `zones` 参数。 ```yaml apiVersion: storage.k8s.io/v1 @@ -358,12 +352,18 @@ class. Different parameters may be accepted depending on the `provisioner`. For example, the value `io1`, for the parameter `type`, and the parameter `iopsPerGB` are specific to EBS. When a parameter is omitted, some default is used. + +There can be at most 512 parameters defined for a StorageClass. +The total length of the parameters object including its keys and values cannot +exceed 256 KiB. --> ## 参数 Storage class 具有描述属于卷的参数。取决于分配器,可以接受不同的参数。 例如,参数 type 的值 io1 和参数 iopsPerGB 特定于 EBS PV。当参数被省略时,会使用默认值。 +一个 StorageClass 最多可以定义 512 个参数。这些参数对象的总长度不能超过 256 KiB, 包括参数的键和值。 + ### AWS EBS ```yaml @@ -441,11 +441,13 @@ parameters: is specified, volumes are generally round-robin-ed across all active zones where Kubernetes cluster has a node. `zone` and `zones` parameters must not be used at the same time. +* `fstype`: `ext4` or `xfs`. Default: `ext4`. The defined filesystem type must be supported by the host operating system. * `replication-type`: `none` or `regional-pd`. Default: `none`. --> * `type`:`pd-standard` 或者 `pd-ssd`。默认:`pd-standard` * `zone`(弃用):GCE 区域。如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone` 和 `zones` 参数不能同时使用。 * `zones`(弃用):逗号分隔的 GCE 区域列表。如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度(round-robin)分配。`zone` 和 `zones` 参数不能同时使用。 +* `fstype`: `ext4` 或 `xfs`。 默认: `ext4`。宿主机操作系统必须支持所定义的文件系统类型。 * `replication-type`:`none` 或者 `regional-pd`。默认值:`none`。 -#### Azure Unmanaged Disk Storage Class(非托管磁盘存储类) +#### Azure Unmanaged Disk Storage Class(非托管磁盘存储类){#azure-unmanaged-disk-storage-class} ```yaml kind: StorageClass @@ -921,9 +923,9 @@ parameters: * `storageAccount`:Azure 存储帐户名称。如果提供存储帐户,它必须位于与集群相同的资源组中,并且 `location` 是被忽略的。如果未提供存储帐户,则会在与群集相同的资源组中创建新的存储帐户。 -#### 新的 Azure 磁盘 Storage Class(从 v1.7.2 开始) +#### Azure 磁盘 Storage Class(从 v1.7.2 开始){#azure-disk-storage-class} ```yaml kind: StorageClass @@ -945,12 +947,15 @@ parameters: unmanaged disk in the same resource group as the cluster. When `kind` is `managed`, all managed disks are created in the same resource group as the cluster. +* `resourceGroup`: Specify the resource group in which the Azure disk will be created. + It must be an existing resource group name. If it is unspecified, the disk will be + placed in the same resource group as the current Kubernetes cluster. 
--> * `storageaccounttype`:Azure 存储帐户 Sku 层。默认为空。 * `kind`:可能的值是 `shared`(默认)、`dedicated` 和 `managed`。 当 `kind` 的值是 `shared` 时,所有非托管磁盘都在集群的同一个资源组中的几个共享存储帐户中创建。 当 `kind` 的值是 `dedicated` 时,将为在集群的同一个资源组中新的非托管磁盘创建新的专用存储帐户。 - +* `resourceGroup`: 指定要创建 Azure 磁盘所属的资源组。必须是已存在的资源组名称。若未指定资源组,磁盘会默认放入与当前 Kubernetes 集群相同的资源组中。 -例如,当此值设置为 30 时,启动滚动更新后,会立即展开新的 ReplicaSet ,以便新旧 Pod 的总数不超过所需的 130%。一旦旧 Pods 被杀死,新的 ReplicaSet 可以进一步扩展,确保更新期间任何时间运行的 Pods 总数最多为所需 Pods 总数的130%。 +例如,当此值设置为 30% 时,启动滚动更新后,会立即展开新的 ReplicaSet ,以便新旧 Pod 的总数不超过所需的 130%。一旦旧 Pods 被杀死,新的 ReplicaSet 可以进一步扩展,确保更新期间任何时间运行的 Pods 总数最多为所需 Pods 总数的130%。 -在*显式级联删除*模式下,根对象首先进入 `deletion in progress` 状态。在 `deletion in progress` 状态会有如下的情况: +在 *显式级联删除* 模式下,根对象首先进入 `deletion in progress` 状态。在 `deletion in progress` 状态会有如下的情况: * 对象仍然可以通过 REST API 可见。 * 会设置对象的 `deletionTimestamp` 字段。 diff --git a/content/zh/docs/concepts/workloads/pods/init-containers.md b/content/zh/docs/concepts/workloads/pods/init-containers.md index b9cc7cd9d9400..5cc386b2dcd76 100644 --- a/content/zh/docs/concepts/workloads/pods/init-containers.md +++ b/content/zh/docs/concepts/workloads/pods/init-containers.md @@ -135,7 +135,7 @@ Here are some ideas for how to use init containers: * 在启动应用容器之前等一段时间,使用类似命令: - `sleep 60` + sleep 60 * 克隆 Git 仓库到 {{< glossary_tooltip text="Volume" term_id="volume" >}}。 * 将配置值放到配置文件中,运行模板工具为主应用容器动态地生成配置文件。例如,在配置文件中存放 POD_IP 值,并使用 Jinja 生成主应用配置文件。 @@ -404,7 +404,7 @@ Pod level control groups (cgroups) are based on the effective Pod request and li 给定Init 容器的执行顺序下,资源使用适用于如下规则: * 所有 Init 容器上定义的任何特定资源的 limit 或 request 的最大值,作为 Pod *有效初始 request/limit* -* Pod 对资源的 *有效 limit/request * 是如下两者的较大者: +* Pod 对资源的 *有效 limit/request* 是如下两者的较大者: * 所有应用容器对某个资源的 limit/request 之和 * 对某个资源的有效初始 limit/request * 基于有效 limit/request 完成调度,这意味着 Init 容器能够为初始化过程预留资源,这些资源在 Pod 生命周期过程中并没有被使用。 diff --git a/content/zh/docs/concepts/workloads/pods/pod-lifecycle.md b/content/zh/docs/concepts/workloads/pods/pod-lifecycle.md index 614d62cf0ebbb..679fc29cd5a14 100644 --- a/content/zh/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/zh/docs/concepts/workloads/pods/pod-lifecycle.md @@ -48,10 +48,11 @@ Pod 有一个 PodStatus 对象,其中包含一个 [PodCondition](/docs/resourc - 失败:容器未通过诊断。 - 未知:诊断失败,因此不会采取任何行动。 -Kubelet 可以选择是否执行在容器上运行的两种探针执行和做出反应: +Kubelet 可以选择是否执行在容器上运行的三种探针执行和做出反应: - `livenessProbe`:指示容器是否正在运行。如果存活探测失败,则 kubelet 会杀死容器,并且容器将受到其 [重启策略](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) 的影响。如果容器不提供存活探针,则默认状态为 `Success`。 - `readinessProbe`:指示容器是否准备好服务请求。如果就绪探测失败,端点控制器将从与 Pod 匹配的所有 Service 的端点中删除该 Pod 的 IP 地址。初始延迟之前的就绪状态默认为 `Failure`。如果容器不提供就绪探针,则默认状态为 `Success`。 +- `startupProbe`: 指示容器中的应用是否已经启动。如果提供了启动探测(startup probe),则禁用所有其他探测,直到它成功为止。如果启动探测失败,kubelet 将杀死容器,容器服从其重启策略进行重启。如果容器没有提供启动探测,则默认状态为成功`Success`。 ### 该什么时候使用存活(liveness)和就绪(readiness)探针? 
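
上面新增的 `startupProbe` 可以结合存活探针和就绪探针一起理解。下面是一个最小的示意性配置(仅作说明,并非本文档的正式示例;其中的镜像 `registry.example.com/app:1.0`、端口 8080 以及各探针的阈值均为假设值):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                        # 假设的 Pod 名称,仅用于演示
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # 假设的镜像
    ports:
    - containerPort: 8080
    # 启动探针:在其成功之前,其余探针都不会执行;失败时容器按重启策略重启
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    # 存活探针:失败时 kubelet 会杀死容器,并按重启策略处理
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    # 就绪探针:失败时,该 Pod 会从匹配它的 Service 端点中移除
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```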
diff --git a/content/zh/docs/home/_index.md b/content/zh/docs/home/_index.md index d50c07e0911ec..cf6cdc00327a6 100644 --- a/content/zh/docs/home/_index.md +++ b/content/zh/docs/home/_index.md @@ -3,7 +3,7 @@ title: Kubernetes 文档 noedit: true cid: docsHome layout: docsportal_home -class: gridPage +class: gridPage gridPageHome linkTitle: "主页" main_menu: true weight: 10 diff --git a/content/zh/docs/reference/command-line-tools-reference/kubelet.md b/content/zh/docs/reference/command-line-tools-reference/kubelet.md index e519ad9f923b5..c9bf7c21a7ab0 100644 --- a/content/zh/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/zh/docs/reference/command-line-tools-reference/kubelet.md @@ -658,9 +658,9 @@ kubelet [flags] - 启用 cAdvisor JSON 数据的 /spec 和 /stats/* 端点。(默认值为 true) + 启用 cAdvisor JSON 数据的 /spec 和 /stats/* 端点。(默认值为 false)(已弃用:未来版本将会移除该参数) diff --git a/content/zh/docs/reference/glossary/cluster.md b/content/zh/docs/reference/glossary/cluster.md index 3b2f95566a518..1046f9a223600 100644 --- a/content/zh/docs/reference/glossary/cluster.md +++ b/content/zh/docs/reference/glossary/cluster.md @@ -12,7 +12,8 @@ tags: - operation --- - +--- +--> + 集群由一组被称作节点的机器组成。这些节点上运行 Kubernetes 所管理的容器化应用。集群具有至少一个工作节点和至少一个主节点。 -工作节点托管作为应用程序组件的 Pod 。主节点管理集群中的工作节点和 Pod 。多个主节点用于为集群提供故障转移和高可用性。 \ No newline at end of file +工作节点托管作为应用程序组件的 Pod 。主节点管理集群中的工作节点和 Pod 。多个主节点用于为集群提供故障转移和高可用性。 diff --git a/content/zh/docs/reference/glossary/kube-proxy.md b/content/zh/docs/reference/glossary/kube-proxy.md index 84b95e228fa4c..5b323f0d28f12 100755 --- a/content/zh/docs/reference/glossary/kube-proxy.md +++ b/content/zh/docs/reference/glossary/kube-proxy.md @@ -38,4 +38,4 @@ kube-proxy 维护节点上的网络规则。这些网络规则允许从集群内 -如果有 kube-proxy 可用,它将使用操作系统数据包过滤层。否则,kube-proxy 会转发流量本身。 +如果操作系统提供了数据包过滤层并可用的话,kube-proxy会通过它来实现网络规则。否则,kube-proxy 仅转发流量本身。 diff --git a/content/zh/docs/reference/kubectl/overview.md b/content/zh/docs/reference/kubectl/overview.md index 2aaf1308e446f..f0ea83a7315d1 100644 --- a/content/zh/docs/reference/kubectl/overview.md +++ b/content/zh/docs/reference/kubectl/overview.md @@ -166,7 +166,7 @@ Operation | Syntax | Description `proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Run a proxy to the Kubernetes API server. `replace` | `kubectl replace -f FILENAME` | Replace a resource from a file or stdin. `rolling-update` | `kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE \| -f NEW_CONTROLLER_SPEC) [flags]` | Perform a rolling update by gradually replacing the specified replication controller and its pods. -`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]` | Run a specified image on the cluster. +`run` | kubectl run NAME --image=image [--env="key=value"] [--port=port] [--dry-run=server | client | none] [--overrides=inline-json] [flags] | Run a specified image on the cluster. `scale` | `kubectl scale (-f FILENAME \| TYPE NAME \| TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags]` | Update the size of the specified replication controller. `stop` | `kubectl stop` | Deprecated: Instead, see `kubectl delete`. `version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server. 
@@ -197,7 +197,7 @@ Operation | Syntax | Description `proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | 运行 Kubernetes API 服务器的代理。 `replace` | `kubectl replace -f FILENAME` | 从文件或标准输入中替换资源。 `rolling-update` | kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags] | 通过逐步替换指定的副本控制器及其 pod 来执行滚动更新。 -`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]` | 在集群上运行指定的镜像。 +`run` | kubectl run NAME --image=image [--env="key=value"] [--port=port] [--dry-run=server | client | none] [--overrides=inline-json] [flags] | 在集群上运行指定的镜像。 `scale` | kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags] | 更新指定副本控制器的大小。 `stop` | `kubectl stop` | 不推荐:相反,请参阅 kubectl delete。 `version` | `kubectl version [--client] [flags]` | 显示运行在客户端和服务器上的 Kubernetes 版本。 diff --git a/content/zh/docs/reference/kubernetes-api/labels-annotations-taints.md b/content/zh/docs/reference/kubernetes-api/labels-annotations-taints.md index 6e586da9c5f7e..4ed29847fdf5c 100644 --- a/content/zh/docs/reference/kubernetes-api/labels-annotations-taints.md +++ b/content/zh/docs/reference/kubernetes-api/labels-annotations-taints.md @@ -263,7 +263,7 @@ The expectation is that failures of nodes in different zones should be uncorrela --> 区域和地域(region)的实际值无关紧要,两者的层次含义也没有严格的定义。最终期望是,除非整个地域故障, 否则某一区域节点的故障不应该影响到其他区域的节点。例如,通常区域间应该避免共用同一个网络交换机。 -具体的规划取决于特定的基础设备 - three-rack 设备所选择的设置与多数据中心截然不同。 +具体的规划取决于特定的基础设备 - 三机架安装所选择的设置与多数据中心截然不同。 -## 启用组中的资源 +## 启用 extensions/v1beta1 组中具体资源 + +在 `extensions/v1beta1` API 组中,DaemonSets,Deployments,StatefulSet, NetworkPolicies, PodSecurityPolicies 和 ReplicaSets 是默认禁用的。 +例如:要启用 deployments 和 daemonsets,请设置 `--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`。 -默认情况下 DaemonSet、Deployment、Horizo​​ntalPodAutoscalers、Ingress、Jobs 和 ReplicaSets 是被启用的。 -您可以通过在 apiserver 上设置`--runtime-config`来启用其他扩展资源。`--runtime-config`接受逗号分隔的值。 -例如,要禁用 Deployments 和 Jobs,请设置 `--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/jobs=false` +{{< note >}} + + +出于遗留原因,仅在 `extensions / v1beta1` API 组中支持各个资源的启用/禁用。 + +{{< /note >}} {{% /capture %}} diff --git a/content/zh/docs/setup/_index.md b/content/zh/docs/setup/_index.md index 8e6c97261e73c..fba9ba5d2869c 100644 --- a/content/zh/docs/setup/_index.md +++ b/content/zh/docs/setup/_index.md @@ -75,20 +75,18 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b {{< table caption="本地机器解决方案表,其中列出了社区和生态系统支持的用于部署 Kubernetes 的工具。" >}} |社区 |生态系统 | | ------------ | -------- | -| [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) | -| [kind (Kubernetes IN Docker)](https://github.com/kubernetes-sigs/kind) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| -| | [Minishift](https://docs.okd.io/latest/minishift/)| +| [Minikube](/docs/setup/learning-environment/minikube/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| +| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Minishift](https://docs.okd.io/latest/minishift/)| | | [MicroK8s](https://microk8s.io/)| -| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | -| | [IBM Cloud Private-CE (Community 
Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)| -| | [k3s](https://k3s.io)| -| | [Ubuntu on LXD](/docs/getting-started-guides/ubuntu/)| -Kubernetes 集群的一些抽象概念可能是 {{< glossary_tooltip text="应用领域" term_id="applications" >}}、{{< glossary_tooltip text="数据平面" term_id="data-plane" >}}、{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}、{{< glossary_tooltip text="集群基础架构" term_id="cluster-infrastructure" >}} 和 {{< glossary_tooltip text="集群操作层面" term_id="cluster-operations" >}}。 - - -下图列出了 Kubernetes 集群的抽象概念以及抽象概念是由自我管理还是由提供商管理。 - - -生产环境解决方案![生产环境解决方案](/images/docs/KubernetesSolutions.svg) - - -{{< table caption="生产环境解决方案表列出了提供商和解决方案。" >}} -以下生产环境解决方案表列出了提供商及其提供的解决方案。 - - - -|提供商 | 管理者 | Turnkey 云 | 本地数据中心 | 自定义(云) | 自定义(本地VM)| 定制(裸机) | -| --------- | ------ | ------ | ------ | ------ | ------ | ----- | -| [Agile Stacks](https://www.agilestacks.com/products/kubernetes)| | ✔ | ✔ | | | -| [Alibaba Cloud](https://www.alibabacloud.com/product/kubernetes)| | ✔ | | | | -| [Amazon](https://aws.amazon.com) | [Amazon EKS](https://aws.amazon.com/eks/) |[Amazon EC2](https://aws.amazon.com/ec2/) | | | | -| [AppsCode](https://appscode.com/products/pharmer/) | ✔ | | | | | -| [APPUiO](https://appuio.ch/)  | ✔ | ✔ | ✔ | | | | -| [Banzai Cloud Pipeline Kubernetes Engine (PKE)](https://banzaicloud.com/products/pke/) | | ✔ | | ✔ | ✔ | ✔ | -| [CenturyLink Cloud](https://www.ctl.io/) | | ✔ | | | | -| [Cisco Container Platform](https://cisco.com/go/containers) | | | ✔ | | | -| [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) | | | | ✔ |✔ | -| [CloudStack](https://cloudstack.apache.org/) | | | | | ✔| -| [Canonical](https://ubuntu.com/kubernetes) | ✔ | ✔ | ✔ | ✔ |✔ | ✔ -| [Containership](https://containership.io/containership-platform) | ✔ |✔ | | | | -| [D2iQ](https://d2iq.com/) | | [Kommander](https://d2iq.com/solutions/ksphere) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | -| [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | ✔ -| [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | ✔ | | | | | -| [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |✔ | ✔ | | | ✔ -| [Fedora (Multi Node)](https://kubernetes.io/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)  | | | | | ✔ | ✔ -| [Fedora (Single Node)](https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/)  | | | | | | ✔ -| [Gardener](https://gardener.cloud/) | ✔ | ✔ | ✔ (via OpenStack) | ✔ | | -| [Giant Swarm](https://giantswarm.io/) | ✔ | ✔ | ✔ | | -| [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/)|[GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | | | | | | -| [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | | -| [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | | -| [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | | -| [KubeOne](https://github.com/kubermatic/kubeone) | | ✔ | ✔ | ✔ | ✔ | ✔ | 
-| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | | -| [KubeSail](https://kubesail.com/) | ✔ | | | | | -| [Kubespray](https://kubespray.io/#/) | | | |✔ | ✔ | ✔ | -| [Kublr](https://kublr.com/) |✔ | ✔ |✔ |✔ |✔ |✔ | -| [Microsoft Azure](https://azure.microsoft.com) | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) | | | | | -| [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) | | | ✔ | | | -| [Nirmata](https://www.nirmata.com/) | | ✔ | ✔ | | | -| [Nutanix](https://www.nutanix.com/en) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | | | [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization) | -| [OpenShift](https://www.openshift.com) |[OpenShift Dedicated](https://www.openshift.com/products/dedicated/) and [OpenShift Online](https://www.openshift.com/products/online/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) |[OpenShift Container Platform](https://www.openshift.com/products/container-platform/) -| [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | ✔ | ✔ | | | | -| [oVirt](https://www.ovirt.org/) | | | | | ✔ | -| [Pivotal](https://pivotal.io/) | | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | | | -| [Platform9](https://platform9.com/) | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | ✔ | ✔ | ✔ -| [Rancher](https://rancher.com/) | | [Rancher 2.x](https://rancher.com/docs/rancher/v2.x/en/) | | [Rancher Kubernetes Engine (RKE)](https://rancher.com/docs/rke/latest/en/) | | [k3s](https://k3s.io/) -| [StackPoint](https://stackpoint.io/)  | ✔ | ✔ | | | | -| [Supergiant](https://supergiant.io/) | |✔ | | | | -| [SUSE](https://www.suse.com/) | | ✔ | | | | -| [SysEleven](https://www.syseleven.io/) | ✔ | | | | | -| [Tencent Cloud](https://intl.cloud.tencent.com/) | [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/product/tke) | ✔ | ✔ | | | ✔ | -| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | | -| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) -| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | | +[Kubernetes 合作伙伴](https://kubernetes.io/partners/#conformance) 包括一个 [已认证的 Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) 提供商列表。 {{% /capture %}} diff --git a/content/zh/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/zh/docs/tasks/access-application-cluster/web-ui-dashboard.md index 31e7e20ed9b19..d9b2638c01336 100644 --- a/content/zh/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/zh/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -56,7 +56,7 @@ The Dashboard UI is not deployed by default. 
To deploy it, run the following com 默认情况下不会部署 Dashboard。可以通过以下命令部署: ``` -kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml ``` -* [Kubernetes 基础知识](/docs/tutorials/Kubernetes-Basics/)是一个深入的交互式教程,帮助您理解 Kubernetes 系统,并尝试一些基本的 Kubernetes 特性。 +* [Kubernetes 基础知识](/zh/docs/tutorials/Kubernetes-Basics/)是一个深入的交互式教程,帮助您理解 Kubernetes 系统,并尝试一些基本的 Kubernetes 特性。 -* [你好 Minikube](/docs/tutorials/hello-minikube/) +* [你好 Minikube](/zh/docs/tutorials/hello-minikube/) ## 配置 @@ -69,7 +69,7 @@ Before walking through each tutorial, you may want to bookmark the ## Configuration --> -* [使用一个 ConfigMap 配置 Redis](/docs/tutorials/configuration/configure-redis-using-configmap/) +* [使用一个 ConfigMap 配置 Redis](/zh/docs/tutorials/configuration/configure-redis-using-configmap/) -* [公开外部 IP 地址访问集群中的应用程序](/docs/tutorials/stateless-application/expose-external-ip-address/) +* [公开外部 IP 地址访问集群中的应用程序](/zh/docs/tutorials/stateless-application/expose-external-ip-address/) -* [示例:使用 Redis 部署 PHP 留言板应用程序](/docs/tutorials/stateless-application/guestbook/) +* [示例:使用 Redis 部署 PHP 留言板应用程序](/zh/docs/tutorials/stateless-application/guestbook/) -* [StatefulSet 基础](/docs/tutorials/stateful-application/basic-stateful-set/) +* [StatefulSet 基础](/zh/docs/tutorials/stateful-application/basic-stateful-set/) -* [示例:WordPress 和 MySQL 使用持久卷](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) +* [示例:WordPress 和 MySQL 使用持久卷](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) -* [示例:使用有状态集部署 Cassandra](/docs/tutorials/stateful-application/cassandra/) +* [示例:使用有状态集部署 Cassandra](/zh/docs/tutorials/stateful-application/cassandra/) -* [运行 ZooKeeper,CP 分布式系统](/docs/tutorials/stateful-application/zookeeper/) +* [运行 ZooKeeper,CP 分布式系统](/zh/docs/tutorials/stateful-application/zookeeper/) -2. AppArmor 内核模块已启用 -- 要使 Linux 内核强制执行 AppArmor 配置文件,必须安装并且启动 AppArmor 内核模块。默认情况下,有几个发行版支持该模块,如 Ubuntu 和 SUSE,还有许多发行版提供可选支持。要检查模块是否已启用,请检查 +2. AppArmor 内核模块已启用 -- 要使 Linux 内核强制执行 AppArmor 配置文件,必须安装并且启动 AppArmor 内核模块。默认情况下,有几个发行版支持该模块,如 Ubuntu 和 SUSE,还有许多发行版提供可选支持。要检查模块是否已启用,请检查 `/sys/module/apparmor/parameters/enabled` 文件: ```shell cat /sys/module/apparmor/parameters/enabled @@ -214,7 +214,7 @@ specifies the profile to apply. The `profile_ref` can be one of: --> -Kubernetes AppArmor 强制首先通过检查所有先决条件都已满足,然后将配置文件选择转发到容器运行时进行强制。如果未满足先决条件, Pod 将被拒绝,并且不会运行。 +Kubernetes AppArmor 强制执行方式首先通过检查所有先决条件都已满足,然后将配置文件选择转发到容器运行时进行强制执行。如果未满足先决条件, Pod 将被拒绝,并且不会运行。 要验证是否应用了配置文件,可以查找容器创建事件中列出的 AppArmor 安全选项: @@ -423,7 +423,7 @@ Kubernetes 目前不提供任何本地机制来将 AppArmor 配置文件加载 image. * By copying the profiles to each node and loading them through SSH, as demonstrated in the [Example](#example). --> -* 通过在每个节点上运行 Pod 的[DaemonSet](/docs/concepts/workloads/controllers/daemonset/)确保加载了正确的配置文件。可以找到一个示例实现[这里](https://git.k8s.io/kubernetes/test/images/apparmor-loader)。 +* 通过在每个节点上运行 Pod 的[DaemonSet](/zh/docs/concepts/workloads/controllers/daemonset/)确保加载了正确的配置文件。可以找到一个示例实现[这里](https://git.k8s.io/kubernetes/test/images/apparmor-loader)。 * 在节点初始化时,使用节点初始化脚本(例如 Salt 、Ansible 等)或镜像。 * 通过将配置文件复制到每个节点并通过 SSH 加载它们,如[示例](#example)。 @@ -432,7 +432,7 @@ must be loaded onto every node. An alternative approach is to add a node label class of profiles) on the node, and use a [node selector](/docs/concepts/configuration/assign-pod-node/) to ensure the Pod is run on a node with the required profile. 
--> -调度程序不知道哪些配置文件加载到哪个节点上,因此必须将全套配置文件加载到每个节点上。另一种方法是为节点上的每个配置文件(或配置文件类)添加节点标签,并使用[节点选择器](/docs/concepts/configuration/assign pod node/)确保 Pod 在具有所需配置文件的节点上运行。 +调度程序不知道哪些配置文件加载到哪个节点上,因此必须将全套配置文件加载到每个节点上。另一种方法是为节点上的每个配置文件(或配置文件类)添加节点标签,并使用[节点选择器](/zh/docs/concepts/configuration/assign pod node/)确保 Pod 在具有所需配置文件的节点上运行。 ### 使用 PodSecurityPolicy 限制配置文件 @@ -525,7 +525,7 @@ logs or through `journalctl`. More information is provided in 想要调试 AppArmor 的问题,您可以检查系统日志,查看具体拒绝了什么。AppArmor 将详细消息记录到 `dmesg` ,错误通常可以在系统日志中或通过 `journalctl` 找到。更多详细信息见[AppArmor 失败](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Failures)。 -## API 参考 +## API 参考 ### Pod 注释 diff --git a/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md index 7150b572e1e0b..8cc92c076ead3 100644 --- a/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md +++ b/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md @@ -9,9 +9,9 @@ content_template: templates/tutorial {{% capture overview %}} -这篇文档基于[使用 ConfigMap 来配置 Containers](/docs/tasks/configure-pod-container/configure-pod-configmap/) 这个任务,提供了一个使用 ConfigMap 来配置 Redis 的真实案例。 +这篇文档基于[使用 ConfigMap 来配置 Containers](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 这个任务,提供了一个使用 ConfigMap 来配置 Redis 的真实案例。 {{% /capture %}} @@ -43,7 +43,7 @@ This page provides a real world example of how to configure Redis using a Config * Understand [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/). --> * 此页面上显示的示例适用于 `kubectl` 1.14和在其以上的版本。 -* 理解[使用ConfigMap来配置Containers](/docs/tasks/configure-pod-container/configure-pod-configmap/)。 +* 理解[使用ConfigMap来配置Containers](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。 {{% /capture %}} @@ -77,8 +77,8 @@ configMapGenerator: EOF ``` - 将 pod 的资源配置添加到 `kustomization.yaml` 文件中: @@ -93,8 +93,8 @@ resources: EOF ``` - 应用整个 kustomization 文件夹以创建 ConfigMap 和 Pod 对象: @@ -102,8 +102,8 @@ Apply the kustomization directory to create both the ConfigMap and Pod objects: kubectl apply -k . ``` - 使用以下命令检查创建的对象 @@ -116,20 +116,20 @@ NAME READY STATUS RESTARTS AGE pod/redis 1/1 Running 0 52s ``` - 在示例中,配置卷挂载在 `/redis-master` 下。 它使用 `path` 将 `redis-config` 密钥添加到名为 `redis.conf` 的文件中。 因此,redis配置的文件路径为 `/redis-master/redis.conf`。 这是镜像将在其中查找 redis master 的配置文件的位置。 - 使用 `kubectl exec` 进入 pod 并运行 `redis-cli` 工具来验证配置已正确应用: @@ -143,8 +143,8 @@ kubectl exec -it redis redis-cli 2) "allkeys-lru" ``` - 删除创建的 pod: ```shell @@ -155,10 +155,10 @@ kubectl delete pod redis {{% capture whatsnext %}} - -* 了解有关 [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/)的更多信息。 +* 了解有关 [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)的更多信息。 {{% /capture %}} diff --git a/content/zh/docs/tutorials/hello-minikube.md b/content/zh/docs/tutorials/hello-minikube.md index c10be6b6f98c2..340ddfcd9eba7 100644 --- a/content/zh/docs/tutorials/hello-minikube.md +++ b/content/zh/docs/tutorials/hello-minikube.md @@ -36,13 +36,13 @@ This tutorial shows you how to run a simple Hello World Node.js app on Kubernetes using [Minikube](/docs/setup/learning-environment/minikube) and Katacoda. Katacoda provides a free, in-browser Kubernetes environment. 
--> -本教程向您展示如何使用 [Minikube](/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。 +本教程向您展示如何使用 [Minikube](/zh/docs/setup/learning-environment/minikube) 和 Katacoda 在 Kubernetes 上运行一个简单的 “Hello World” Node.js 应用程序。Katacoda 提供免费的浏览器内 Kubernetes 环境。 {{< note >}} -如果您已在本地安装 [Minikube](/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 +如果您已在本地安装 [Minikube](/zh/docs/tasks/tools/install-minikube/),也可以按照本教程操作。 {{< /note >}} @@ -127,7 +127,7 @@ recommended way to manage the creation and scaling of Pods. ## 创建 Deployment -Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) 是由一个或多个为了管理和联网而绑定在一起的容器构成的组。本教程中的 Pod 只有一个容器。Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) 检查 Pod 的健康状况,并在 Pod 中的容器终止的情况下重新启动新的容器。Deployment 是管理 Pod 创建和扩展的推荐方法。 +Kubernetes [*Pod*](/zh/docs/concepts/workloads/pods/pod/) 是由一个或多个为了管理和联网而绑定在一起的容器构成的组。本教程中的 Pod 只有一个容器。Kubernetes [*Deployment*](/zh/docs/concepts/workloads/controllers/deployment/) 检查 Pod 的健康状况,并在 Pod 中的容器终止的情况下重新启动新的容器。Deployment 是管理 Pod 创建和扩展的推荐方法。 - {{< note >}}有关 kubectl 命令的更多信息,请参阅 [kubectl 概述](/docs/user-guide/kubectl-overview/)。{{< /note >}} + {{< note >}}有关 kubectl 命令的更多信息,请参阅 [kubectl 概述](/zh/docs/user-guide/kubectl-overview/)。{{< /note >}} - 在支持负载均衡器的云服务提供商上,将提供一个外部 IP 来访问该服务。在 Minikube 上,`LoadBalancer` 使得服务可以通过命令 `minikube service` 访问。 + 在支持负载均衡器的云服务提供商上,将提供一个外部 IP 来访问该服务。在 Minikube 上,`LoadBalancer` 使得服务可以通过命令 `minikube service` 访问。 -* 进一步了解 [Deployment 对象](/docs/concepts/workloads/controllers/deployment/)。 -* 学习更多关于 [部署应用](/docs/tasks/run-application/run-stateless-application-deployment/)。 -* 学习更多关于 [Service 对象](/docs/concepts/services-networking/service/)。 +* 进一步了解 [Deployment 对象](/zh/docs/concepts/workloads/controllers/deployment/)。 +* 学习更多关于 [部署应用](/zh/docs/tasks/run-application/run-stateless-application-deployment/)。 +* 学习更多关于 [Service 对象](/zh/docs/concepts/services-networking/service/)。 {{% /capture %}} diff --git a/content/zh/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/zh/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index a203274a6c5a8..43b0c793319e2 100644 --- a/content/zh/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/zh/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -1,5 +1,5 @@ --- -title: Using Minikube to Create a Cluster +title: 使用 Minikube 创建集群 weight: 10 --- @@ -18,38 +18,38 @@
          -

          Objectives

          +

          目标

            -
          • Learn what a Kubernetes cluster is.
          • -
          • Learn what Minikube is.
          • -
          • Start a Kubernetes cluster using an online terminal.
          • +
          • 了解 Kubernetes 集群。
          • +
          • 了解 Minikube 。
          • +
          • 使用在线终端开启一个 Kubernetes 集群。
          -

          Kubernetes Clusters

          +

          Kubernetes 集群

          - Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way. Kubernetes is an open-source platform and is production-ready. + Kubernetes 协调一个高可用计算机集群,每个计算机作为独立单元互相连接工作。 Kubernetes 中的抽象允许您将容器化的应用部署到群集,而无需将它们绑定到某个特定的独立计算机。为了使用这种新的部署模型,应用需要以将应用与单个主机分离的方式打包:它们需要被容器化。与过去的那种应用直接以包的方式深度与主机集成的部署模型相比,容器化应用更灵活、更可用。 Kubernetes 以更高效的方式跨群集自动分发和调度应用容器。 Kubernetes 是一个开源平台,并且可应用于生产环境。

          -

          A Kubernetes cluster consists of two types of resources: +

          一个 Kubernetes 集群包含两种类型的资源:

            -
          • The Master coordinates the cluster
          • -
          • Nodes are the workers that run applications
          • +
          • Master 调度整个集群
          • +
          • Nodes 负责运行应用

          -

          Summary:

          +

          总结:

            -
          • Kubernetes cluster
          • -
          • Minikube
          • +
          • Kubernetes 集群
          • +
          • Minikube

          - Kubernetes is a production-grade, open-source platform that orchestrates the placement (scheduling) and execution of application containers within and across computer clusters. + Kubernetes 是一个生产级别的开源平台,可协调在计算机集群内和跨计算机集群的应用容器的部署(调度)和执行.

          @@ -58,7 +58,7 @@

          Summary:

          -

          Cluster Diagram

          +

          集群图

          @@ -71,24 +71,24 @@

          Cluster Diagram

          -

          The Master is responsible for managing the cluster. The master coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.

          -

          A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node should also have tools for handling container operations, such as Docker or rkt. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.

          +

          Master 负责管理整个集群。 Master 协调集群中的所有活动,例如调度应用、维护应用的所需状态、应用扩容以及推出新的更新。

          +

          Node 是一个虚拟机或者物理机,它在 Kubernetes 集群中充当工作机器的角色。每个 Node 都有 Kubelet,它管理 Node 而且是 Node 与 Master 通信的代理。Node 还应该具有用于处理容器操作的工具,例如 Docker 或 rkt。处理生产级流量的 Kubernetes 集群至少应具有三个 Node。

          -

          Masters manage the cluster and the nodes are used to host the running applications.

          +

          Master 管理集群,Node 用于托管正在运行的应用。

          -

          When you deploy applications on Kubernetes, you tell the master to start the application containers. The master schedules the containers to run on the cluster's nodes. The nodes communicate with the master using the Kubernetes API, which the master exposes. End users can also use the Kubernetes API directly to interact with the cluster.

          +

          在 Kubernetes 上部署应用时,您告诉 Master 启动应用容器。Master 就编排容器在集群的 Node 上运行。Node 使用 Master 暴露的 Kubernetes API 与 Master 通信。终端用户也可以使用 Kubernetes API 与集群交互。

          -

          A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.

          +

          Kubernetes 既可以部署在物理机上,也可以部署在虚拟机上。您可以使用 Minikube 开始部署 Kubernetes 集群。Minikube 是一种轻量级的 Kubernetes 实现,可在本地计算机上创建 VM 并部署仅包含一个节点的简单集群。Minikube 可用于 Linux、macOS 和 Windows 系统。Minikube CLI 提供了用于引导集群工作的多种操作,包括启动、停止、查看状态和删除。在本教程里,您可以使用预装有 Minikube 的在线终端进行体验。

          -

          Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!

          +

          既然您已经知道 Kubernetes 是什么,让我们转到在线教程并启动我们的第一个 Kubernetes 集群!

          @@ -96,7 +96,7 @@

          Cluster Diagram

          diff --git a/content/zh/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/zh/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 3f4acbf331922..d846c9b40def2 100644 --- a/content/zh/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/zh/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -94,7 +94,7 @@

          部署你在 Kubernetes 上的第一个应用程序<

          您可以使用 Kubernetes 命令行界面 Kubectl 创建和管理 Deployment。Kubectl 使用 Kubernetes API 与集群进行交互。在本单元中,您将学习创建在 Kubernetes 集群上运行应用程序的 Deployment 所需的最常见的 Kubectl 命令。

          -

          创建 Deployment 时,您需要指定应用程序的容器映像以及要运行的副本数。您可以稍后通过更新 Deployment 来更改该信息; 模块 56 讨论了如何扩展和更新 Deployments。

          +

          创建 Deployment 时,您需要指定应用程序的容器镜像以及要运行的副本数。您可以稍后通过更新 Deployment 来更改该信息;模块 5 和 6 讨论了如何扩展和更新 Deployments。
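
          下面给出一个最小的 Deployment 清单草例,用来说明“指定容器镜像与副本数”具体对应哪些字段(仅作示意;名称 `hello-app`、镜像 `registry.example.com/hello:1.0` 和副本数 3 均为假设值):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                         # 假设的 Deployment 名称
spec:
  replicas: 3                             # 要运行的副本数
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello
        image: registry.example.com/hello:1.0   # 假设的应用容器镜像
        ports:
        - containerPort: 8080
```

          之后即可像正文所说的那样,通过修改清单中的 `image` 或 `replicas` 字段并更新 Deployment 来更改这些信息。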

          @@ -115,7 +115,7 @@

          部署你在 Kubernetes 上的第一个应用程序<

          Now that you know what Deployments are, let's go to the online tutorial and deploy our first app!

          -->

          对于我们的第一次部署,我们将使用打包在 Docker 容器中的 Node.js 应用程序。 要创建 Node.js 应用程序并部署 Docker 容器,请按照 - 你好 Minikube 教程.

          + 你好 Minikube 教程.

          现在您已经了解了 Deployment 的内容,让我们转到在线教程并部署我们的第一个应用程序!

          @@ -124,7 +124,7 @@

          部署你在 Kubernetes 上的第一个应用程序< diff --git a/content/zh/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/zh/docs/tutorials/kubernetes-basics/explore/explore-intro.html index cedde295bbd84..c1d09f790d626 100644 --- a/content/zh/docs/tutorials/kubernetes-basics/explore/explore-intro.html +++ b/content/zh/docs/tutorials/kubernetes-basics/explore/explore-intro.html @@ -43,7 +43,7 @@

          Kubernetes Pods

          -

          在模块 2创建 Deployment 时, Kubernetes 添加 a Pod 托管你的应用实例。Pod 是 Kubernetes 抽象出来的,表示一组一个或多个应用程序容器(如 Docker 或 rkt ),以及这些容器的一些共享资源。这些资源包括:

          +

          在模块 2 中创建 Deployment 时,Kubernetes 添加了一个 Pod 来托管你的应用实例。Pod 是 Kubernetes 抽象出来的,表示一组一个或多个应用程序容器(如 Docker 或 rkt),以及这些容器的一些共享资源。这些资源包括:

          -

          用户希望应用程序始终可用,并且开发人员应该每天多次 developers 新版本的应用程序。在 Kubernetes 中,这些是通过滚动更新(Rolling Updates)完成的。 滚动更新 允许通过使用新的实例逐步更新 Pod 实例,零停机进行 Deployment 更新。新的 Pod 将在具有可用资源的节点上进行调度。

          +

          用户希望应用程序始终可用,而开发人员则需要每天多次部署它们的新版本。在 Kubernetes 中,这些是通过滚动更新(Rolling Updates)完成的。滚动更新允许通过使用新的实例逐步更新 Pod 实例,零停机进行 Deployment 更新。新的 Pod 将在具有可用资源的节点上进行调度。
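
          下面是一个滚动更新策略的示意片段,展示上文所述“逐步更新、零停机”在 Deployment 中对应的配置(仅作说明;Deployment 名称、镜像以及 `maxSurge`、`maxUnavailable` 的取值均为假设):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo                      # 假设的 Deployment 名称
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%                       # 更新期间最多可超出期望副本数的比例
      maxUnavailable: 25%                 # 更新期间最多允许不可用的副本比例
  selector:
    matchLabels:
      app: rolling-demo
  template:
    metadata:
      labels:
        app: rolling-demo
    spec:
      containers:
      - name: app
        image: registry.example.com/app:2.0   # 假设的新版本镜像
```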

          -本教程介绍如何了使用 [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) 来管理应用。演示了如何创建、删除、扩容/缩容和更新 StatefulSets 的 Pods。 +本教程介绍如何了使用 [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) 来管理应用。演示了如何创建、删除、扩容/缩容和更新 StatefulSets 的 Pods。 {{% /capture %}} {{% capture prerequisites %}} - 在开始本教程之前,你应该熟悉以下 Kubernetes 的概念: -* [Pods](/docs/user-guide/pods/single-container/) -* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/) -* [Headless Services](/docs/concepts/services-networking/service/#headless-services) -* [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) +* [Pods](/zh/docs/user-guide/pods/single-container/) +* [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/) +* [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services) +* [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/) -* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) -* [kubectl CLI](/docs/user-guide/kubectl/) +* [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/) +* [kubectl CLI](/zh/docs/user-guide/kubectl/) - @@ -53,11 +53,11 @@ tutorial. {{% capture objectives %}} - 下载上面的例子并保存为文件 `web.yaml`。 -你需要使用两个终端窗口。在第一个终端中,使用 [`kubectl get`](/docs/user-guide/kubectl/{{< param "version" >}}/#get) 来查看 StatefulSet 的 Pods 的创建情况。 +你需要使用两个终端窗口。在第一个终端中,使用 [`kubectl get`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 来查看 StatefulSet 的 Pods 的创建情况。 ```shell kubectl get pods -w -l app=nginx ``` -在另一个终端中,使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply)来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。 +在另一个终端中,使用 [`kubectl apply`](/zh/docs/reference/generated/kubectl/kubectl-commands/#apply)来创建定义在 `web.yaml` 中的 Headless Service 和 StatefulSet。 ```shell kubectl apply -f web.yaml @@ -123,8 +123,8 @@ statefulset.apps/web created ``` @@ -144,9 +144,9 @@ web 2 1 20s ### Ordered Pod Creation -For a StatefulSet with N replicas, when Pods are being deployed, they are -created sequentially, in order from {0..N-1}. Examine the output of the -`kubectl get` command in the first terminal. Eventually, the output will +For a StatefulSet with N replicas, when Pods are being deployed, they are +created sequentially, in order from {0..N-1}. Examine the output of the +`kubectl get` command in the first terminal. Eventually, the output will look like the example below. 
--> @@ -165,14 +165,14 @@ web-0 1/1 Running 0 19s web-1 0/1 Pending 0 0s web-1 0/1 Pending 0 0s web-1 0/1 ContainerCreating 0 0s -web-1 1/1 Running 0 18s +web-1 1/1 Running 0 18s ``` -请注意在 `web-0` Pod 处于 [Running和Ready](/docs/user-guide/pod-states) 状态后 `web-1` Pod 才会被启动。 +请注意在 `web-0` Pod 处于 [Running和Ready](/zh/docs/user-guide/pod-states) 状态后 `web-1` Pod 才会被启动。 -如同 [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) 概念中所提到的,StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标志。这个标志基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 的名称的形式为`-`。`web`StatefulSet 拥有两个副本,所以它创建了两个 Pod:`web-0`和`web-1`。 +如同 [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) 概念中所提到的,StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标志。这个标志基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 的名称的形式为`-`。`web`StatefulSet 拥有两个副本,所以它创建了两个 Pod:`web-0`和`web-1`。 ### 使用稳定的网络身份标识 -每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用[`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec)在每个 Pod 中执行`hostname`。 +每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用[`kubectl exec`](/zh/docs/reference/generated/kubectl/kubectl-commands/#exec)在每个 Pod 中执行`hostname`。 ```shell for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done @@ -231,17 +231,17 @@ web-0 web-1 ``` - -使用 [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) 运行一个提供 `nslookup` 命令的容器,该命令来自于 `dnsutils` 包。通过对 Pod 的主机名执行 `nslookup`,你可以检查他们在集群内部的 DNS 地址。 +使用 [`kubectl run`](/zh/docs/reference/generated/kubectl/kubectl-commands/#run) 运行一个提供 `nslookup` 命令的容器,该命令来自于 `dnsutils` 包。通过对 Pod 的主机名执行 `nslookup`,你可以检查他们在集群内部的 DNS 地址。 ```shell -kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm +kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm nslookup web-0.nginx Server: 10.0.0.10 Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local @@ -258,11 +258,11 @@ Address 1: 10.244.2.6 ``` headless service 的 CNAME 指向 SRV 记录(记录每个 Running 和 Ready 状态的 Pod)。SRV 记录指向一个包含 Pod IP 地址的记录表项。 @@ -274,11 +274,11 @@ kubectl get pod -w -l app=nginx ``` -在另一个终端中使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet 中所有的 Pod。 +在另一个终端中使用 [`kubectl delete`](/zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet 中所有的 Pod。 ```shell kubectl delete pod -l app=nginx @@ -287,7 +287,7 @@ pod "web-1" deleted ``` @@ -306,7 +306,7 @@ web-1 1/1 Running 0 34s ``` @@ -317,7 +317,7 @@ for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done web-0 web-1 -kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh +kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh nslookup web-0.nginx Server: 10.0.0.10 Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local @@ -333,23 +333,23 @@ Name: web-1.nginx Address 1: 10.244.2.8 ``` @@ -381,20 +381,20 @@ www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO ``` -StatefulSet 控制器创建了两个 PersistentVolumeClaims,绑定到两个 [PersistentVolumes](/docs/concepts/storage/volumes/)。由于本教程使用的集群配置为动态提供 PersistentVolume,所有的 PersistentVolume 都是自动创建和绑定的。 +StatefulSet 控制器创建了两个 PersistentVolumeClaims,绑定到两个 [PersistentVolumes](/zh/docs/concepts/storage/volumes/)。由于本教程使用的集群配置为动态提供 PersistentVolume,所有的 PersistentVolume 都是自动创建和绑定的。 NGINX web 服务器默认会加载位于 `/usr/share/nginx/html/index.html` 的 index 文件。StatefulSets `spec` 中的 `volumeMounts` 字段保证了 `/usr/share/nginx/html` 文件夹由一个 PersistentVolume 支持。 @@ -451,7 +451,7 @@ pod "web-0" deleted pod "web-1" deleted ``` @@ -482,14 +482,14 @@ web-1 ``` 在另一个终端窗口使用 `kubectl scale` 扩展副本数为 5。 @@ -527,7 +527,7 @@ 
kubectl scale sts web --replicas=5 statefulset.apps/web scaled ``` @@ -556,8 +556,8 @@ web-4 1/1 Running 0 19s 在另一个终端使用 `kubectl patch` 将 StatefulSet 缩容回三个副本。 @@ -614,11 +614,11 @@ web-3 1/1 Terminating 0 42s ### 顺序终止 Pod @@ -641,16 +641,16 @@ www-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO ``` @@ -740,15 +740,15 @@ web-0 1/1 Running 0 10s ``` @@ -1026,22 +1026,22 @@ k8s.gcr.io/nginx-slim:0.7 ``` -使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet。请确保提供了 `--cascade=false` 参数给命令。这个参数告诉 Kubernetes 只删除 StatefulSet 而不要删除它的任何 Pod。 +使用 [`kubectl delete`](/zh/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 StatefulSet。请确保提供了 `--cascade=false` 参数给命令。这个参数告诉 Kubernetes 只删除 StatefulSet 而不要删除它的任何 Pod。 ```shell kubectl delete statefulset web --cascade=false @@ -1141,7 +1141,7 @@ kubectl get pods -w -l app=nginx 在另一个终端里重新创建 StatefulSet。请注意,除非你删除了 `nginx` Service (你不应该这样做),你将会看到一个错误,提示 Service 已经存在。 @@ -1154,7 +1154,7 @@ service/nginx unchanged ``` @@ -1181,14 +1181,14 @@ web-2 0/1 Terminating 0 3m ``` @@ -1204,10 +1204,10 @@ web-1 ``` @@ -1260,12 +1260,12 @@ web-1 0/1 Terminating 0 29m ``` @@ -1294,7 +1294,7 @@ statefulset.apps/web created ``` @@ -1307,8 +1307,8 @@ web-1 ``` @@ -1371,7 +1371,7 @@ Pod. @@ -1453,7 +1453,7 @@ web-3 1/1 Running 0 26s ``` @@ -1522,8 +1522,8 @@ kubectl delete svc nginx diff --git a/content/zh/docs/tutorials/stateful-application/cassandra.md b/content/zh/docs/tutorials/stateful-application/cassandra.md index 062171966f52d..98d0d6ab44016 100644 --- a/content/zh/docs/tutorials/stateful-application/cassandra.md +++ b/content/zh/docs/tutorials/stateful-application/cassandra.md @@ -28,18 +28,18 @@ title: "Example: Deploying Cassandra with Stateful Sets" 本示例也使用了Kubernetes的一些核心组件: -- [_Pods_](/docs/user-guide/pods) -- [ _Services_](/docs/user-guide/services) -- [_Replication Controllers_](/docs/user-guide/replication-controller) -- [_Stateful Sets_](/docs/concepts/workloads/controllers/statefulset/) -- [_Daemon Sets_](/docs/admin/daemons) +- [_Pods_](/zh/docs/user-guide/pods) +- [ _Services_](/zh/docs/user-guide/services) +- [_Replication Controllers_](/zh/docs/user-guide/replication-controller) +- [_Stateful Sets_](/zh/docs/concepts/workloads/controllers/statefulset/) +- [_Daemon Sets_](/zh/docs/admin/daemons) ## 准备工作 -本示例假设你已经安装运行了一个 Kubernetes集群(版本 >=1.2),并且还在某个路径下安装了 [`kubectl`](/docs/tasks/tools/install-kubectl/) 命令行工具。请查看 [getting started guides](/docs/getting-started-guides/) 获取关于你的平台的安装说明。 +本示例假设你已经安装运行了一个 Kubernetes集群(版本 >=1.2),并且还在某个路径下安装了 [`kubectl`](/zh/docs/tasks/tools/install-kubectl/) 命令行工具。请查看 [getting started guides](/zh/docs/getting-started-guides/) 获取关于你的平台的安装说明。 本示例还需要一些代码和配置文件。为了避免手动输入,你可以 `git clone` Kubernetes 源到你本地。 @@ -133,7 +133,7 @@ kubectl delete daemonset cassandra ## 步骤 1:创建 Cassandra Headless Service -Kubernetes _[Service](/docs/user-guide/services)_ 描述一组执行同样任务的 [_Pod_](/docs/user-guide/pods)。在 Kubernetes 中,一个应用的原子调度单位是一个 Pod:一个或多个_必须_调度到相同主机上的容器。 +Kubernetes _[Service](/zh/docs/user-guide/services)_ 描述一组执行同样任务的 [_Pod_](/zh/docs/user-guide/pods)。在 Kubernetes 中,一个应用的原子调度单位是一个 Pod:一个或多个_必须_调度到相同主机上的容器。 这个 Service 用于在 Kubernetes 集群内部进行 Cassandra 客户端和 Cassandra Pod 之间的 DNS 查找。 @@ -354,7 +354,7 @@ $ kubectl exec cassandra-0 -- cqlsh -e 'desc keyspaces' system_traces system_schema system_auth system system_distributed ``` -你需要使用 `kubectl edit` 来增加或减小 Cassandra StatefulSet 的大小。你可以在[文档](/docs/user-guide/kubectl/kubectl_edit) 中找到更多关于 `edit` 命令的信息。 +你需要使用 `kubectl edit` 来增加或减小 Cassandra 
StatefulSet 的大小。你可以在[文档](/zh/docs/user-guide/kubectl/kubectl_edit) 中找到更多关于 `edit` 命令的信息。 使用以下命令编辑 StatefulSet。 @@ -429,7 +429,7 @@ $ grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodS ## 步骤 5:使用 Replication Controller 创建 Cassandra 节点 pod -Kubernetes _[Replication Controller](/docs/user-guide/replication-controller)_ 负责复制一个完全相同的 pod 集合。像 Service 一样,它具有一个 selector query,用来识别它的集合成员。和 Service 不一样的是,它还具有一个期望的副本数,并且会通过创建或删除 Pod 来保证 Pod 的数量满足它期望的状态。 +Kubernetes _[Replication Controller](/zh/docs/user-guide/replication-controller)_ 负责复制一个完全相同的 pod 集合。像 Service 一样,它具有一个 selector query,用来识别它的集合成员。和 Service 不一样的是,它还具有一个期望的副本数,并且会通过创建或删除 Pod 来保证 Pod 的数量满足它期望的状态。 和我们刚才定义的 Service 一起,Replication Controller 能够让我们轻松的构建一个复制的、可扩展的 Cassandra 集群。 @@ -639,7 +639,7 @@ $ kubectl delete rc cassandra ## 步骤 8:使用 DaemonSet 替换 Replication Controller -在 Kubernetes中,[_DaemonSet_](/docs/admin/daemons) 能够将 pod 一对一的分布到 Kubernetes 节点上。和 _ReplicationController_ 相同的是它也有一个用于识别它的集合成员的 selector query。但和 _ReplicationController_ 不同的是,它拥有一个节点 selector,用于限制基于模板的 pod 可以调度的节点。并且 pod 的复制不是基于一个设置的数量,而是为每一个节点分配一个 pod。 +在 Kubernetes中,[_DaemonSet_](/zh/docs/admin/daemons) 能够将 pod 一对一的分布到 Kubernetes 节点上。和 _ReplicationController_ 相同的是它也有一个用于识别它的集合成员的 selector query。但和 _ReplicationController_ 不同的是,它拥有一个节点 selector,用于限制基于模板的 pod 可以调度的节点。并且 pod 的复制不是基于一个设置的数量,而是为每一个节点分配一个 pod。 示范用例:当部署到云平台时,预期情况是实例是短暂的并且随时可能终止。Cassandra 被搭建成为在各个节点间复制数据以便于实现数据冗余。这样的话,即使一个实例终止了,存储在它上面的数据却没有,并且集群会通过重新复制数据到其它运行节点来作为响应。 @@ -802,6 +802,6 @@ $ kubectl delete daemonset cassandra 查看本示例的 [image](https://github.com/kubernetes/examples/tree/master/cassandra/image) 目录,了解如何构建容器的 docker 镜像及其内容。 -你可能还注意到我们设置了一些 Cassandra 参数(`MAX_HEAP_SIZE`和`HEAP_NEWSIZE`),并且增加了关于 [namespace](/docs/user-guide/namespaces) 的信息。我们还告诉 Kubernetes 容器暴露了 `CQL` 和 `Thrift` API 端口。最后,我们告诉集群管理器我们需要 0.1 cpu(0.1 核)。 +你可能还注意到我们设置了一些 Cassandra 参数(`MAX_HEAP_SIZE`和`HEAP_NEWSIZE`),并且增加了关于 [namespace](/zh/docs/user-guide/namespaces) 的信息。我们还告诉 Kubernetes 容器暴露了 `CQL` 和 `Thrift` API 端口。最后,我们告诉集群管理器我们需要 0.1 cpu(0.1 核)。 [!Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cassandra/README.md?pixel)]() diff --git a/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index ed0e8f8f7d388..d05f07d135b52 100644 --- a/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -12,23 +12,23 @@ card: {{% capture overview %}} - 本示例描述了如何通过 Minikube 在 Kubernetes 上安装 WordPress 和 MySQL。这两个应用都使用 PersistentVolumes 和 PersistentVolumeClaims 保存数据。 -[PersistentVolume](/docs/concepts/storage/persistent-volumes/)(PV)是一块集群里由管理员手动提供,或 kubernetes 通过 [StorageClass](/docs/concepts/storage/storage-classes) 动态创建的存储。 -[PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)(PVC)是一个满足对 PV 存储需要的请求。PersistentVolumes 和 PersistentVolumeClaims 是独立于 Pod 生命周期而在 Pod 重启,重新调度甚至删除过程中保存数据。 +[PersistentVolume](/zh/docs/concepts/storage/persistent-volumes/)(PV)是一块集群里由管理员手动提供,或 kubernetes 通过 [StorageClass](/zh/docs/concepts/storage/storage-classes) 动态创建的存储。 +[PersistentVolumeClaim](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)(PVC)是一个满足对 PV 存储需要的请求。PersistentVolumes 和 PersistentVolumeClaims 是独立于 Pod 生命周期而在 Pod 重启,重新调度甚至删除过程中保存数据。 {{< warning >}} deployment 在生产场景中并不适合,它使用单实例 WordPress 和 MySQL Pods。考虑使用 [WordPress Helm 
Chart](https://github.com/kubernetes/charts/tree/master/stable/wordpress) 在生产场景中部署 WordPress。 @@ -53,7 +53,7 @@ deployment 在生产场景中并不适合,它使用单实例 WordPress 和 MyS * MySQL resource configs * WordPress resource configs * Apply the kustomization directory by `kubectl apply -k ./` -* Clean up +* Clean up --> * 创建 PersistentVolumeClaims 和 PersistentVolumes @@ -77,7 +77,7 @@ Download the following configuration files: 1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml) -1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) +1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) --> 此例在`kubectl` 1.14 或者更高版本有效。 @@ -86,14 +86,14 @@ Download the following configuration files: 1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml) -2. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) - +2. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml) + {{% /capture %}} {{% capture lessoncontent %}} ## 创建 PersistentVolumeClaims 和 PersistentVolumes @@ -113,15 +113,15 @@ MySQL 和 Wordpress 都需要一个 PersistentVolume 来存储数据。他们的 {{< warning >}} 在本地群集中,默认的 StorageClass 使用`hostPath`供应器。 `hostPath`卷仅适用于开发和测试。使用 `hostPath` 卷,您的数据位于 Pod 调度到的节点上的`/tmp`中,并且不会在节点之间移动。如果 Pod 死亡并被调度到群集中的另一个节点,或者该节点重新启动,则数据将丢失。 {{< /warning >}} {{< note >}} - 如果要建立需要使用`hostPath`设置程序的集群,则必须在 controller-manager 组件中设置`--enable-hostpath-provisioner`标志。 @@ -133,14 +133,14 @@ If you are bringing up a cluster that needs to use the `hostPath` provisioner, t 如果你已经有运行在 Google Kubernetes Engine 的集群,请参考 [this guide](https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk)。 {{< /note >}} - ## 创建 kustomization.yaml - ### 创建 Secret 生成器 @@ -148,10 +148,10 @@ If you are bringing up a cluster that needs to use the `hostPath` provisioner, t -A [Secret](/docs/concepts/configuration/secret/) 是存储诸如密码或密钥之类的敏感数据的对象。从 1.14 开始,`kubectl`支持使用 kustomization 文件管理 Kubernetes 对象。您可以通过`kustomization.yaml`中的生成器创建一个 Secret。 +A [Secret](/zh/docs/concepts/configuration/secret/) 是存储诸如密码或密钥之类的敏感数据的对象。从 1.14 开始,`kubectl`支持使用 kustomization 文件管理 Kubernetes 对象。您可以通过`kustomization.yaml`中的生成器创建一个 Secret。 通过以下命令在`kustomization.yaml`中添加一个 Secret 生成器。您需要用您要使用的密码替换`YOUR_PASSWORD`。 @@ -164,13 +164,13 @@ secretGenerator: EOF ``` - ## 补充 MySQL 和 WordPress 的资源配置 - @@ -178,11 +178,11 @@ The following manifest describes a single-instance MySQL Deployment. The MySQL c {{< codenew file="application/wordpress/mysql-deployment.yaml" >}} - 以下 manifest 文件描述了单实例 WordPress 部署。WordPress 容器将网站数据文件位于`/var/www/html`的 PersistentVolume。`WORDPRESS_DB_HOST`环境变量集上面定义的 MySQL Service 的名称,WordPress 将通过 Service 访问数据库。`WORDPRESS_DB_PASSWORD`环境变量设置从 Secret kustomize 生成的数据库密码。 @@ -234,8 +234,8 @@ the name of the MySQL Service defined above, and WordPress will access the datab ``` - ## 应用和验证 @@ -348,7 +348,7 @@ kubectl apply -k ./ ``` 响应应如下所示: - + ```shell NAME TYPE DATA AGE mysql-pass-c57bb4t7mf Opaque 1 9s @@ -419,7 +419,7 @@ kubectl apply -k ./ ``` 6. 复制 IP 地址,然后将页面加载到浏览器中来查看您的站点。 - + 您应该看到类似于以下屏幕截图的 WordPress 设置页面。 ![wordpress-init](https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/WordPress.png) @@ -427,8 +427,8 @@ kubectl apply -k ./ {{% /capture %}} {{< warning >}} - 不要在此页面上保留 WordPress 安装。如果其他用户找到了它,他们可以在您的实例上建立一个网站并使用它来提供恶意内容。

          通过创建用户名和密码来安装 WordPress 或删除您的实例。 @@ -453,24 +453,16 @@ Do not leave your WordPress installation on this page. If another user finds it, {{% capture whatsnext %}} + -1. 运行以下命令以删除您的 Secret,Deployments,Services 和 PersistentVolumeClaims: - - ```shell - kubectl delete -k ./ - ``` - -{{% /capture %}} - -{{% capture whatsnext %}} -* 了解更多关于 [Introspection and Debugging](/docs/tasks/debug-application-cluster/debug-application-introspection/) -* 了解更多关于 [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) -* 了解更多关于 [Port Forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) -* 了解如何 [Get a Shell to a Container](/docs/tasks/debug-application-cluster/get-shell-running-container/) +* 了解更多关于 [Introspection and Debugging](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/) +* 了解更多关于 [Jobs](/zh/docs/concepts/workloads/controllers/jobs-run-to-completion/) +* 了解更多关于 [Port Forwarding](/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) +* 了解如何 [Get a Shell to a Container](/zh/docs/tasks/debug-application-cluster/get-shell-running-container/) {{% /capture %}} diff --git a/content/zh/docs/tutorials/stateful-application/zookeeper.md b/content/zh/docs/tutorials/stateful-application/zookeeper.md index 8e76d2aa087c5..2c5189ac4aa4b 100644 --- a/content/zh/docs/tutorials/stateful-application/zookeeper.md +++ b/content/zh/docs/tutorials/stateful-application/zookeeper.md @@ -14,27 +14,27 @@ content_template: templates/tutorial {{% capture overview %}} -本教程展示了在 Kubernetes 上使用 [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget) 和 [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) 特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。 +本教程展示了在 Kubernetes 上使用 [PodDisruptionBudgets](/zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) 和 [PodAntiAffinity](/zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) 特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。 {{% /capture %}} {{% capture prerequisites %}} 在开始本教程前,你应该熟悉以下 Kubernetes 概念。 -* [Pods](/docs/user-guide/pods/single-container/) -* [Cluster DNS](/docs/concepts/services-networking/dns-pod-service/) -* [Headless Services](/docs/concepts/services-networking/service/#headless-services) -* [PersistentVolumes](/docs/concepts/storage/volumes/) +* [Pods](/zh/docs/user-guide/pods/single-container/) +* [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/) +* [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services) +* [PersistentVolumes](/zh/docs/concepts/storage/volumes/) * [PersistentVolume Provisioning](http://releases.k8s.io/{{< param "githubbranch" >}}/examples/persistent-volume-provisioning/) -* [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) -* [StatefulSets](/docs/concepts/abstractions/controllers/statefulsets/) -* [PodDisruptionBudgets](/docs/admin/disruptions/#specifying-a-poddisruptionbudget) -* [PodAntiAffinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) -* [kubectl CLI](/docs/user-guide/kubectl) +* [ConfigMaps](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) +* [StatefulSets](/zh/docs/concepts/abstractions/controllers/statefulsets/) +* [PodDisruptionBudgets](/zh/docs/admin/disruptions/#specifying-a-poddisruptionbudget) +* 
[PodAntiAffinity](/zh/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature) +* [kubectl CLI](/zh/docs/user-guide/kubectl) -你需要一个至少包含四个节点的集群,每个节点至少 2 CPUs 和 4 GiB 内存。在本教程中你将会 cordon 和 drain 集群的节点。**这意味着集群节点上所有的 Pods 将会被终止并移除。这些节点也会暂时变为不可调度。**在本教程中你应该使用一个独占的集群,或者保证你造成的干扰不会影响其它租户。 +你需要一个至少包含四个节点的集群,每个节点至少 2 CPUs 和 4 GiB 内存。在本教程中你将会 cordon 和 drain 集群的节点。**这意味着集群节点上所有的 Pods 将会被终止并移除**。**这些节点也会暂时变为不可调度**。在本教程中你应该使用一个独占的集群,或者保证你造成的干扰不会影响其它租户。 本教程假设你的集群配置为动态的提供 PersistentVolumes。如果你的集群没有配置成这样,在开始本教程前,你需要手动准备三个 20 GiB 的卷。 @@ -69,14 +69,14 @@ ZooKeeper 在内存中保存它们的整个状态机,但是每个改变都被 下面的清单包含一个 -[Headless Service](/docs/concepts/services-networking/service/#headless-services), -一个 [Service](/docs/concepts/services-networking/service/), -一个 [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget), -和一个 [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)。 +[Headless Service](/zh/docs/concepts/services-networking/service/#headless-services), +一个 [Service](/zh/docs/concepts/services-networking/service/), +一个 [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions//#specifying-a-poddisruptionbudget), +和一个 [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/)。 {{< codenew file="application/zookeeper/zookeeper.yaml" >}} -打开一个命令行终端,使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) +打开一个命令行终端,使用 [`kubectl apply`](/zh/docs/reference/generated/kubectl/kubectl-commands/#apply) 创建这个清单。 ```shell @@ -92,7 +92,7 @@ poddisruptionbudget.policy/zk-pdb created statefulset.apps/zk created ``` -使用 [`kubectl get`](/docs/user-guide/kubectl/{{< param "version" >}}/#get) 查看 StatefulSet 控制器创建的 Pods。 +使用 [`kubectl get`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#get) 查看 StatefulSet 控制器创建的 Pods。 ```shell kubectl get pods -w -l app=zk @@ -130,7 +130,7 @@ StatefulSet 控制器创建了3个 Pods,每个 Pod 包含一个 [ZooKeeper 3.4 由于在匿名网络中没有用于选举 leader 的终止算法,Zab 要求显式的进行成员关系配置,以执行 leader 选举。Ensemble 中的每个服务都需要具有一个独一无二的标识符,所有的服务均需要知道标识符的全集,并且每个标志都需要和一个网络地址相关联。 -使用 [`kubectl exec`](/docs/user-guide/kubectl/{{< param "version" >}}/#exec) 获取 `zk` StatefulSet 中 Pods 的主机名。 +使用 [`kubectl exec`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#exec) 获取 `zk` StatefulSet 中 Pods 的主机名。 ```shell for i in 0 1 2; do kubectl exec zk-$i -- hostname; done @@ -184,7 +184,7 @@ zk-2.zk-headless.default.svc.cluster.local ``` -[Kubernetes DNS](/docs/concepts/services-networking/dns-pod-service/) 中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址更新,但 A 记录的名称不会改变。 +[Kubernetes DNS](/zh/docs/concepts/services-networking/dns-pod-service/) 中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址更新,但 A 记录的名称不会改变。 ZooKeeper 在一个名为 `zoo.cfg` 的文件中保存它的应用配置。使用 `kubectl exec` 在 `zk-0` Pod 中查看 `zoo.cfg` 文件的内容。 @@ -320,7 +320,7 @@ numChildren = 0 如同在 [ZooKeeper 基础](#zookeeper-basics) 一节所提到的,ZooKeeper 提交所有的条目到一个持久 WAL,并周期性的将内存快照写入存储介质。对于使用一致性协议实现一个复制状态机的应用来说,使用 WALs 提供持久化是一种常用的技术,对于普通的存储应用也是如此。 -使用 [`kubectl delete`](/docs/user-guide/kubectl/{{< param "version" >}}/#delete) 删除 `zk` StatefulSet。 +使用 [`kubectl delete`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#delete) 删除 `zk` StatefulSet。 ```shell kubectl delete statefulset zk @@ -641,7 +641,7 @@ log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %- 这是在容器里安全记录日志的最简单的方法。由于应用的日志被写入标准输出,Kubernetes 将会为你处理日志轮转。Kubernetes 还实现了一个智能保存策略,保证写入标准输出和标准错误流的应用日志不会耗尽本地存储媒介。 -使用 [`kubectl logs`](/docs/user-guide/kubectl/{{< param "version" 
>}}/#logs) 从一个 Pod 中取回最后几行日志。 +使用 [`kubectl logs`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#logs) 从一个 Pod 中取回最后几行日志。 ```shell kubectl logs zk-0 --tail 20 @@ -679,7 +679,7 @@ kubectl logs zk-0 --tail 20 ### 配置非特权用户 -在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。如果你的组织要求应用以非特权用户运行,你可以使用 [SecurityContext](/docs/tasks/configure-pod-container/security-context/) 控制运行容器入口点的用户。 +在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。如果你的组织要求应用以非特权用户运行,你可以使用 [SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/) 控制运行容器入口点的用户。 `zk` StatefulSet 的 Pod 的 `template` 包含了一个 SecurityContext。 @@ -736,7 +736,7 @@ drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data ### 处理进程故障 -[Restart Policies](/docs/user-guide/pod-states/#restartpolicy) 控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,这也是默认值。你应该**绝不**覆盖 stateful 应用的默认策略。 +[Restart Policies](/zh/docs/user-guide/pod-states/#restartpolicy) 控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,这也是默认值。你应该**绝不**覆盖 stateful 应用的默认策略。 检查 `zk-0` Pod 中运行的 ZooKeeper 服务的进程树。 @@ -947,7 +947,7 @@ kubectl get nodes ``` -使用 [`kubectl cordon`](/docs/user-guide/kubectl/{{< param "version" >}}/#cordon) cordon 你的集群中除4个节点以外的所有节点。 +使用 [`kubectl cordon`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#cordon) cordon 你的集群中除4个节点以外的所有节点。 ```shell kubectl cordon < node name > @@ -987,7 +987,7 @@ kubernetes-minion-group-i4c4 ``` -使用 [`kubectl drain`](/docs/user-guide/kubectl/{{< param "version" >}}/#drain) 来 cordon 和 drain `zk-0` Pod 调度的节点。 +使用 [`kubectl drain`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#drain) 来 cordon 和 drain `zk-0` Pod 调度的节点。 ```shell kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data @@ -1102,7 +1102,7 @@ numChildren = 0 ``` -使用 [`kubectl uncordon`](/docs/user-guide/kubectl/{{< param "version" >}}/#uncordon) 来取消对第一个节点的隔离。 +使用 [`kubectl uncordon`](/zh/docs/user-guide/kubectl/{{< param "version" >}}/#uncordon) 来取消对第一个节点的隔离。 ```shell kubectl uncordon kubernetes-minion-group-pb41 diff --git a/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md index b22d2a365a529..655f673b069f5 100644 --- a/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -37,10 +37,10 @@ external IP address. instructions, see the documentation for your cloud provider. --> - * 安装 [kubectl](/docs/tasks/tools/install-kubectl/). + * 安装 [kubectl](/zh/docs/tasks/tools/install-kubectl/). * 使用 Google Kubernetes Engine 或 Amazon Web Services 等云供应商创建 Kubernetes 群集。 - 本教程创建了一个[外部负载均衡器](/docs/tasks/access-application-cluster/create-external-load-balancer/),需要云供应商。 + 本教程创建了一个[外部负载均衡器](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/),需要云供应商。 * 配置 `kubectl` 与 Kubernetes API 服务器通信。有关说明,请参阅云供应商文档。 @@ -86,9 +86,9 @@ external IP address. [Pods](/docs/concepts/workloads/pods/pod/), each of which runs the Hello World application. 
--> - 前面的命令创建一个 [Deployment](/docs/concepts/workloads/controllers/deployment/) - 对象和一个关联的 [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)对象。 - ReplicaSet 有五个 [Pod](/docs/concepts/workloads/pods/pod/),每个都运行 Hello World 应用程序。 + 前面的命令创建一个 [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) + 对象和一个关联的 [ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/)对象。 + ReplicaSet 有五个 [Pod](/zh/docs/concepts/workloads/pods/pod/),每个都运行 Hello World 应用程序。 成功请求的响应是一条问候消息: @@ -252,6 +252,6 @@ Learn more about [connecting applications with services](/docs/concepts/services-networking/connect-applications-service/). --> -了解更多关于[将应用程序与服务连接](/docs/concepts/services-networking/connect-applications-service/)。 +了解更多关于[将应用程序与服务连接](/zh/docs/concepts/services-networking/connect-applications-service/)。 {{% /capture %}} diff --git a/content/zh/docs/tutorials/stateless-application/guestbook.md b/content/zh/docs/tutorials/stateless-application/guestbook.md index 0dac40d6550af..8eea7b79225d2 100644 --- a/content/zh/docs/tutorials/stateless-application/guestbook.md +++ b/content/zh/docs/tutorials/stateless-application/guestbook.md @@ -25,8 +25,8 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica 一个简单的多层 web 应用程序。本例由以下组件组成: @@ -145,12 +145,12 @@ Replace POD-NAME with the name of your Pod. -留言板应用程序需要往 Redis 主节点中写数据。因此,需要创建 [Service](/docs/concepts/services-networking/service/) 来代理 Redis 主节点 Pod 的流量。Service 定义了访问 Pod 的策略。 +留言板应用程序需要往 Redis 主节点中写数据。因此,需要创建 [Service](/zh/docs/concepts/services-networking/service/) 来代理 Redis 主节点 Pod 的流量。Service 定义了访问 Pod 的策略。 {{< codenew file="application/guestbook/redis-master-service.yaml" >}} 1. 使用下面的 `redis-master-service.yaml` 文件创建 Redis 主节点的服务: @@ -181,7 +181,7 @@ The guestbook applications needs to communicate to the Redis master to write its {{< note >}} 这个清单文件创建了一个名为 `Redis-master` 的 Service,其中包含一组与前面定义的标签匹配的标签,因此服务将网络流量路由到 Redis 主节点 Pod 上。 @@ -205,12 +205,12 @@ Although the Redis master is a single pod, you can make it highly available to m ### 创建 Redis 从节点 Deployment Deployments 根据清单文件中设置的配置进行伸缩。在这种情况下,Deployment 对象指定两个副本。 如果没有任何副本正在运行,则此 Deployment 将启动容器集群上的两个副本。相反, 如果有两个以上的副本在运行,那么它的规模就会缩小,直到运行两个副本为止。 @@ -352,17 +352,17 @@ The guestbook application has a web frontend serving the HTTP requests written i The `redis-slave` and `redis-master` Services you applied are only accessible within the container cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster. --> 应用的 `redis-slave` 和 `redis-master` 服务只能在容器集群中访问,因为服务的默认类型是 -[ClusterIP](/docs/concepts/Services-networking/Service/#publishingservices-Service-types)。`ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 +[ClusterIP](/zh/docs/concepts/Services-networking/Service/#publishingservices-Service-types)。`ClusterIP` 为服务指向的 Pod 集提供一个 IP 地址。这个 IP 地址只能在集群中访问。 如果您希望客人能够访问您的留言板,您必须将前端服务配置为外部可见的,以便客户机可以从容器集群之外请求服务。Minikube 只能通过 `NodePort` 公开服务。 {{< note >}} 一些云提供商,如 Google Compute Engine 或 Google Kubernetes Engine,支持外部负载均衡器。如果您的云提供商支持负载均衡器,并且您希望使用它, 只需删除或注释掉 `type: NodePort`,并取消注释 `type: LoadBalancer` 即可。 @@ -386,14 +386,14 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su 2. 
查询服务列表以验证前端服务正在运行: ```shell - kubectl get services + kubectl get services ``` 响应应该与此类似: - + ``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend ClusterIP 10.0.0.112 80:31323/TCP 6s @@ -472,7 +472,7 @@ If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` y 2. 复制外部 IP 地址,然后在浏览器中加载页面以查看留言板。 ## 扩展 Web 前端 @@ -501,7 +501,7 @@ Scaling up or down is easy because your servers are defined as a Service that us ``` 响应应该类似于这样: @@ -548,7 +548,7 @@ Scaling up or down is easy because your servers are defined as a Service that us redis-slave-2005841000-fpvqc 1/1 Running 0 1h redis-slave-2005841000-phfv9 1/1 Running 0 1h ``` - + {{% /capture %}} {{% capture cleanup %}} @@ -580,7 +580,7 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels deployment.apps "redis-slave" deleted service "redis-master" deleted service "redis-slave" deleted - deployment.apps "frontend" deleted + deployment.apps "frontend" deleted service "frontend" deleted ``` @@ -594,9 +594,9 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels ``` - 响应应该是: + 响应应该是: ``` No resources found. @@ -608,15 +608,15 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels -* 完成 [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) 交互式教程 -* 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) -* 阅读更多关于[连接应用程序](/docs/concepts/services-networking/connect-applications-service/) -* 阅读更多关于[管理资源](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) +* 完成 [Kubernetes Basics](/zh/docs/tutorials/kubernetes-basics/) 交互式教程 +* 使用 Kubernetes 创建一个博客,使用 [MySQL 和 Wordpress 的持久卷](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog) +* 阅读更多关于[连接应用程序](/zh/docs/concepts/services-networking/connect-applications-service/) +* 阅读更多关于[管理资源](/zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) {{% /capture %}} diff --git a/content/zh/examples/application/hpa/php-apache.yaml b/content/zh/examples/application/hpa/php-apache.yaml index c73ae7d631b58..f3f1ef5d4f912 100644 --- a/content/zh/examples/application/hpa/php-apache.yaml +++ b/content/zh/examples/application/hpa/php-apache.yaml @@ -2,7 +2,6 @@ apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: php-apache - namespace: default spec: scaleTargetRef: apiVersion: apps/v1 diff --git a/content/zh/includes/user-guide-migration-notice.md b/content/zh/includes/user-guide-migration-notice.md deleted file mode 100644 index 235b63f00d334..0000000000000 --- a/content/zh/includes/user-guide-migration-notice.md +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - -
-请注意
-截至2017年3月14日, Kubernetes SIG-Docs-Maintainers 小组已经开始通过 kubernetes-sig-docs 小组和 kubernetes.slack.com#sig-docs频道向 SIG Docs 社区迁移用户指南内容。
-本节中的用户指南将被重构为教程、任务和概念中的主题。所有被移动的内容都会在它之前的位置上放置一个通知,以及指向其新位置的链接。重组实现了一个新的目录,并且应该更为广泛的受众群体提高文档的可查找性和可读性。
-如有任何疑问,请联系 kubernetes-sig-docs@googlegroups.com
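For context on the `content/zh/examples/application/hpa/php-apache.yaml` hunk earlier in this patch, which drops the hard-coded `namespace: default` from the HorizontalPodAutoscaler example: when `metadata.namespace` is omitted, `kubectl apply` creates the object in the namespace of the current context (or of an explicit `-n` flag) rather than always in `default`. A minimal sketch of such a manifest is shown below; the `scaleTargetRef` kind, the replica bounds, and the CPU target are illustrative assumptions, not a reproduction of the file.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  # no namespace field: the object is created in the namespace of the current context
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment                     # assumed target workload kind
    name: php-apache
  minReplicas: 1                         # illustrative scaling bounds
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50     # illustrative CPU target
```

With the namespace left out, `kubectl apply -n some-namespace -f php-apache.yaml` (where `some-namespace` is a placeholder) targets whichever namespace the reader chooses.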
          diff --git a/content/zh/partners/_index.html b/content/zh/partners/_index.html index 8c418dd4eaa2c..2dc39fee0e05b 100644 --- a/content/zh/partners/_index.html +++ b/content/zh/partners/_index.html @@ -17,14 +17,16 @@ -->
          -
          +
          Kubernetes 与合作伙伴携手打造一个强大、活跃的代码库,支持一系列互补平台。
          - -
          Kubernetes 认证服务提供商
          +
          + + Kubernetes 认证服务提供商 +

          经过审核的服务提供商在帮助企业成功采用 Kubernetes 方面有深厚的经验。


          @@ -34,38 +36,40 @@
          Kubernetes 认证服务提供商


          想要成为KCSP吗?
          -
          -
          - - Kubernetes 认证的发行版本、托管平台以及安装工具 - -
          软件合规性确保各厂商的 Kubernetes 版本都支持必需的 API。 -


          - - - +
          +
          +
          + + Kubernetes 认证的发行版本、托管平台以及安装工具 + +
          软件合规性确保各厂商的 Kubernetes 版本都支持必需的 API。 +


          + + +

          想要成为Kubernetes 认证的厂商吗? -
          -
          -
          - - Kubernetes 培训合作伙伴 -
          - -
          经过审核的培训机构在云原生技术培训方面有深厚的经验。 -



          - - - -

          想要成为KTP吗? -
          +
          +
          +
          +
          + +
          Kubernetes 培训合作伙伴
          + +
          经过审核的培训机构在云原生技术培训方面有深厚的经验。 +



          + + + +

          想要成为KTP吗? +
          +
          - - - + +
          diff --git a/content/zh/training/_index.html b/content/zh/training/_index.html new file mode 100644 index 0000000000000..69a9bac043377 --- /dev/null +++ b/content/zh/training/_index.html @@ -0,0 +1,153 @@ +--- +title: 培训 +bigheader: Kubernetes 培训与认证 +abstract: 培训计划、认证和合作伙伴 +layout: basic +cid: training +class: training +--- + + + +
          +
          +
          +
          + +
          +
          + +
          +
          + +

          打造你的云原生职业

          + +

          Kubernetes 是云原生运动的核心。来自 Linux 基金会和我们培训合作伙伴的培训、认证让您可以投资您的职业生涯,学习 Kubernetes,并使您的云原生项目获得成功。

          +
          +
          +
          +
          + +
          +
          +
          + +

          在 edX 上参加免费的课程

          +
          +
          +
          +
          +
          + + Kubernetes 简介
           
          +
          + +

          想学习 Kubernetes 吗?深入了解这个用于管理容器化应用程序的强大系统。

          +
          + + 前往课程 +
          +
          +
          +
          +
          + + 云基础设施技术简介 +
          + +

          直接从开源领域的领导者 Linux 基金会学习构建和管理云技术的基本原理。

          +
          + + 前往课程 +
          +
          +
          +
          +
          + + Linux 简介 +
          + +

          从没学过 Linux?想要充充电吗?通过图形界面和命令行在 Linux 发行版下提升学习 Linux 的工作知识。

          +
          + + 前往课程 +
          +
          +
          +
          + +
          +
          +
          + +

          通过 Linux 基金会学习

          + +

          Linux 基金会为 Kubernetes 应用程序开发和操作生命周期的所有方面提供了讲师指导的、自定进度的课程。

          +

          + + 查看课程 +
          +
          +
          + +
          +
          +
          + +

          获取 Kubernetes 认证

          +
          +
          +
          +
          +
          + + Certified Kubernetes Application Developer (CKAD) +
          + +

          CKAD 考试证明用户可以为 Kubernetes 设计、构建、配置和发布云原生应用。

          +
          + + 前往认证 +
          +
          +
          +
          +
          + + Certified Kubernetes Administrator (CKA) +
          + +

          CKA 认证确保 CKA 认证人员具有履行 Kubernetes 管理员职责的技能、知识和能力。

          +
          + + 前往认证 +
          +
          +
          +
          +
          + +
          +
          +
          + +

          Kubernetes 培训合作伙伴

          + +

          我们的 Kubernetes 培训合作伙伴为 Kubernetes 和云原生项目提供培训服务。

          +
          +
          +
          + + +
          +
          diff --git a/data/concepts.yml b/data/concepts.yml index 51974d25c14f4..be47572d3a18b 100644 --- a/data/concepts.yml +++ b/data/concepts.yml @@ -118,7 +118,6 @@ toc: - docs/concepts/cluster-administration/logging.md - docs/concepts/cluster-administration/monitoring.md - docs/concepts/cluster-administration/kubelet-garbage-collection.md - - docs/concepts/cluster-administration/federation.md - docs/concepts/cluster-administration/sysctl-cluster.md - docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig.md - docs/concepts/cluster-administration/master-node-communication.md diff --git a/data/docs-home.yml b/data/docs-home.yml index a0b66d11e29d4..42d3cad567f69 100644 --- a/data/docs-home.yml +++ b/data/docs-home.yml @@ -26,4 +26,3 @@ toc: - docs/home/contribute/generated-reference/kubernetes-components.md - docs/home/contribute/generated-reference/kubectl.md - docs/home/contribute/generated-reference/kubernetes-api.md - - docs/home/contribute/generated-reference/federation-api.md diff --git a/data/overrides.yml b/data/overrides.yml index 00eb2db179e6a..a8b503e14657c 100644 --- a/data/overrides.yml +++ b/data/overrides.yml @@ -1,10 +1,7 @@ overrides: - path: docs/admin/cloud-controller-manager.md -- path: docs/admin/federation-apiserver.md -- path: docs/admin/federation-controller-manager.md - path: docs/admin/kube-apiserver.md - path: docs/admin/kube-controller-manager.md - path: docs/admin/kube-proxy.md - path: docs/admin/kube-scheduler.md - path: docs/admin/kubelet.md -- copypath: k8s/federation/docs/api-reference/ docs/federation/ diff --git a/data/reference.yml b/data/reference.yml index 81895399965b1..26eee2d91fde4 100644 --- a/data/reference.yml +++ b/data/reference.yml @@ -45,14 +45,6 @@ toc: - title: Swagger Spec path: https://git.k8s.io/kubernetes/api/swagger-spec/ -- title: Federation API - landing_page: /docs/reference/federation/v1/operations/ - section: - - docs/reference/generated/federation/v1/operations.html - - docs/reference/generated/federation/v1/definitions.html - - docs/reference/generated/federation/extensions/v1beta1/operations.html - - docs/reference/generated/federation/extensions/v1beta1/definitions.html - - title: kubectl CLI landing_page: /docs/user-guide/kubectl-overview/ section: @@ -80,14 +72,6 @@ toc: - docs/reference/setup-tools/kubeadm/kubeadm-version.md - docs/reference/setup-tools/kubeadm/kubeadm-alpha.md - docs/reference/setup-tools/kubeadm/implementation-details.md - - title: Kubefed - section: - - docs/reference/generated/kubefed.md - - docs/reference/generated/kubefed_options.md - - docs/reference/generated/kubefed_init.md - - docs/reference/generated/kubefed_join.md - - docs/reference/generated/kubefed_unjoin.md - - docs/reference/generated/kubefed_version.md - title: Command-line Tools Reference landing_page: /docs/admin/kubelet/ @@ -101,8 +85,6 @@ toc: - docs/reference/generated/kube-scheduler.md - docs/admin/kubelet-tls-bootstrapping.md - docs/reference/generated/cloud-controller-manager.md - - docs/reference/generated/federation-apiserver.md - - docs/reference/generated/federation-controller-manager.md - title: Kubernetes Issues and Security landing_page: https://github.com/kubernetes/kubernetes/issues/ diff --git a/i18n/es.toml b/i18n/es.toml index c70d1a18d4725..5e355c5efcf9b 100644 --- a/i18n/es.toml +++ b/i18n/es.toml @@ -189,6 +189,8 @@ other = "Stack Overflow" other = "Foro" [community_events_calendar] other = "Calendario de eventos" +[community_youtube_name] +other = "YouTube" # UI elements 
[ui_search_placeholder] diff --git a/i18n/pl.toml b/i18n/pl.toml index ede94acd3af2e..22303323823b7 100644 --- a/i18n/pl.toml +++ b/i18n/pl.toml @@ -58,6 +58,9 @@ other = "Czy ta strona była przydatna?" [feedback_yes] other = "Tak" +[input_placeholder_email_address] +other = "adres e-mail" + [latest_version] other = "to najnowsza wersja." diff --git a/i18n/pt.toml b/i18n/pt.toml index 3c740a8774a89..833e4ba627b9e 100644 --- a/i18n/pt.toml +++ b/i18n/pt.toml @@ -27,6 +27,9 @@ other = "Esta página foi útil?" [feedback_yes] other = "Sim" +[input_placeholder_email_address] +other = "endereço de e-mail" + [feedback_no] other = "Não" diff --git a/i18n/uk.toml b/i18n/uk.toml new file mode 100644 index 0000000000000..ef59e46cfa39c --- /dev/null +++ b/i18n/uk.toml @@ -0,0 +1,251 @@ +# i18n strings for the Ukrainian (main) site. + +[caution] +# other = "Caution:" +other = "Увага:" + +[cleanup_heading] +# other = "Cleaning up" +other = "Очистка" + +[community_events_calendar] +# other = "Events Calendar" +other = "Календар подій" + +[community_forum_name] +# other = "Forum" +other = "Форум" + +[community_github_name] +other = "GitHub" + +[community_slack_name] +other = "Slack" + +[community_stack_overflow_name] +other = "Stack Overflow" + +[community_twitter_name] +other = "Twitter" + +[community_youtube_name] +other = "YouTube" + +[deprecation_warning] +# other = " documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the " +other = " документація більше не підтримується. Версія, яку ви зараз переглядаєте, є статичною. Для перегляду актуальної документації дивіться " + +[deprecation_file_warning] +# other = "Deprecated" +other = "Застаріла версія" + +[docs_label_browse] +# other = "Browse Docs" +other = "Переглянути документацію" + +[docs_label_contributors] +# other = "Contributors" +other = "Контриб'ютори" + +[docs_label_i_am] +# other = "I AM..." +other = "Я..." + +[docs_label_users] +# other = "Users" +other = "Користувачі" + +[feedback_heading] +# other = "Feedback" +other = "Ваша думка" + +[feedback_no] +# other = "No" +other = "Ні" + +[feedback_question] +# other = "Was this page helpful?" +other = "Чи була ця сторінка корисною?" + +[feedback_yes] +# other = "Yes" +other = "Так" + +[input_placeholder_email_address] +# other = "email address" +other = "електронна адреса" + +[latest_version] +# other = "latest version." +other = "остання версія." + +[layouts_blog_pager_prev] +# other = "<< Prev" +other = "<< Назад" + +[layouts_blog_pager_next] +# other = "Next >>" +other = "Далі >>" + +[layouts_case_studies_list_tell] +# other = "Tell your story" +other = "Розкажіть свою історію" + +[layouts_docs_glossary_aka] +# other = "Also known as" +other = "Також відомий як" + +[layouts_docs_glossary_description] +# other = "This glossary is intended to be a comprehensive, standardized list of Kubernetes terminology. It includes technical terms that are specific to Kubernetes, as well as more general terms that provide useful context." +other = "Даний словник створений як повний стандартизований список термінології Kubernetes. Він включає в себе технічні терміни, специфічні для Kubernetes, а також більш загальні терміни, необхідні для кращого розуміння контексту." + +[layouts_docs_glossary_deselect_all] +# other = "Deselect all" +other = "Очистити вибір" + +[layouts_docs_glossary_click_details_after] +# other = "indicators below to get a longer explanation for any particular term." 
+other = "для отримання розширеного пояснення конкретного терміна." + +[layouts_docs_glossary_click_details_before] +# other = "Click on the" +other = "Натисність на" + +[layouts_docs_glossary_filter] +# other = "Filter terms according to their tags" +other = "Відфільтрувати терміни за тегами" + +[layouts_docs_glossary_select_all] +# other = "Select all" +other = "Вибрати все" + +[layouts_docs_partials_feedback_improvement] +# other = "suggest an improvement" +other = "запропонувати покращення" + +[layouts_docs_partials_feedback_issue] +# other = "Open an issue in the GitHub repo if you want to " +other = "Створіть issue в GitHub репозиторії, якщо ви хочете " + +[layouts_docs_partials_feedback_or] +# other = "or" +other = "або" + +[layouts_docs_partials_feedback_problem] +# other = "report a problem" +other = "повідомити про проблему" + +[layouts_docs_partials_feedback_thanks] +# other = "Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on" +other = "Дякуємо за ваш відгук. Якщо ви маєте конкретне запитання щодо використання Kubernetes, ви можете поставити його" + +[layouts_docs_search_fetching] +# other = "Fetching results..." +other = "Отримання результатів..." + +[main_by] +other = "by" + +[main_cncf_project] +# other = """We are a CNCF graduated project

          """ +other = """Ми є проектом CNCF

          """ + +[main_community_explore] +# other = "Explore the community" +other = "Познайомитись із спільнотою" + +[main_contribute] +# other = "Contribute" +other = "Допомогти проекту" + +[main_copyright_notice] +# other = """The Linux Foundation ®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page""" +other = """The Linux Foundation ®. Всі права застережено. The Linux Foundation є зареєстрованою торговою маркою. Перелік торгових марок The Linux Foundation ви знайдете на нашій сторінці Використання торгових марок""" + +[main_documentation_license] +# other = """The Kubernetes Authors | Documentation Distributed under CC BY 4.0""" +other = """Автори Kubernetes | Документація розповсюджується під ліцензією CC BY 4.0""" + +[main_edit_this_page] +# other = "Edit This Page" +other = "Редагувати цю сторінку" + +[main_github_create_an_issue] +# other = "Create an Issue" +other = "Створити issue" + +[main_github_invite] +# other = "Interested in hacking on the core Kubernetes code base?" +other = "Хочете зламати основну кодову базу Kubernetes?" + +[main_github_view_on] +# other = "View On GitHub" +other = "Переглянути у GitHub" + +[main_kubernetes_features] +# other = "Kubernetes Features" +other = "Функціональні можливості Kubernetes" + +[main_kubeweekly_baseline] +# other = "Interested in receiving the latest Kubernetes news? Sign up for KubeWeekly." +other = "Хочете отримувати останні новини Kubernetes? Підпишіться на KubeWeekly." + +[main_kubernetes_past_link] +# other = "View past newsletters" +other = "Переглянути попередні інформаційні розсилки" + +[main_kubeweekly_signup] +# other = "Subscribe" +other = "Підписатися" + +[main_page_history] +# other ="Page History" +other ="Історія сторінки" + +[main_page_last_modified_on] +# other = "Page last modified on" +other = "Сторінка востаннє редагувалася" + +[main_read_about] +# other = "Read about" +other = "Прочитати про" + +[main_read_more] +# other = "Read more" +other = "Прочитати більше" + +[note] +# other = "Note:" +other = "Примітка:" + +[objectives_heading] +# other = "Objectives" +other = "Цілі" + +[prerequisites_heading] +# other = "Before you begin" +other = "Перш ніж ви розпочнете" + +[ui_search_placeholder] +# other = "Search" +other = "Пошук" + +[version_check_mustbe] +# other = "Your Kubernetes server must be version " +other = "Версія вашого Kubernetes сервера має бути " + +[version_check_mustbeorlater] +# other = "Your Kubernetes server must be at or later than version " +other = "Версія вашого Kubernetes сервера має дорівнювати або бути молодшою ніж " + +[version_check_tocheck] +# other = "To check the version, enter " +other = "Для перевірки версії введіть " + +[warning] +# other = "Warning:" +other = "Попередження:" + +[whatsnext_heading] +# other = "What's next" +other = "Що далі" diff --git a/i18n/vi.toml b/i18n/vi.toml index d58929efb0bf5..e2e2c5ee04c38 100644 --- a/i18n/vi.toml +++ b/i18n/vi.toml @@ -24,6 +24,9 @@ other = "Stack Overflow" [community_twitter_name] other = "Twitter" +[community_youtube_name] +other = "YouTube" + [deprecation_warning] other = " tài liệu này không còn được duy trì. Phiên bản đang xem hiện tại là một snapshot tĩnh. Để rõ hơn về tài liệu bản cập nhật, xem " @@ -54,6 +57,9 @@ other = "Trang này có hữu ích?" [feedback_yes] other = "Có" +[input_placeholder_email_address] +other = "địa chỉ email" + [latest_version] other = "phiên bản mới nhất." 
diff --git a/i18n/zh.toml b/i18n/zh.toml index c42cb6c230af6..83e4399cde636 100644 --- a/i18n/zh.toml +++ b/i18n/zh.toml @@ -25,6 +25,9 @@ other = "Stack Overflow" [community_twitter_name] other = "Twitter" +[community_youtube_name] +other = "YouTube" + [deprecation_warning] other = " 版本的文档已不再维护。您现在看到的版本来自于一份静态的快照。如需查阅最新文档,请点击" @@ -188,4 +191,4 @@ other = "警告:" other = "接下来" [input_placeholder_email_address] -other = "电子邮件地址" \ No newline at end of file +other = "电子邮件地址" diff --git a/layouts/docs/glossary.html b/layouts/docs/glossary.html index 14124421528ab..f969c402fd2ef 100644 --- a/layouts/docs/glossary.html +++ b/layouts/docs/glossary.html @@ -43,7 +43,7 @@

          {{ .Title }}

          {{ .Title }}
          {{ with .Params.aka }} - {{ T "layouts_docs_glossary_aka" }}:{{ delimit . ", " }} + {{ T "layouts_docs_glossary_aka" }}: {{ delimit . ", " }}
          {{ end }} {{ .Summary }} [+] diff --git a/layouts/partials/head.html b/layouts/partials/head.html index e710c60af71ad..62696960ca71a 100644 --- a/layouts/partials/head.html +++ b/layouts/partials/head.html @@ -11,6 +11,14 @@ {{ if .Title }}{{ .Title }} - {{ end }}{{ site.Title }} + @@ -40,7 +48,14 @@ +{{ if eq .Params.mermaid true }} + + +{{ end }} + {{ with .Params.js }}{{ range (split . ",") }} {{ end }}{{ else }}{{ end }} + + diff --git a/layouts/partials/header.html b/layouts/partials/header.html index ad6376486c6e6..ac6c6f0d1f2d6 100644 --- a/layouts/partials/header.html +++ b/layouts/partials/header.html @@ -60,7 +60,7 @@
          {{ T "main_community_explore" }}