From abcee2dccd17c94376595b34cc05e56345a7f06c Mon Sep 17 00:00:00 2001
From: Zach Corleissen
Date: Fri, 12 Oct 2018 14:25:01 -0700
Subject: [PATCH] Update localization guidelines (#10485)

* Update localization guidelines for language labels

Continuing work
Continuing work
Continuing work
More work in progress
Add local OWNERS folders
Add an OWNERS file to Chinese
Remove shortcode for repos
Add Japanese
Alphabetize languages, change weights accordingly
More updates
Add Korean in Korean
Add English to languageName
Feedback from gochist
Move Chinese content from cn/ to zh/
Move OWNERS from cn/ to zh/
Resolve merge conflicts by updating from master
Add files back in to prep for resolution
After rebase on upstream/master, remove files
Review and update localization guidelines
Feedback from gochist, tnir, cstoku
Add a trailing newline to content/ja/OWNERS
Add a trailing newline to content/zh/OWNERS
Drop requirement for GH repo project
Clarify language about forks/branches
Edits and typos
Remove a shortcode specific to a multi-repo language setup
Update aliases and owners
Add explicit OWNERS for content/en
Migrate content from Chinese repo, update regex in config.toml
Remove untranslated strings
Add trailing newline to content/en/OWNERS
Add trailing newlines to OWNERS files

add Jaguar project description (#10433)

* add Jaguar project description

[Jaguar](https://gitlab.com/sdnlab/jaguar) is an open source solution for Kubernetes's network based on OpenDaylight. Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.

* Minor newline tweak

blog post for azure vmss (#10538)

Add microk8s to pick-right-solution.md (#10542)

* Add microk8s to pick-right-solution.md

Microk8s is a single-command installation of upstream Kubernetes on any Linux and should be included in the list of local-machine solutions.

* capitalized Istio

Add microk8s to foundational.md (#10543)

* Add microk8s to foundational.md

Adding microk8s as credible and stable alternative to get started with Kubernetes on a local machine. This is especially attractive for those not wanting to incur the overhead of running a VM for a local cluster.

* Update foundational.md

Thank you for your suggestions! LMK if this works now?

* Rewrote first paragraph

And included a bullet list of features of microk8s

* Copyedit

fix typo (#10545)

Fix the kubectl subcommands links. (#10550)

Signed-off-by: William Zhang

Fix command issue (#10515)

Signed-off-by: mooncake

remove imported community files per issue 10184 (#10501)

networking.md: Markdown fix (#10498)

Fix front matter, federation command-line tools (#10500)

Clean up glossary entry (#10399)

update slack link (#10536)

typo in StatefulSet docs (#10558)

fix discription about horizontal pod autoscale (#10557)

Remove redundant symbols (#10556)

Fix issue #10520 (#10554)

Signed-off-by: William Zhang

Update api-concepts.md (#10534)

Revert "Fix command issue (#10515)"

This reverts commit c02a7fb9f9d19872d9227814b3e9ffaaa28d85f0.

Update memory-constraint-namespace.md (#10530)

update memory request to 100MiB corresponding the yaml content

Blog: Introducing Volume Snapshot Alpha for Kubernetes (#10562)

* blog post for azure vmss

* snapshot blog post

Resolve merge conflicts in OWNERS*

Minor typo fix (#10567)

Not sure what's supposed to be here, proposing removing it.
* Feedback from gochist Tweaks to feedback * Feedback from ClaudiaJKang --- OWNERS_ALIASES | 30 + config.toml | 26 +- .../includes/default-storage-class-prereqs.md | 6 - .../federated-task-tutorial-prereqs.md | 8 - .../cn/includes/federation-content-moved.md | 2 - .../cn/includes/federation-current-state.md | 7 - content/cn/includes/index.md | 3 - content/cn/includes/task-tutorial-prereqs.md | 8 - .../cn/includes/user-guide-content-moved.md | 3 - .../includes/user-guide-migration-notice.md | 12 - content/en/OWNERS | 11 + content/en/docs/contribute/localization.md | 173 +++-- content/ja/OWNERS | 11 + content/ko/OWNERS | 11 + content/zh/OWNERS | 11 + content/{cn => zh}/_index.html | 0 ...16-04-00-Kubernetes-Network-Policy-APIs.md | 182 +++++ .../_posts/2016-04-00-Kubernetes-On-Aws_15.md | 129 ++++ ...3-00-Principles-Of-Container-App-Design.md | 81 +++ .../2018-06-28-Airflow-Kubernetes-Operator.md | 676 ++++++++++++++++++ ...18-07-09-IPVS-In-Cluster-Load-Balancing.md | 401 +++++++++++ content/{cn => zh}/docs/.gitkeep | 0 content/{cn => zh}/docs/_index.md | 0 .../docs/admin/accessing-the-api.md | 0 .../docs/admin/authorization/_index.md | 0 .../docs/admin/authorization/abac.md | 0 .../docs/admin/authorization/webhook.md | 0 .../{cn => zh}/docs/admin/bootstrap-tokens.md | 0 .../{cn => zh}/docs/admin/cluster-large.md | 0 content/{cn => zh}/docs/admin/daemon.yaml | 0 .../docs/admin/high-availability/_index.md | 0 .../{cn => zh}/docs/admin/kube-apiserver.md | 0 .../kubelet-authentication-authorization.md | 0 .../docs/admin/kubelet-tls-bootstrapping.md | 0 .../{cn => zh}/docs/admin/multiple-zones.md | 0 .../{cn => zh}/docs/admin/node-conformance.md | 0 .../{cn => zh}/docs/admin/ovs-networking.md | 0 .../docs/admin/service-accounts-admin.md | 0 .../concepts/architecture/cloud-controller.md | 0 .../architecture/master-node-communication.md | 0 .../docs/concepts/architecture/nodes.md | 0 .../concepts/cluster-administration/addons.md | 0 .../cluster-administration/certificates.md | 0 .../cluster-administration/cloud-providers.md | 0 .../cluster-administration-overview.md | 0 .../cluster-administration/device-plugins.md | 0 .../cluster-administration/federation.md | 0 .../cluster-administration/proxies.md | 0 .../cluster-administration/sysctl-cluster.md | 0 .../docs/concepts/configuration/commands.yaml | 0 .../manage-compute-resources-container.md | 0 .../configuration/pod-with-node-affinity.yaml | 0 .../configuration/pod-with-pod-affinity.yaml | 0 .../docs/concepts/configuration/pod.yaml | 0 .../docs/concepts/configuration/secret.md | 2 +- .../configuration/taint-and-toleration.md | 464 ++++++++++++ .../container-environment-variables.md | 0 .../docs/concepts/containers/images.md | 2 +- .../docs/concepts/example-concept-template.md | 0 .../docs/concepts/overview/components.md | 0 .../docs/concepts/overview/kubernetes-api.md | 0 .../concepts/overview/what-is-kubernetes.md | 0 .../kubernetes-objects.md | 0 .../nginx-deployment.yaml | 0 .../concepts/policy/pod-security-policy.md | 0 .../{cn => zh}/docs/concepts/policy/psp.yaml | 0 .../docs/concepts/policy/resource-quotas.md | 0 ...ries-to-pod-etc-hosts-with-host-aliases.md | 0 .../connect-applications-service.md | 0 .../concepts/services-networking/curlpod.yaml | 0 .../services-networking/dns-pod-service.md | 0 .../services-networking/hostaliases-pod.yaml | 0 .../concepts/services-networking/ingress.yaml | 0 .../services-networking/network-policies.md | 0 .../services-networking/nginx-secure-app.yaml | 0 .../services-networking/nginx-svc.yaml | 0 
.../services-networking/run-my-nginx.yaml | 0 .../concepts/services-networking/service.md | 0 .../workloads/controllers/cron-jobs.md | 0 .../workloads/controllers/daemonset.md | 0 .../workloads/controllers/daemonset.yaml | 0 .../workloads/controllers/deployment.md | 0 .../workloads/controllers/frontend.yaml | 0 .../controllers/garbage-collection.md | 0 .../workloads/controllers/hpa-rs.yaml | 0 .../concepts/workloads/controllers/job.yaml | 0 .../workloads/controllers/my-repset.yaml | 0 .../controllers/nginx-deployment.yaml | 0 .../workloads/controllers/replication.yaml | 0 .../workloads/pods/init-containers.md | 0 .../concepts/workloads/pods/pod-lifecycle.md | 0 .../docs/concepts/workloads/pods/podpreset.md | 0 .../getting-started-guides/ubuntu/security.md | 68 ++ .../admission-controllers.md | 563 +++++++++++++++ .../access-authn-authz/authorization.md | 304 ++++++++ .../kube-proxy.md | 491 +++++++++++++ .../kube-scheduler.md | 380 ++++++++++ .../command-line-tools-reference/kubelet.md | 112 +++ content/zh/docs/reference/kubectl/kubectl.md | 178 +++++ .../reference/labels-annotations-taints.md | 0 content/zh/docs/reference/tools.md | 112 +++ content/zh/docs/setup/salt.md | 244 +++++++ .../access-cluster.md | 573 +++++++++++++++ ...icate-containers-same-pod-shared-volume.md | 0 .../configure-access-multiple-clusters.md | 0 .../configure-cloud-provider-firewall.md | 0 .../configure-dns-cluster.md | 26 + .../connecting-frontend-backend.md | 2 +- .../create-external-load-balancer.md | 322 +++++++++ .../access-application-cluster/frontend.yaml | 0 .../frontend/frontend.conf | 0 .../hello-service.yaml | 0 .../access-application-cluster/hello.yaml | 0 .../list-all-running-container-images.md | 193 +++++ ...load-balance-access-application-cluster.md | 190 +++++ ...port-forward-access-application-cluster.md | 222 ++++++ .../redis-master.yaml | 0 .../service-access-application-cluster.md | 225 ++++++ .../two-container-pod.yaml | 0 .../setup-extension-api-server.md | 112 +++ .../access-cluster-services.md | 0 .../apply-resource-quota-limit.md | 0 .../calico-network-policy.md | 0 .../change-default-storage-class.md | 0 .../change-pv-reclaim-policy.md | 0 .../administer-cluster/cluster-management.md | 0 .../cpu-constraints-pod-2.yaml | 0 .../cpu-constraints-pod-3.yaml | 0 .../cpu-constraints-pod-4.yaml | 0 .../cpu-constraints-pod.yaml | 0 .../administer-cluster/cpu-constraints.yaml | 0 .../cpu-defaults-pod-2.yaml | 0 .../cpu-defaults-pod-3.yaml | 0 .../administer-cluster/cpu-defaults-pod.yaml | 0 .../administer-cluster/cpu-defaults.yaml | 0 .../cpu-management-policies.md | 0 .../administer-cluster/cpu-memory-limit.md | 2 +- .../declare-network-policy.md | 0 .../dns-custom-nameservers.md | 0 .../dns-horizontal-autoscaler.yaml | 0 .../tasks/administer-cluster/encrypt-data.md | 323 +++++++++ ...aranteed-scheduling-critical-addon-pods.md | 0 .../kubeadm/kubeadm-upgrade-1-9.md | 453 ++++++++++++ .../administer-cluster/kubelet-config-file.md | 0 .../memory-constraints-pod-2.yaml | 0 .../memory-constraints-pod-3.yaml | 0 .../memory-constraints-pod-4.yaml | 0 .../memory-constraints-pod.yaml | 0 .../memory-constraints.yaml | 0 .../memory-defaults-pod-2.yaml | 2 +- .../memory-defaults-pod-3.yaml | 0 .../memory-defaults-pod.yaml | 0 .../administer-cluster/memory-defaults.yaml | 0 .../administer-cluster/my-scheduler.yaml | 0 .../calico-network-policy.md | 86 +++ .../cilium-network-policy.md | 133 ++++ .../kube-router-network-policy.md | 31 + .../romana-network-policy.md | 57 ++ .../weave-network-policy.md | 78 ++ 
.../docs/tasks/administer-cluster/pod1.yaml | 0 .../docs/tasks/administer-cluster/pod2.yaml | 0 .../docs/tasks/administer-cluster/pod3.yaml | 0 .../quota-mem-cpu-pod-2.yaml | 0 .../administer-cluster/quota-mem-cpu-pod.yaml | 0 .../administer-cluster/quota-mem-cpu.yaml | 0 .../quota-objects-pvc-2.yaml | 0 .../administer-cluster/quota-objects-pvc.yaml | 0 .../administer-cluster/quota-objects.yaml | 0 .../quota-pod-deployment.yaml | 0 .../administer-cluster/quota-pod-namespace.md | 0 .../tasks/administer-cluster/quota-pod.yaml | 0 .../tasks/administer-cluster/quota-pvc-2.yaml | 0 .../romana-network-policy.md | 0 .../tasks/administer-cluster/static-pod.md | 0 .../administer-cluster/sysctl-cluster.md | 329 +++++++++ .../weave-network-policy.md | 0 .../assign-pods-nodes.md | 135 ++++ .../cpu-request-limit-2.yaml | 0 .../cpu-request-limit.yaml | 0 .../exec-liveness.yaml | 0 .../http-liveness.yaml | 0 .../init-containers.yaml | 0 .../lifecycle-events.yaml | 0 .../mem-limit-range.yaml | 0 .../memory-request-limit-2.yaml | 0 .../memory-request-limit-3.yaml | 0 .../memory-request-limit.yaml | 0 .../configure-pod-container/oir-pod-2.yaml | 0 .../configure-pod-container/oir-pod.yaml | 0 .../opaque-integer-resource.md | 0 .../configure-pod-container/pod-redis.yaml | 0 .../tasks/configure-pod-container/pod.yaml | 0 .../private-reg-pod.yaml | 0 .../projected-volume.yaml | 0 .../configure-pod-container/qos-pod-2.yaml | 0 .../configure-pod-container/qos-pod-3.yaml | 0 .../configure-pod-container/qos-pod-4.yaml | 0 .../configure-pod-container/qos-pod.yaml | 0 .../rq-compute-resources.yaml | 0 .../security-context-2.yaml | 0 .../security-context-3.yaml | 0 .../security-context-4.yaml | 0 .../security-context.yaml | 0 .../task-pv-claim.yaml | 0 .../configure-pod-container/task-pv-pod.yaml | 0 .../task-pv-volume.yaml | 0 .../tcp-liveness-readiness.yaml | 0 .../tasks/debug-application-cluster/audit.md | 612 ++++++++++++++++ .../debug-application.md | 0 .../debug-cluster.md | 0 .../debug-pod-replication-controller.md | 0 .../debug-stateful-set.md | 0 .../inject-data-application/commands.yaml | 0 .../dapi-envars-container.yaml | 0 .../dapi-envars-pod.yaml | 0 .../dapi-volume-resources.yaml | 0 .../inject-data-application/dapi-volume.yaml | 0 .../define-command-argument-container.md | 0 .../define-environment-variable-container.md | 2 +- .../distribute-credentials-secure.md | 0 ...nward-api-volume-expose-pod-information.md | 2 +- .../tasks/inject-data-application/envars.yaml | 0 ...ronment-variable-expose-pod-information.md | 0 .../podpreset-allow-db-merged.yaml | 0 .../podpreset-allow-db.yaml | 0 .../podpreset-configmap.yaml | 0 .../podpreset-conflict-pod.yaml | 0 .../podpreset-conflict-preset.yaml | 0 .../podpreset-merged.yaml | 0 .../podpreset-multi-merged.yaml | 0 .../podpreset-pod.yaml | 0 .../podpreset-preset.yaml | 0 .../podpreset-proxy.yaml | 0 .../podpreset-replicaset-merged.yaml | 0 .../podpreset-replicaset.yaml | 0 .../inject-data-application/podpreset.md | 0 .../secret-envars-pod.yaml | 0 .../inject-data-application/secret-pod.yaml | 0 .../tasks/inject-data-application/secret.yaml | 0 .../fine-parallel-processing-work-queue.md | 380 ++++++++++ .../manage-daemon/rollback-daemon-set.md | 0 .../docs/tasks/manage-gpus/scheduling-gpus.md | 0 .../manage-hugepages/scheduling-hugepages.md | 0 .../deployment-patch-demo.yaml | 0 .../run-application/deployment-scale.yaml | 0 .../run-application/deployment-update.yaml | 0 .../tasks/run-application/deployment.yaml | 0 .../tasks/run-application/gce-volume.yaml | 0 
.../run-application/mysql-configmap.yaml | 0 .../run-application/mysql-deployment.yaml | 0 .../tasks/run-application/mysql-services.yaml | 0 .../run-application/mysql-statefulset.yaml | 0 .../rolling-update-replication-controller.md | 0 ...un-single-instance-stateful-application.md | 0 .../run-stateless-application-deployment.md | 0 .../run-application/scale-stateful-set.md | 0 .../docs/tasks/tls/certificate-rotation.md | 0 content/{cn => zh}/docs/templates/index.md | 0 .../configure-redis-using-configmap.md | 0 .../tutorials/kubernetes-basics/_index.html | 0 .../cluster-interactive.html | 0 .../kubernetes-basics/cluster-intro.html | 0 .../kubernetes-basics/deploy-interactive.html | 0 .../kubernetes-basics/deploy-intro.html | 0 .../explore-interactive.html | 0 .../kubernetes-basics/explore-intro.html | 0 .../kubernetes-basics/expose-interactive.html | 0 .../kubernetes-basics/expose-intro.html | 0 .../kubernetes-basics/scale-interactive.html | 0 .../kubernetes-basics/scale-intro.html | 0 .../kubernetes-basics/scale/scale-intro.html | 136 ++++ .../kubernetes-basics/update-interactive.html | 0 .../kubernetes-basics/update-intro.html | 0 .../imperative-object-management-command.md | 0 .../object-management.md | 0 .../docs/tutorials/services/source-ip.md | 4 +- .../tutorials/stateful-application/Dockerfile | 0 .../tutorials/stateful-application/FETCH_HEAD | 0 .../basic-stateful-set.md | 3 +- .../cassandra-service.yaml | 0 .../cassandra-statefulset.yaml | 0 .../stateful-application/cassandra.md | 0 .../docs/tutorials/stateful-application/dev | 0 .../mysql-wordpress-persistent-volume.md | 4 +- .../local-volumes.yaml | 0 .../mysql-deployment.yaml | 0 .../wordpress-deployment.yaml | 0 .../tutorials/stateful-application/web.yaml | 0 .../tutorials/stateful-application/webp.yaml | 0 .../stateful-application/zookeeper.md | 0 .../stateful-application/zookeeper.yaml | 0 .../docs/user-guide/bad-nginx-deployment.yaml | 0 .../{cn => zh}/docs/user-guide/curlpod.yaml | 0 .../docs/user-guide/deployment.yaml | 0 .../docs/user-guide/docker-cli-to-kubectl.md | 0 .../{cn => zh}/docs/user-guide/ingress.yaml | 0 content/{cn => zh}/docs/user-guide/job.yaml | 0 .../{cn => zh}/docs/user-guide/jsonpath.md | 0 .../docs/user-guide/kubectl-overview.md | 0 .../{cn => zh}/docs/user-guide/multi-pod.yaml | 0 .../docs/user-guide/new-nginx-deployment.yaml | 0 .../{cn => zh}/docs/user-guide/nginx-app.yaml | 0 .../docs/user-guide/nginx-deployment.yaml | 0 .../user-guide/nginx-init-containers.yaml | 0 .../nginx-lifecycle-deployment.yaml | 0 .../user-guide/nginx-probe-deployment.yaml | 0 .../docs/user-guide/nginx-secure-app.yaml | 0 .../{cn => zh}/docs/user-guide/nginx-svc.yaml | 0 .../docs/user-guide/pod-w-message.yaml | 0 content/{cn => zh}/docs/user-guide/pod.yaml | 0 .../docs/user-guide/redis-deployment.yaml | 0 .../user-guide/redis-resource-deployment.yaml | 0 .../user-guide/redis-secret-deployment.yaml | 0 .../docs/user-guide/run-my-nginx.yaml | 0 content/{cn => zh}/docs/whatisk8s.md | 0 layouts/shortcodes/language-repos-list.html | 38 - 316 files changed, 9222 insertions(+), 164 deletions(-) delete mode 100644 content/cn/includes/default-storage-class-prereqs.md delete mode 100644 content/cn/includes/federated-task-tutorial-prereqs.md delete mode 100644 content/cn/includes/federation-content-moved.md delete mode 100644 content/cn/includes/federation-current-state.md delete mode 100644 content/cn/includes/index.md delete mode 100644 content/cn/includes/task-tutorial-prereqs.md delete mode 100644 
content/cn/includes/user-guide-content-moved.md delete mode 100644 content/cn/includes/user-guide-migration-notice.md create mode 100644 content/en/OWNERS create mode 100644 content/ja/OWNERS create mode 100644 content/ko/OWNERS create mode 100644 content/zh/OWNERS rename content/{cn => zh}/_index.html (100%) create mode 100644 content/zh/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md create mode 100644 content/zh/blog/_posts/2016-04-00-Kubernetes-On-Aws_15.md create mode 100644 content/zh/blog/_posts/2018-03-00-Principles-Of-Container-App-Design.md create mode 100644 content/zh/blog/_posts/2018-06-28-Airflow-Kubernetes-Operator.md create mode 100644 content/zh/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md rename content/{cn => zh}/docs/.gitkeep (100%) rename content/{cn => zh}/docs/_index.md (100%) rename content/{cn => zh}/docs/admin/accessing-the-api.md (100%) rename content/{cn => zh}/docs/admin/authorization/_index.md (100%) rename content/{cn => zh}/docs/admin/authorization/abac.md (100%) rename content/{cn => zh}/docs/admin/authorization/webhook.md (100%) rename content/{cn => zh}/docs/admin/bootstrap-tokens.md (100%) rename content/{cn => zh}/docs/admin/cluster-large.md (100%) rename content/{cn => zh}/docs/admin/daemon.yaml (100%) rename content/{cn => zh}/docs/admin/high-availability/_index.md (100%) rename content/{cn => zh}/docs/admin/kube-apiserver.md (100%) rename content/{cn => zh}/docs/admin/kubelet-authentication-authorization.md (100%) rename content/{cn => zh}/docs/admin/kubelet-tls-bootstrapping.md (100%) rename content/{cn => zh}/docs/admin/multiple-zones.md (100%) rename content/{cn => zh}/docs/admin/node-conformance.md (100%) rename content/{cn => zh}/docs/admin/ovs-networking.md (100%) rename content/{cn => zh}/docs/admin/service-accounts-admin.md (100%) rename content/{cn => zh}/docs/concepts/architecture/cloud-controller.md (100%) rename content/{cn => zh}/docs/concepts/architecture/master-node-communication.md (100%) rename content/{cn => zh}/docs/concepts/architecture/nodes.md (100%) rename content/{cn => zh}/docs/concepts/cluster-administration/addons.md (100%) rename content/{cn => zh}/docs/concepts/cluster-administration/certificates.md (100%) rename content/{cn => zh}/docs/concepts/cluster-administration/cloud-providers.md (100%) rename content/{cn => zh}/docs/concepts/cluster-administration/cluster-administration-overview.md (100%) rename content/{cn => zh}/docs/concepts/cluster-administration/device-plugins.md (100%) rename content/{cn => zh}/docs/concepts/cluster-administration/federation.md (100%) rename content/{cn => zh}/docs/concepts/cluster-administration/proxies.md (100%) rename content/{cn => zh}/docs/concepts/cluster-administration/sysctl-cluster.md (100%) rename content/{cn => zh}/docs/concepts/configuration/commands.yaml (100%) rename content/{cn => zh}/docs/concepts/configuration/manage-compute-resources-container.md (100%) rename content/{cn => zh}/docs/concepts/configuration/pod-with-node-affinity.yaml (100%) rename content/{cn => zh}/docs/concepts/configuration/pod-with-pod-affinity.yaml (100%) rename content/{cn => zh}/docs/concepts/configuration/pod.yaml (100%) rename content/{cn => zh}/docs/concepts/configuration/secret.md (98%) create mode 100755 content/zh/docs/concepts/configuration/taint-and-toleration.md rename content/{cn => zh}/docs/concepts/containers/container-environment-variables.md (100%) rename content/{cn => zh}/docs/concepts/containers/images.md (99%) rename content/{cn => 
zh}/docs/concepts/example-concept-template.md (100%) rename content/{cn => zh}/docs/concepts/overview/components.md (100%) rename content/{cn => zh}/docs/concepts/overview/kubernetes-api.md (100%) rename content/{cn => zh}/docs/concepts/overview/what-is-kubernetes.md (100%) rename content/{cn => zh}/docs/concepts/overview/working-with-objects/kubernetes-objects.md (100%) rename content/{cn => zh}/docs/concepts/overview/working-with-objects/nginx-deployment.yaml (100%) rename content/{cn => zh}/docs/concepts/policy/pod-security-policy.md (100%) rename content/{cn => zh}/docs/concepts/policy/psp.yaml (100%) rename content/{cn => zh}/docs/concepts/policy/resource-quotas.md (100%) rename content/{cn => zh}/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md (100%) rename content/{cn => zh}/docs/concepts/services-networking/connect-applications-service.md (100%) rename content/{cn => zh}/docs/concepts/services-networking/curlpod.yaml (100%) rename content/{cn => zh}/docs/concepts/services-networking/dns-pod-service.md (100%) rename content/{cn => zh}/docs/concepts/services-networking/hostaliases-pod.yaml (100%) rename content/{cn => zh}/docs/concepts/services-networking/ingress.yaml (100%) rename content/{cn => zh}/docs/concepts/services-networking/network-policies.md (100%) rename content/{cn => zh}/docs/concepts/services-networking/nginx-secure-app.yaml (100%) rename content/{cn => zh}/docs/concepts/services-networking/nginx-svc.yaml (100%) rename content/{cn => zh}/docs/concepts/services-networking/run-my-nginx.yaml (100%) rename content/{cn => zh}/docs/concepts/services-networking/service.md (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/cron-jobs.md (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/daemonset.md (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/daemonset.yaml (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/deployment.md (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/frontend.yaml (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/garbage-collection.md (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/hpa-rs.yaml (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/job.yaml (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/my-repset.yaml (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/nginx-deployment.yaml (100%) rename content/{cn => zh}/docs/concepts/workloads/controllers/replication.yaml (100%) rename content/{cn => zh}/docs/concepts/workloads/pods/init-containers.md (100%) rename content/{cn => zh}/docs/concepts/workloads/pods/pod-lifecycle.md (100%) rename content/{cn => zh}/docs/concepts/workloads/pods/podpreset.md (100%) create mode 100644 content/zh/docs/getting-started-guides/ubuntu/security.md create mode 100755 content/zh/docs/reference/access-authn-authz/admission-controllers.md create mode 100644 content/zh/docs/reference/access-authn-authz/authorization.md create mode 100644 content/zh/docs/reference/command-line-tools-reference/kube-proxy.md create mode 100644 content/zh/docs/reference/command-line-tools-reference/kube-scheduler.md create mode 100644 content/zh/docs/reference/command-line-tools-reference/kubelet.md create mode 100644 content/zh/docs/reference/kubectl/kubectl.md rename content/{cn => zh}/docs/reference/labels-annotations-taints.md (100%) create mode 100644 content/zh/docs/reference/tools.md create mode 100755 
content/zh/docs/setup/salt.md create mode 100644 content/zh/docs/tasks/access-application-cluster/access-cluster.md rename content/{cn => zh}/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md (100%) rename content/{cn => zh}/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md (100%) rename content/{cn => zh}/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md (100%) create mode 100644 content/zh/docs/tasks/access-application-cluster/configure-dns-cluster.md rename content/{cn => zh}/docs/tasks/access-application-cluster/connecting-frontend-backend.md (98%) create mode 100644 content/zh/docs/tasks/access-application-cluster/create-external-load-balancer.md rename content/{cn => zh}/docs/tasks/access-application-cluster/frontend.yaml (100%) rename content/{cn => zh}/docs/tasks/access-application-cluster/frontend/frontend.conf (100%) rename content/{cn => zh}/docs/tasks/access-application-cluster/hello-service.yaml (100%) rename content/{cn => zh}/docs/tasks/access-application-cluster/hello.yaml (100%) create mode 100644 content/zh/docs/tasks/access-application-cluster/list-all-running-container-images.md create mode 100644 content/zh/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md create mode 100644 content/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md rename content/{cn => zh}/docs/tasks/access-application-cluster/redis-master.yaml (100%) create mode 100644 content/zh/docs/tasks/access-application-cluster/service-access-application-cluster.md rename content/{cn => zh}/docs/tasks/access-application-cluster/two-container-pod.yaml (100%) create mode 100644 content/zh/docs/tasks/access-kubernetes-api/setup-extension-api-server.md rename content/{cn => zh}/docs/tasks/administer-cluster/access-cluster-services.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/apply-resource-quota-limit.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/calico-network-policy.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/change-default-storage-class.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/change-pv-reclaim-policy.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cluster-management.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-constraints-pod-2.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-constraints-pod-3.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-constraints-pod-4.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-constraints-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-constraints.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-defaults-pod-2.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-defaults-pod-3.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-defaults-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-defaults.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-management-policies.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/cpu-memory-limit.md (99%) rename content/{cn => zh}/docs/tasks/administer-cluster/declare-network-policy.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/dns-custom-nameservers.md (100%) rename content/{cn => 
zh}/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml (100%) create mode 100644 content/zh/docs/tasks/administer-cluster/encrypt-data.md rename content/{cn => zh}/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md (100%) create mode 100644 content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-9.md rename content/{cn => zh}/docs/tasks/administer-cluster/kubelet-config-file.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-constraints-pod-2.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-constraints-pod-3.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-constraints-pod-4.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-constraints-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-constraints.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml (81%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-defaults-pod-3.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-defaults-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/memory-defaults.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/my-scheduler.yaml (100%) create mode 100644 content/zh/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md create mode 100644 content/zh/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md create mode 100644 content/zh/docs/tasks/administer-cluster/network-policy-provider/kube-router-network-policy.md create mode 100644 content/zh/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy.md create mode 100644 content/zh/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md rename content/{cn => zh}/docs/tasks/administer-cluster/pod1.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/pod2.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/pod3.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-mem-cpu-pod-2.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-mem-cpu-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-mem-cpu.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-objects-pvc-2.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-objects-pvc.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-objects.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-pod-deployment.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-pod-namespace.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/quota-pvc-2.yaml (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/romana-network-policy.md (100%) rename content/{cn => zh}/docs/tasks/administer-cluster/static-pod.md (100%) create mode 100644 content/zh/docs/tasks/administer-cluster/sysctl-cluster.md rename content/{cn => zh}/docs/tasks/administer-cluster/weave-network-policy.md (100%) create mode 100644 content/zh/docs/tasks/configure-pod-container/assign-pods-nodes.md rename content/{cn => zh}/docs/tasks/configure-pod-container/cpu-request-limit-2.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/cpu-request-limit.yaml (100%) rename content/{cn 
=> zh}/docs/tasks/configure-pod-container/exec-liveness.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/http-liveness.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/init-containers.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/lifecycle-events.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/mem-limit-range.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/memory-request-limit-2.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/memory-request-limit-3.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/memory-request-limit.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/oir-pod-2.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/oir-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/opaque-integer-resource.md (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/pod-redis.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/pod.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/private-reg-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/projected-volume.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/qos-pod-2.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/qos-pod-3.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/qos-pod-4.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/qos-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/rq-compute-resources.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/security-context-2.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/security-context-3.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/security-context-4.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/security-context.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/task-pv-claim.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/task-pv-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/task-pv-volume.yaml (100%) rename content/{cn => zh}/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml (100%) create mode 100644 content/zh/docs/tasks/debug-application-cluster/audit.md rename content/{cn => zh}/docs/tasks/debug-application-cluster/debug-application.md (100%) rename content/{cn => zh}/docs/tasks/debug-application-cluster/debug-cluster.md (100%) rename content/{cn => zh}/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md (100%) rename content/{cn => zh}/docs/tasks/debug-application-cluster/debug-stateful-set.md (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/commands.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/dapi-envars-container.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/dapi-envars-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/dapi-volume-resources.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/dapi-volume.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/define-command-argument-container.md (100%) rename content/{cn => 
zh}/docs/tasks/inject-data-application/define-environment-variable-container.md (96%) rename content/{cn => zh}/docs/tasks/inject-data-application/distribute-credentials-secure.md (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md (98%) rename content/{cn => zh}/docs/tasks/inject-data-application/envars.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-allow-db.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-configmap.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-conflict-preset.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-merged.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-multi-merged.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-preset.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-proxy.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset-replicaset.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/podpreset.md (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/secret-envars-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/secret-pod.yaml (100%) rename content/{cn => zh}/docs/tasks/inject-data-application/secret.yaml (100%) create mode 100755 content/zh/docs/tasks/job/fine-parallel-processing-work-queue.md rename content/{cn => zh}/docs/tasks/manage-daemon/rollback-daemon-set.md (100%) rename content/{cn => zh}/docs/tasks/manage-gpus/scheduling-gpus.md (100%) rename content/{cn => zh}/docs/tasks/manage-hugepages/scheduling-hugepages.md (100%) rename content/{cn => zh}/docs/tasks/run-application/deployment-patch-demo.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/deployment-scale.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/deployment-update.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/deployment.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/gce-volume.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/mysql-configmap.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/mysql-deployment.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/mysql-services.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/mysql-statefulset.yaml (100%) rename content/{cn => zh}/docs/tasks/run-application/rolling-update-replication-controller.md (100%) rename content/{cn => zh}/docs/tasks/run-application/run-single-instance-stateful-application.md (100%) rename content/{cn => zh}/docs/tasks/run-application/run-stateless-application-deployment.md (100%) rename content/{cn => zh}/docs/tasks/run-application/scale-stateful-set.md (100%) rename content/{cn => zh}/docs/tasks/tls/certificate-rotation.md (100%) rename content/{cn => zh}/docs/templates/index.md (100%) rename content/{cn => 
zh}/docs/tutorials/configuration/configure-redis-using-configmap.md (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/_index.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/cluster-interactive.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/cluster-intro.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/deploy-interactive.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/deploy-intro.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/explore-interactive.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/explore-intro.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/expose-interactive.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/expose-intro.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/scale-interactive.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/scale-intro.html (100%) create mode 100644 content/zh/docs/tutorials/kubernetes-basics/scale/scale-intro.html rename content/{cn => zh}/docs/tutorials/kubernetes-basics/update-interactive.html (100%) rename content/{cn => zh}/docs/tutorials/kubernetes-basics/update-intro.html (100%) rename content/{cn => zh}/docs/tutorials/object-management-kubectl/imperative-object-management-command.md (100%) rename content/{cn => zh}/docs/tutorials/object-management-kubectl/object-management.md (100%) rename content/{cn => zh}/docs/tutorials/services/source-ip.md (95%) rename content/{cn => zh}/docs/tutorials/stateful-application/Dockerfile (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/FETCH_HEAD (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/basic-stateful-set.md (99%) rename content/{cn => zh}/docs/tutorials/stateful-application/cassandra-service.yaml (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/cassandra-statefulset.yaml (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/cassandra.md (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/dev (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md (98%) rename content/{cn => zh}/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/local-volumes.yaml (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/mysql-deployment.yaml (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/wordpress-deployment.yaml (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/web.yaml (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/webp.yaml (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/zookeeper.md (100%) rename content/{cn => zh}/docs/tutorials/stateful-application/zookeeper.yaml (100%) rename content/{cn => zh}/docs/user-guide/bad-nginx-deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/curlpod.yaml (100%) rename content/{cn => zh}/docs/user-guide/deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/docker-cli-to-kubectl.md (100%) rename content/{cn => zh}/docs/user-guide/ingress.yaml (100%) rename content/{cn => zh}/docs/user-guide/job.yaml (100%) rename content/{cn => zh}/docs/user-guide/jsonpath.md (100%) rename content/{cn => zh}/docs/user-guide/kubectl-overview.md (100%) rename content/{cn => zh}/docs/user-guide/multi-pod.yaml (100%) 
rename content/{cn => zh}/docs/user-guide/new-nginx-deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/nginx-app.yaml (100%) rename content/{cn => zh}/docs/user-guide/nginx-deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/nginx-init-containers.yaml (100%) rename content/{cn => zh}/docs/user-guide/nginx-lifecycle-deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/nginx-probe-deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/nginx-secure-app.yaml (100%) rename content/{cn => zh}/docs/user-guide/nginx-svc.yaml (100%) rename content/{cn => zh}/docs/user-guide/pod-w-message.yaml (100%) rename content/{cn => zh}/docs/user-guide/pod.yaml (100%) rename content/{cn => zh}/docs/user-guide/redis-deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/redis-resource-deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/redis-secret-deployment.yaml (100%) rename content/{cn => zh}/docs/user-guide/run-my-nginx.yaml (100%) rename content/{cn => zh}/docs/whatisk8s.md (100%) delete mode 100644 layouts/shortcodes/language-repos-list.html

diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index d2968dbe7c866..22150b331bf5e 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -130,6 +130,36 @@ aliases:
   - rajakavitha1
   - stewart-yu
   - xiangpengzhao
+  - zhangxiaoyu
+  sig-docs-ja-owners: #Team: Japanese docs localization; GH: sig-docs-ja-owners
+  - cstoku
+  - nasa9084
+  - tnir
+  sig-docs-ja-reviews: #Team: Japanese docs PR reviews; GH: sig-docs-ja-reviews
+  - cstoku
+  - makocchi-git
+  - MasayaAoyama
+  - nasa9084
+  - tnir
+  sig-docs-ko-owners: #Team: Korean docs localization; GH: sig-docs-ko-owners
+  - ClaudiaJKang
+  - gochist
+  sig-docs-ko-reviews: #Team: Korean docs reviews; GH: sig-docs-ko-reviews
+  - ClaudiaJKang
+  - gochist
+  - ianychoi
+  sig-docs-zh-owners: #Team: Chinese docs localization; GH: sig-docs-zh-owners
+  - dchen1107
+  - haibinxie
+  - hanjiayao
+  - lichuqiang
+  - tengqm
+  - xiangpengzhao
+  - zhangxiaoyu-zidif
+  sig-docs-zh-reviews: #Team: Chinese docs reviews; GH: sig-docs-zh-reviews
+  - tengqm
+  - xiangpengzhao
+  - zhangxiaoyu-zidif
   sig-federation: #Team: Federation; e.g. Federated Clusters
   - csbell
diff --git a/config.toml b/config.toml
index 00a08449102a6..6d780612c1f0e 100644
--- a/config.toml
+++ b/config.toml
@@ -7,7 +7,7 @@ enableRobotsTXT = true
 
 disableKinds = ["taxonomy", "taxonomyTerm"]
 
-ignoreFiles = [ "^OWNERS$", "README.md", "^node_modules$", "content/en/docs/doc-contributor-tools" ]
+ignoreFiles = [ "^OWNERS$", "README[-]+[a-z]*\.md", "^node_modules$", "content/en/docs/doc-contributor-tools" ]
 
 contentDir = "content/en"
@@ -131,25 +131,29 @@ description = "Production-Grade Container Orchestration"
 languageName ="English"
 # Weight used for sorting.
 weight = 1
-[languages.cn]
+
+[languages.zh]
 title = "Kubernetes"
 description = "Production-Grade Container Orchestration"
-languageName = "Chinese"
+languageName = "中文 Chinese"
 weight = 2
-contentDir = "content/cn"
+contentDir = "content/zh"
+
+[languages.ko]
+title = "Kubernetes"
+description = "Production-Grade Container Orchestration"
+languageName = "한국어 Korean"
+weight = 3
+contentDir = "content/ko"
 
 [languages.no]
 title = "Kubernetes"
 description = "Production-Grade Container Orchestration"
 languageName ="Norsk"
-weight = 3
+weight = 4
 contentDir = "content/no"
+
 [languages.no.params]
 time_format_blog = "02.01.2006"
 # A list of language codes to look for untranslated content, ordered from left to right.
 language_alternatives = ["en"]
-[languages.ko]
-title = "Kubernetes"
-description = "Production-Grade Container Orchestration"
-languageName = "Korean"
-weight = 4
-contentDir = "content/ko"
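For illustration, the updated `ignoreFiles` pattern above keeps the localized READMEs introduced later in this patch out of Hugo's build. A quick sanity-check sketch, using `grep -E` to approximate the regex:

```shell
# Sketch only: the new pattern matches localized READMEs such as
# README-de.md and README-ko.md; plain README.md does not match,
# since [-]+ requires at least one hyphen.
printf 'README.md\nREADME-de.md\nREADME-ko.md\n' | grep -E 'README[-]+[a-z]*\.md'
```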
diff --git a/content/cn/includes/default-storage-class-prereqs.md b/content/cn/includes/default-storage-class-prereqs.md
deleted file mode 100644
index ef4823318dcab..0000000000000
--- a/content/cn/includes/default-storage-class-prereqs.md
+++ /dev/null
@@ -1,6 +0,0 @@
-You need to either have a dynamic PersistentVolume provisioner with a default
-[StorageClass](/docs/concepts/storage/storage-classes/),
-or [statically provision PersistentVolumes](/docs/user-guide/persistent-volumes/#provisioning)
-yourself to satisfy the [PersistentVolumeClaims](/docs/user-guide/persistent-volumes/#persistentvolumeclaims)
-used here.
-
diff --git a/content/cn/includes/federated-task-tutorial-prereqs.md b/content/cn/includes/federated-task-tutorial-prereqs.md
deleted file mode 100644
index c5ec939c07894..0000000000000
--- a/content/cn/includes/federated-task-tutorial-prereqs.md
+++ /dev/null
@@ -1,8 +0,0 @@
-This guide assumes that you have a running Kubernetes Cluster
-Federation installation. If not, then head over to the
-[federation admin guide](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) to learn how to
-bring up a cluster federation (or have your cluster administrator do
-this for you).
-Other tutorials, such as Kelsey Hightower's
-[Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation),
-might also help you create a Federated Kubernetes cluster.
diff --git a/content/cn/includes/federation-content-moved.md b/content/cn/includes/federation-content-moved.md
deleted file mode 100644
index 87a10e7199193..0000000000000
--- a/content/cn/includes/federation-content-moved.md
+++ /dev/null
@@ -1,2 +0,0 @@
-The topics in the [Federation API](/docs/federation/api-reference/) section of the Kubernetes docs
-are being moved to the [Reference](/docs/reference/) section. The content in this topic has moved to:
diff --git a/content/cn/includes/federation-current-state.md b/content/cn/includes/federation-current-state.md
deleted file mode 100644
index 56e4decdf3c2e..0000000000000
--- a/content/cn/includes/federation-current-state.md
+++ /dev/null
@@ -1,7 +0,0 @@
-**Note:** `Federation V1`, the current Kubernetes federation API which reuses the Kubernetes API
-resources 'as is', is currently considered alpha for many of its features, and there is no clear
-path to evolve the API to GA. However, there is a `Federation V2` effort in progress to implement
-a dedicated federation API apart from the Kubernetes API. The details can be found at
-[sig-multicluster community page](https://github.com/kubernetes/community/tree/master/sig-multicluster).
-{: .note}
-
diff --git a/content/cn/includes/index.md b/content/cn/includes/index.md
deleted file mode 100644
index ca03031f1ee91..0000000000000
--- a/content/cn/includes/index.md
+++ /dev/null
@@ -1,3 +0,0 @@
----
-headless: true
----
diff --git a/content/cn/includes/task-tutorial-prereqs.md b/content/cn/includes/task-tutorial-prereqs.md
deleted file mode 100644
index 6f1407fe45913..0000000000000
--- a/content/cn/includes/task-tutorial-prereqs.md
+++ /dev/null
@@ -1,8 +0,0 @@
-You need to have a Kubernetes cluster, and the kubectl command-line tool must
-be configured to communicate with your cluster. If you do not already have a
-cluster, you can create one by using
-[Minikube](/docs/getting-started-guides/minikube),
-or you can use one of these Kubernetes playgrounds:
-
-* [Katacoda](https://www.katacoda.com/courses/kubernetes/playground)
-* [Play with Kubernetes](http://labs.play-with-k8s.com/)
diff --git a/content/cn/includes/user-guide-content-moved.md b/content/cn/includes/user-guide-content-moved.md
deleted file mode 100644
index 8b93e29f125f7..0000000000000
--- a/content/cn/includes/user-guide-content-moved.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The topics in the [User Guide](/docs/user-guide/) section of the Kubernetes docs
-are being moved to the [Tasks](/docs/tasks/), [Tutorials](/docs/tutorials/), and
-[Concepts](/docs/concepts) sections. The content in this topic has moved to:
diff --git a/content/cn/includes/user-guide-migration-notice.md b/content/cn/includes/user-guide-migration-notice.md
deleted file mode 100644
index 366a05907cda5..0000000000000
--- a/content/cn/includes/user-guide-migration-notice.md
+++ /dev/null
@@ -1,12 +0,0 @@
-NOTICE
-
-As of March 14, 2017, the Kubernetes SIG-Docs-Maintainers group have begun migration of the User Guide content as announced previously to the SIG Docs community through the kubernetes-sig-docs group and kubernetes.slack.com #sig-docs channel.
-
-The user guides within this section are being refactored into topics within Tutorials, Tasks, and Concepts. Anything that has been moved will have a notice placed in its previous location as well as a link to its new location. The reorganization implements a new table of contents and should improve the documentation's findability and readability for a wider range of audiences.
-
-For any questions, please contact: kubernetes-sig-docs@googlegroups.com
diff --git a/content/en/OWNERS b/content/en/OWNERS
new file mode 100644
index 0000000000000..52f02277d4437
--- /dev/null
+++ b/content/en/OWNERS
@@ -0,0 +1,11 @@
+# This is the directory for English source content.
+# Teams and members are visible at https://github.com/orgs/kubernetes/teams.
+
+reviewers:
+- sig-docs-en-reviews
+
+approvers:
+- sig-docs-en-owners
+
+labels:
+- language/en
diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md
index 2f8c870d0d7a5..90d3c51ccfe71
--- a/content/en/docs/contribute/localization.md
+++ b/content/en/docs/contribute/localization.md
@@ -4,17 +4,19 @@ content_template: templates/concept
 approvers:
 - chenopis
 - zacharysarah
+- zparnold
 ---
 
 {{% capture overview %}}
 
-The Kubernetes documentation is currently available in [multiple languages](#supported-languages) and we encourage you to add new localizations ([l10n](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/))!
+Documentation for Kubernetes is available in multiple languages:
 
-Currently available languages:
+- English
+- Chinese
+- Japanese
+- Korean
 
-{{< language-repos-list >}}
-
-In order for localizations to be accepted, however, they must fulfill some requirements related to workflow (*how* to localize) and output (*what* to localize).
+We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)!
 
 {{% /capture %}}
 
@@ -22,42 +24,51 @@ In order for localizations to be accepti
 
 {{% capture body %}}
 
-## Workflow
+## Getting started
 
-The Kubernetes documentation for all languages is built from the [kubernetes/website](https://github.com/kubernetes/website) repository on GitHub. Most day-to-work work on translations, however, happens in separate translation repositories. Changes to those repositories are then [periodically](#upstream-contributions) synced to the main kubernetes/website repository via [pull request](../create-pull-request).
+Localizations must meet some requirements for workflow (*how* to localize) and output (*what* to localize).
 
-Work on the Chinese translation, for example, happens in the [kubernetes/kubernetes-docs-zh](https://github.com/kubernetes/kubernetes-docs-zh) repository.
+To add a new localization of the Kubernetes documentation, you'll need to update the website by modifying the [site configuration](#modify-the-site-configuration) and [directory structure](#add-a-new-localization-directory). Then you can start [translating documents](#translating-documents)!
 
-{{< note >}}
-**Note**: For an example localization-related [pull request](../create-pull-request), see [this pull request](https://github.com/kubernetes/website/pull/8636) to the [Kubernetes website repo](https://github.com/kubernetes/website) adding Korean localization to the Kubernetes docs.
-{{< /note >}}
+Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/). We're happy to help you get started and answer any questions you have.
 
-## Source Files
-
-Localizations must use English files from the most recent major release as sources. To find the most recent release's documentation source files:
-
-1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website.
-2. Select the `release-1.X` branch for the most recent version, which is currently **{{< latest-version >}}**, making the most recent release branch [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}).
+All localization teams must be self-sustaining with their own resources. We're happy to host your work, but we can't translate it for you.
 
-## Getting started
+### Fork and clone the repo
 
-In order to add a new localization of the Kubernetes documentation, you'll need to make a few modifications to the site's [configuration](#configuration) and [directory structure](#new-directory), and then you can get to work [translating documents](#translating-documents)!
+First, [create your own fork](https://help.github.com/articles/fork-a-repo/) of the [kubernetes/website](https://github.com/kubernetes/website) repository.
 
-To get started, clone the website repo and `cd` into it:
+Then, clone the website repo and `cd` into it:
 
 ```shell
 git clone https://github.com/kubernetes/website
 cd website
-git checkout {{< release-branch >}}
 ```
 
-## Configuration
+{{< note >}}
+Contributors to `k/website` must [create a fork](https://kubernetes.io/docs/contribute/start/#improve-existing-content) from which to open pull requests. For localizations, we ask additionally that:
+
+1. Team approvers open development branches directly from https://github.com/kubernetes/website.
+2. Localization contributors work from forks, with branches based on the current development branch.
 
-We'll walk you through the configuration process using the German language (language code `de`) as an example.
+This is because localization projects are collaborative efforts on long-running branches, similar to the development branches for the Kubernetes release cycle. For information about localization pull requests, see ["branching strategy"](#branching-strategy).
+{{< /note >}}
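For illustration, the branching model described in the note above might look like this from a contributor's fork. The remote and development branch names here are hypothetical; use whatever branch your team's approvers actually opened:

```shell
# Sketch of a localization contributor's workflow (names illustrative).
# Add the shared repo as a remote, then branch off the team's
# long-running development branch rather than master.
git remote add upstream https://github.com/kubernetes/website.git
git fetch upstream
git checkout -b fix-de-docs-typos upstream/dev-1.12-de.1
```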

-There's currently no translation for German, but you're welcome to create one using the instructions here.
+### Find your two-letter language code
 
-The Kubernetes website's configuration is in the [`config.toml`](https://github.com/kubernetes/website/tree/master/config.toml) file. You need to add a configuration block for the new language to that file, under the existing `[languages]` block. The German block, for example, looks like this:
+Consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) for your localization's two-letter language code. For example, the two-letter code for German is `de`.
+
+{{< note >}}
+These instructions use the [ISO 639-1](https://www.loc.gov/standards/iso639-2/php/code_list.php) language code for German (`de`) as an example.
+
+There's currently no Kubernetes localization for German, but you're welcome to create one!
+{{< /note >}}
+
+### Modify the site configuration
+
+The Kubernetes website uses Hugo as its web framework. The website's Hugo configuration resides in the [`config.toml`](https://github.com/kubernetes/website/tree/master/config.toml) file. To support a new localization, you'll need to modify `config.toml`.
+
+Add a configuration block for the new language to `config.toml`, under the existing `[languages]` block. The German block, for example, looks like:
 
 ```toml
 [languages.de]
@@ -68,74 +79,128 @@ contentDir = "content/de"
 weight = 3
 ```
 
-When assigning a `weight` parameter, see which of the current languages has the highest weight and add 1 to that value.
+When assigning a `weight` parameter for your block, find the language block with the highest weight and add 1 to that value.
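To illustrate the weight rule with the language blocks this patch configures (`en` = 1, `zh` = 2, `ko` = 3, `no` = 4), a new German block would take the next value. A minimal sketch; the `languageName` value is an assumption:

```toml
# Sketch: the highest existing weight above is 4 ([languages.no]),
# so a new localization takes weight 5. languageName is illustrative.
[languages.de]
title = "Kubernetes"
description = "Production-Grade Container Orchestration"
languageName = "Deutsch"
contentDir = "content/de"
weight = 5
```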
Select the `release-1.X` branch for the most recent version, which is currently **{{< latest-version >}}**, making the most recent release branch [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}). +All localization teams must be self-sustaining with their own resources. We're happy to host your work, but we can't translate it for you. -## Getting started +### Fork and clone the repo -In order to add a new localization of the Kubernetes documentation, you'll need to make a few modifications to the site's [configuration](#configuration) and [directory structure](#new-directory), and then you can get to work [translating documents](#translating-documents)! +First, [create your own fork](https://help.github.com/articles/fork-a-repo/) of the [kubernetes/website](https://github.com/kubernetes/website). -To get started, clone the website repo and `cd` into it: +Then, clone the website repo and `cd` into it: ```shell git clone https://github.com/kubernetes/website cd website -git checkout {{< release-branch >}} ``` -## Configuration +{{< note >}} +Contributors to `k/website` must [create a fork](https://kubernetes.io/docs/contribute/start/#improve-existing-content) from which to open pull requests. For localizations, we ask additionally that: + +1. Team approvers open development branches directly from https://github.com/kubernetes/website. +2. Localization contributors work from forks, with branches based on the current development branch. -We'll walk you through the configuration process using the German language (language code `de`) as an example. +This is because localization projects are collaborative efforts on long-running branches, similar to the development branches for the Kubernetes release cycle. For information about localization pull requests, see ["branching strategy"](#branching-strategy). +{{< /note >}} -There's currently no translation for German, but you're welcome to create one using the instructions here. +### Find your two-letter language code -The Kubernetes website's configuration is in the [`config.toml`](https://github.com/kubernetes/website/tree/master/config.toml) file. You need to add a configuration block for the new language to that file, under the existing `[languages]` block. The German block, for example, looks like this: +Consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) for your localization's two-letter country code. For example, the two-letter code for German is `de`. + +{{< note >}} +These instructions use the [ISO 639-1](https://www.loc.gov/standards/iso639-2/php/code_list.php) language code for German (`de`) as an example. + +There's currently no Kubernetes localization for German, but you're welcome to create one! +{{< /note >}} + +### Modify the site configuration + +The Kubernetes website uses Hugo as its web framework. The website's Hugo configuration resides in the [`config.toml`](https://github.com/kubernetes/website/tree/master/config.toml) file. To support a new localization, you'll need to modify `config.toml`. + +Add a configuration block for the new language to `config.toml`, under the existing `[languages]` block. The German block, for example, looks like: ```toml [languages.de] @@ -68,74 +79,128 @@ contentDir = "content/de" weight = 3 ``` -When assigning a `weight` parameter, see which of the current languages has the highest weight and add 1 to that value. 
-Now add a language-specific subdirectory to the [`content`](https://github.com/kubernetes/website/tree/master/content) folder. The two-letter code for German is `de`, so add a `content/de` directory:
+For more information about Hugo's multilingual support, see "[Multilingual Mode](https://gohugo.io/content-management/multilingual/)".
+
+### Add a new localization directory
+
+Add a language-specific subdirectory to the [`content`](https://github.com/kubernetes/website/tree/master/content) folder in the repository. For example, the two-letter code for German is `de`:
```shell
mkdir content/de
```
+### Add a localized README
+
+To guide other localization contributors, add a new [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of k/website, where `**` is the two-letter language code. For example, a German README file would be `README-de.md`.
+
+Provide guidance to localization contributors in the localized `README-**.md` file. Include the same information contained in `README.md` as well as:
+
+- A point of contact for the localization project
+- Any information specific to the localization
+
+After you create the localized README, add a link to the file from the main English file, `README.md`, and include contact information in English. You can provide a GitHub ID, email address, [Slack channel](https://slack.com/), or other method of contact.
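+For example, the link from the English `README.md` might look like the following sketch (the GitHub ID and Slack channel shown here are placeholders, not real contacts):
+
+```markdown
+For instructions on contributing to the German localization, see [README-de.md](README-de.md).
+Point of contact: [@de-team-lead](https://github.com/de-team-lead) (hypothetical GitHub ID),
+or the hypothetical `#kubernetes-docs-de` Slack channel.
+```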
+
+## Translating documents
-We understand that localizing *all* of the Kubernetes documentation would be an enormous task. We're okay with localizations smarting small and expanding over time.
+Localizing *all* of the Kubernetes documentation is an enormous task. It's okay to start small and expand over time.
-As an initial requirement, all localizations must include the following documentation at a minimum:
+At a minimum, all localizations must include:
Description | URLs
-----|-----
Home | [All heading and subheading URLs](https://kubernetes.io/docs/home/)
Setup | [All heading and subheading URLs](https://kubernetes.io/docs/setup/)
Tutorials | [Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/), [Hello Minikube](https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/)
+Site strings | [All site strings in a new localized TOML file](https://github.com/kubernetes/website/tree/master/i18n)
-Translated documents should have the same URL endpoint as the English docs (substituting the subdirectory of the `content` folder). To translate the [Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/) doc into German, for example, create the proper subfolder under the `content/de` folder and copy the English doc:
+Translated documents must reside in their own `content/**/` subdirectory, but otherwise follow the same URL path as the English source. For example, to prepare the [Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/) tutorial for translation into German, create a subfolder under the `content/de/` folder and copy the English source:
```shell
mkdir -p content/de/docs/tutorials
cp content/en/docs/tutorials/kubernetes-basics.md content/de/docs/tutorials/kubernetes-basics.md
```
-## Project logistics
+For an example of a localization-related [pull request](../create-pull-request), [this pull request](https://github.com/kubernetes/website/pull/10471) to the [Kubernetes website repo](https://github.com/kubernetes/website) added Korean localization to the Kubernetes docs.
-### Contact with project chairs
### Source files
-When starting a new localization effort, you should get in touch with one of the chairs of the Kubernetes [SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) organization. The current chairs are listed [here](https://github.com/kubernetes/community/tree/master/sig-docs#chairs).
+Localizations must use English files from the most recent release as their source. The most recent version is **{{< latest-version >}}**.
-### Project information
+To find source files for the most recent release:
-Teams working on localization efforts must provide a single point of contact, including the name and contact information of a person who can respond to or redirect questions or concerns, listed in the translation repository's main [`README`](https://help.github.com/articles/about-readmes/). You can provide an email address, email list, [Slack channel](https://slack.com/), or some other method of contact.
+1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website.
+2. Select the `release-1.X` branch for the most recent version.
+
+The latest version is **{{< latest-version >}}**, so the most recent release branch is [`{{< release-branch >}}`](https://github.com/kubernetes/website/tree/{{< release-branch >}}).
+
+### Site strings in i18n/
+
+Localizations must include the contents of [`i18n/en.toml`](https://github.com/kubernetes/website/blob/master/i18n/en.toml) in a new language-specific file. Using German as an example: `i18n/de.toml`.
+
+Add a new localization file to `i18n/`. For example, with German (`de`):
+
+```shell
+cp i18n/en.toml i18n/de.toml
+```
+
+Then translate the value of each string:
+
+```TOML
+[docs_label_i_am]
+other = "ICH BIN..."
+```
+
+Localizing site strings lets you customize site-wide text and features: for example, the legal copyright text in the footer on each page.
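+As a sketch of how a site string is consumed, a Hugo template can look a string up by its key with Hugo's `i18n` function (the template line below is illustrative, not a quote from the site's actual templates):
+
+```
+{{ i18n "docs_label_i_am" }}
+```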
+
+## Project logistics
+
+### Contact the SIG Docs chairs
+
+Contact one of the Kubernetes [SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs#chairs) chairs when you start a new localization.
### Maintainers
-Each localization repository must select its own maintainers. Maintainers can be from a single organization or multiple organizations.
+Each localization repository must provide its own maintainers. Maintainers can be from a single organization or multiple organizations. Whenever possible, localization pull requests should be approved by a reviewer from a different organization than the translator.
-In addition, all l10n work must be self-sustaining with the team's own resources.
+A localization must provide a minimum of two maintainers. (It's not possible to review and approve one's own work.)
-Wherever possible, every localized page must be approved by a reviewer from a different company than the translator.
+### Branching strategy
-### GitHub project
+Because localization projects are highly collaborative efforts, we encourage teams to work from a shared development branch.
-Each Kubernetes localization repository must track its overall progress with a [GitHub project](https://help.github.com/articles/creating-a-project-board/).
+To collaborate on a development branch:
-Projects must include at least these columns:
+1. A team member opens a development branch, usually by opening a new pull request against a source branch on https://github.com/kubernetes/website.
-- To Do
-- In Progress
-- Done
+ We recommend the following branch naming scheme:
-{{< note >}}
-**Note**: For an example GitHub project, see the [Chinese localization project](https://github.com/kubernetes/kubernetes-docs-zh/projects/1).
-{{< /note >}}
+ `dev-<source version>-<language code>.<team milestone>`
-### Repository structure
+ For example, an approver on a German localization team opens the development branch `dev-1.12-de.1` directly against the k/website repository, based on the source branch for Kubernetes v1.12.
-Each l10n repository must have branches for the different Kubernetes documentation release versions, matching the branches in the main [kubernetes/website](https://github.com/kubernetes/website) documentation repository. For example, the kubernetes/website `release-1.10` branch (https://github.com/kubernetes/website/tree/release-1.10) has a corresponding branch in the kubernetes/kubernetes-docs-zh repository (https://github.com/kubernetes/kubernetes-docs-zh/tree/release-1.10). These version branches keep track of the differences in the documentation between Kubernetes versions.
+2. Individual contributors open feature branches based on the development branch.
-### Upstream contributions
+ For example, a German contributor opens a pull request with changes to `kubernetes:dev-1.12-de.1` from `username:local-branch-name`.
+
+3. Approvers review and merge feature branches into the development branch.
+
+4. Periodically, an approver merges the development branch to its source branch.
-Upstream contributions are welcome and encouraged!
+Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German development branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc.
+
+Teams must merge localized content into the same release branch from which the content was sourced. For example, a development branch sourced from {{< release-branch >}} must be based on {{< release-branch >}}.
+
+An approver must maintain a development branch by keeping it current with its source branch and resolving merge conflicts. The longer a development branch stays open, the more maintenance it typically requires. Consider periodically merging development branches and opening new ones, rather than maintaining one extremely long-running development branch.
+
+While only approvers can merge pull requests, anyone can open a pull request for a new development branch. No special permissions are required.
+
+For more information about working from forks or directly from the repository, see ["fork and clone the repo"](#fork-and-clone-the-repo).
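+As a concrete sketch of this workflow from a contributor's side (the fork owner `<username>` is a placeholder, and the branch names reuse the hypothetical German examples above):
+
+```shell
+# One-time setup: clone your fork and add the main repository as a remote
+git clone https://github.com/<username>/website
+cd website
+git remote add upstream https://github.com/kubernetes/website
+
+# Start a feature branch based on the team's current development branch
+git fetch upstream
+git checkout -b local-branch-name upstream/dev-1.12-de.1
+
+# ...translate files and commit, then push and open a pull request
+# against kubernetes:dev-1.12-de.1 from username:local-branch-name
+git push origin local-branch-name
+```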
+
+### Upstream contributions
-For the sake of efficiency, limit upstream contributions to a single pull request per week, containing a single [squashed commit](https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit).
+SIG Docs welcomes upstream contributions and corrections to the English source! Open a [pull request](https://kubernetes.io/docs/contribute/start/#improve-existing-content) (from a fork) with any updates.
{{% /capture %}}
@@ -143,7 +208,7 @@ For the sake of efficiency, limit upstream contributions to a single pull reques
Once a l10n meets requirements for workflow and minimum output, SIG docs will:
-- Work with the localization team to implement language selection on the website.
-- Publicize availability through [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) channels.
+- Enable language selection on the website
+- Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/).
{{% /capture %}}
diff --git a/content/ja/OWNERS b/content/ja/OWNERS
new file mode 100644
index 0000000000000..91ea772f74b86
--- /dev/null
+++ b/content/ja/OWNERS
@@ -0,0 +1,11 @@
+# This is the localization project for Japanese.
+# Teams and members are visible at https://github.com/orgs/kubernetes/teams.
+
+reviewers:
+- sig-docs-ja-reviews
+
+approvers:
+- sig-docs-ja-owners
+
+labels:
+- language/ja
diff --git a/content/ko/OWNERS b/content/ko/OWNERS
new file mode 100644
index 0000000000000..45a4bc5870300
--- /dev/null
+++ b/content/ko/OWNERS
@@ -0,0 +1,11 @@
+# This is the localization project for Korean.
+# Teams and members are visible at https://github.com/orgs/kubernetes/teams.
+
+reviewers:
+- sig-docs-ko-reviews
+
+approvers:
+- sig-docs-ko-owners
+
+labels:
+- language/ko
diff --git a/content/zh/OWNERS b/content/zh/OWNERS
new file mode 100644
index 0000000000000..7eec09c28cf47
--- /dev/null
+++ b/content/zh/OWNERS
@@ -0,0 +1,11 @@
+# This is the localization project for Chinese.
+# Teams and members are visible at https://github.com/orgs/kubernetes/teams.
+
+reviewers:
+- sig-docs-zh-reviews
+
+approvers:
+- sig-docs-zh-owners
+
+labels:
+- language/zh
diff --git a/content/cn/_index.html b/content/zh/_index.html
similarity index 100%
rename from content/cn/_index.html
rename to content/zh/_index.html
diff --git a/content/zh/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md b/content/zh/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md
new file mode 100644
index 0000000000000..072ec679f3b41
--- /dev/null
+++ b/content/zh/blog/_posts/2016-04-00-Kubernetes-Network-Policy-APIs.md
@@ -0,0 +1,182 @@
+
+---
+title: "SIG-Networking: Kubernetes Network Policy APIs Coming in 1.3 "
+date: 2016-04-18
+slug: kubernetes-network-policy-apis
+url: /blog/2016/04/Kubernetes-Network-Policy-APIs
+---
+
+编者按:这一周,我们的封面主题是 [Kubernetes 特别兴趣小组](https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs));今天的文章由网络兴趣小组撰写,来谈谈 1.3 版本中即将出现的网络策略 API - 针对安全,隔离和多租户的策略。
+
+自去年下半年起,[Kubernetes 网络特别兴趣小组](https://kubernetes.slack.com/messages/sig-network/)经常定期开会,讨论如何将网络策略带入到 Kubernetes 之中,现在,我们也将慢慢看到这些工作的成果。
+
+很多用户经常会碰到的一个问题是, Kubernetes 的开放访问网络策略并不能很好地满足那些需要对 pod 或服务( service )访问进行更为精确控制的场景。今天,这个场景可以是在多层应用中,只允许临近层的访问。然而,随着组合微服务构建原生应用程序潮流的发展,如何控制流量在不同服务之间的流动会变得越发重要。
+
+在大多数的(公共的或私有的) IaaS 环境中,这种网络控制通常是将 VM 和“安全组”结合,其中安全组中成员的通信都是通过一个网络策略或者访问控制表( Access Control List, ACL )来定义,以及借助于网络包过滤器来实现。
+
+“网络特别兴趣小组”刚开始的工作是确定 [特定的使用场景](https://docs.google.com/document/d/1blfqiH4L_fpn33ZrnQ11v7LcYP0lmpiJ_RaapAPBbNU/edit?pref=2&pli=1#) ,这些用例需要基本的网络隔离来提升安全性。
+让这些API恰如其分地满足简单、共通的用例尤其重要,因为它们将为那些服务于 Kubernetes 内多租户,更为复杂的网络策略奠定基础。
+
+根据这些应用场景,我们考虑了几种不同的方法,然后定义了一个最简[策略规范](https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug/edit)。
+基本的想法是,如果是根据命名空间的不同来进行隔离,那么就会根据所被允许的流量类型的不同,来选择特定的 pods 。
+
+快速支持这个实验性 API 的办法是往 API 服务器上加入一个 `ThirdPartyResource` 扩展,这在 Kubernetes 1.2 就能办到。
+
+如果你还不是很熟悉这其中的细节, Kubernetes API 是可以通过定义 `ThirdPartyResources` 扩展在特定的 URL 上创建一个新的 API 端点。
+
+#### third-party-res-def.yaml
+
+```
+kind: ThirdPartyResource
+apiVersion: extensions/v1beta1
+metadata:
+  - name: network-policy.net.alpha.kubernetes.io
+description: "Network policy specification"
+versions:
+  - name: v1alpha1
+```
+
+```
+$ kubectl create -f third-party-res-def.yaml
+```
+
+这条命令会创建一个 API 端点(每个命名空间各一个):
+
+```
+/net.alpha.kubernetes.io/v1alpha1/namespace/default/networkpolicys/
+```
+
+第三方网络控制器可以监听这些端点,根据资源的创建,修改或者删除作出必要的响应。
+_注意:在接下来的 Kubernetes 1.3 发布中, Network Policy API 会以 beta API 的形式出现,这也就不需要像上面那样,创建一个 `ThirdPartyResource` API 端点了。_
+
+网络隔离默认是关闭的,因而,所有的 pods 之间可以自由地通信。
+然而,很重要的一点是,一旦开通了网络隔离,所有命名空间下的所有 pods 之间的通信都会被阻断,换句话说,开通隔离会改变 pods 的行为。
+
+网络隔离可以通过在命名空间上定义 `net.alpha.kubernetes.io` 里的 `network-isolation` 注解来开通或关闭:
+
+```
+net.alpha.kubernetes.io/network-isolation: [on | off]
+```
+
+一旦开通了网络隔离,**一定需要使用** 显式的网络策略来允许 pod 间的通信。
+
+一个策略规范可以被用到一个命名空间中,来定义策略的细节(如下所示):
+
+```
+POST /apis/net.alpha.kubernetes.io/v1alpha1/namespaces/tenant-a/networkpolicys/
+{
+  "kind": "NetworkPolicy",
+  "metadata": {
+    "name": "pol1"
+  },
+  "spec": {
+    "allowIncoming": {
+      "from": [
+        {
+          "pods": {
+            "segment": "frontend"
+          }
+        }
+      ],
+      "toPorts": [
+        {
+          "port": 80,
+          "protocol": "TCP"
+        }
+      ]
+    },
+    "podSelector": {
+      "segment": "backend"
+    }
+  }
+}
+```
+
+在这个例子中,**tenant-a** 空间将会使用 **pol1** 策略。
+具体而言,带有 **segment** 标签为 **backend** 的 pods 会允许 **segment** 标签为 **frontend** 的 pods 访问其端口 80 。
+
+今天,[Romana](http://romana.io/), [OpenShift](https://www.openshift.com/), [OpenContrail](http://www.opencontrail.org/)
以及 [Calico](http://projectcalico.org/) 都已经支持在命名空间和pods中使用网络策略。
+而 Cisco 和 VMware 也在努力实现支持之中。
+Romana 和 Calico 已经在最近的 KubeCon 中展示了如何在 Kubernetes 1.2 下使用这些功能。
+你可以在这里看到他们的演讲:
+[Romana](https://www.youtube.com/watch?v=f-dLKtK6qCs) ([幻灯片](http://www.slideshare.net/RomanaProject/kubecon-london-2016-ronana-cloud-native-sdn)),
+[Calico](https://www.youtube.com/watch?v=p1zfh4N4SX0) ([幻灯片](http://www.slideshare.net/kubecon/kubecon-eu-2016-secure-cloudnative-networking-with-project-calico)).
+
+**这是如何工作的**
+
+每套解决方案都有自己不同的具体实现。尽管今天,他们都借助于某种主机上( on-host )的实现机制,但未来的实现可以通过将策略使用在 hypervisor 上,亦或是直接使用到网络本身上来达到同样的目的。
+
+外部策略控制软件(不同实现各有不同)可以监听 pods 创建以及新加载策略的 API 端点。
+当产生一个需要策略配置的事件之后,监听器会确认这个请求,相应的,控制器会配置接口,使用该策略。
+下面的图例展示了 API 监视器和策略控制器是如何通过主机代理在本地应用网络策略的。
+这些 pods 的网络接口是通过主机上的 CNI 插件来进行配置的(并未在图中注明)。
+ ![controller.jpg](https://lh5.googleusercontent.com/zMEpLMYmask-B-rYWnbMyGb0M7YusPQFPS6EfpNOSLbkf-cM49V7rTDBpA6k9-Zdh2soMul39rz9rHFJfL-jnEn_mHbpg0E1WlM-wjU-qvQu9KDTQqQ9uBmdaeWynDDNhcT3UjX5)
+
+如果你一直受网络隔离或安全考虑的困扰,而犹豫要不要使用 Kubernetes 来开发应用程序,这些新的网络策略将会极大地解决你这方面的需求。并不需要等到 Kubernetes 1.3 ,现在就可以通过 `ThirdPartyResource` 的方式来使用这个实验性 API 。
+
+如果你对 Kubernetes 和网络感兴趣,可以通过下面的方式参与、加入其中:
+- 我们的[网络 slack channel](https://kubernetes.slack.com/messages/sig-network/)
+- 我们的[Kubernetes 特别网络兴趣小组](https://groups.google.com/forum/#!forum/kubernetes-sig-network) 邮件列表
+
+网络“特别兴趣小组”每两周下午三点(太平洋时间)开会,地址是[SIG-Networking hangout](https://zoom.us/j/5806599998).
+
+_--Chris Marino, Co-Founder, Pani Networks_
diff --git a/content/zh/blog/_posts/2016-04-00-Kubernetes-On-Aws_15.md b/content/zh/blog/_posts/2016-04-00-Kubernetes-On-Aws_15.md
new file mode 100644
index 0000000000000..c5de9e6a1285e
--- /dev/null
+++ b/content/zh/blog/_posts/2016-04-00-Kubernetes-On-Aws_15.md
@@ -0,0 +1,129 @@
+
+---
+title: " 如何在AWS上部署安全,可审计,可复现的k8s集群 "
+date: 2016-04-15
+slug: kubernetes-on-aws_15
+url: /blog/2016/04/Kubernetes-On-Aws_15
+---
+
+_今天的客座文章是由Colin Hom撰写,[CoreOS](https://coreos.com/)的基础架构工程师。CoreOS致力于推广谷歌的基础架构模式(Google's Infrastructure for Everyone Else, #GIFEE),让全世界的容器都能在CoreOS Linux, Tectonic 和 Quay上安全运行。_
+
+_加入到我们的[柏林CoreOS盛宴](https://coreos.com/fest/),这是一个开源分布式系统主题的会议,在这里可以了解到更多关于CoreOS和Kubernetes的信息。_
+
+在CoreOS, 我们一直都是在生产环境中大规模部署Kubernetes。今天我们非常兴奋地想分享一款工具,它能让你的Kubernetes生产环境大规模部署更加的轻松。Kube-aws这个工具可以用来在AWS上部署可审计,可复现的k8s集群,而CoreOS本身就在生产环境中使用它。
+
+也许今天,你更多的可能是用手工的方式来拼接Kubernetes组件。但有了这个工具之后,Kubernetes可以流水化地打包、交付,节省时间,减少了相互间的依赖,更加快捷地实现生产环境的部署。
+
+借助于一个简单的模板系统,来生成集群配置,这么做是因为一套声明式的配置模板可以版本控制,审计以及重复部署。而且,由于整个创建过程只用到了[AWS CloudFormation](https://aws.amazon.com/cloudformation/) 和 cloud-init,你也就不需要额外用到其它的配置管理工具。开箱即用!
+
+如果想跳过介绍,直接了解这个项目,可以看看[kube-aws的最新发布](https://github.com/coreos/coreos-kubernetes/releases),支持Kubernetes 1.2.x。如果要部署集群,可以参考[文档](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html).
+
+**为什么是kube-aws?安全,可审计,可复现**
+
+Kube-aws设计初衷有三个目标。
+
+**安全** : TLS 资源在嵌入到CloudFormation JSON之前,通过[AWS 密钥管理服务](https://aws.amazon.com/kms/)加密。通过单独管理KMS密钥的[IAM 策略](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html),可以将CloudFormation栈的访问与TLS密钥的访问分离开。
+
+**可审计** : kube-aws是围绕集群资产的概念来创建。这些配置和账户资产是对集群的完全描述。由于KMS被用来加密TLS资产,因而可以无所顾忌地将未加密的CloudFormation栈 JSON签入到版本控制服务中。
+
+**可复现** : _--export_ 选项将参数化的集群定义打包成一整个JSON文件,对应一个CloudFormation栈。这个文件可以版本控制,然后,如果需要的话,通过现有的部署工具直接提交给CloudFormation API。
+
+**如何开始用kube-aws**
+
+在此基础之上,kube-aws也实现了一些功能,使得在AWS上部署Kubernetes集群更加容易,灵活。下面是一些例子。
+
+**Route53集成** : Kube-aws 可以管理你的集群DNS记录,作为配置过程的一部分。
+
+cluster.yaml
+```
+externalDNSName: my-cluster.kubernetes.coreos.com
+createRecordSet: true
+hostedZone: kubernetes.coreos.com
+recordSetTTL: 300
+```
+
+**现有VPC支持** : 将集群部署到现有的VPC上。
+
+cluster.yaml
+```
+vpcId: vpc-xxxxx
+routeTableId: rtb-xxxxx
+```
+
+**验证** : kube-aws 支持验证 cloud-init 和 CloudFormation定义,以及集群栈会集成用到的外部资源。例如,下面就是一个cloud-config,外带一个拼写错误的参数:
+
+userdata/cloud-config-worker
+```
+#cloud-config
+coreos:
+  flannel:
+    interrface: $private\_ipv4
+    etcd\_endpoints: {{ .ETCDEndpoints }}
+```
+
+$ kube-aws validate
+
+ \> Validating UserData...
+ Error: cloud-config validation errors:
+ UserDataWorker: line 4: warning: unrecognized key "interrface"
+
+考虑如何起步?看看[kube-aws 文档](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)!
+
+**未来的工作**
+
+一如既往,kube-aws的目标是让生产环境部署更加的简单。尽管我们现在在AWS下使用kube-aws进行生产环境部署,但是这个项目还是pre-1.0,所以还有很多的地方,kube-aws需要考虑、扩展。
+
+**容错** : CoreOS坚信 Kubernetes on AWS是强健的平台,适于容错、自恢复部署。在接下来的几个星期,kube-aws将会迎接新的考验:混世猴子([Chaos Monkey](https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey))测试 - 控制平面以及全部!
+
+**零停机更新** : 更新CoreOS节点和Kubernetes组件不需要停机,也不需要考虑实例更新策略(instance replacement strategy)的影响。
+
+有一个[github issue](https://github.com/coreos/coreos-kubernetes/issues/340)来追踪这些工作进展。我们期待你的参与,提交issue,或是直接贡献。
+
+_想要更多地了解Kubernetes,来[柏林CoreOS盛宴](https://coreos.com/fest/)看看,- 五月 9-10, 2016_
+
+_– Colin Hom, 基础架构工程师, CoreOS_
diff --git a/content/zh/blog/_posts/2018-03-00-Principles-Of-Container-App-Design.md b/content/zh/blog/_posts/2018-03-00-Principles-Of-Container-App-Design.md
new file mode 100644
index 0000000000000..4797bb7986ea4
--- /dev/null
+++ b/content/zh/blog/_posts/2018-03-00-Principles-Of-Container-App-Design.md
@@ -0,0 +1,81 @@
+---
+title: "Principles of Container-based Application Design"
+date: 2018-03-15
+slug: principles-of-container-app-design
+url: /blog/2018/03/Principles-Of-Container-App-Design
+---
+
+现如今,几乎所有的应用程序都可以在容器中运行。但创建云原生应用,通过诸如 Kubernetes 的云原生平台更有效地自动化运行、管理容器化的应用却需要额外的工作。
+云原生应用需要考虑故障;即使是在底层架构发生故障时也需要可靠地运行。
+为了提供这样的功能,像 Kubernetes 这样的云原生平台需要向运行的应用程序强加一些契约和约束。
+这些契约确保应用可以在符合某些约束的条件下运行,从而使得平台可以自动化应用管理。
+
+我已经为容器化应用如何成为云原生应用概括出了[七项原则][1]。
+
+![Container Design Principles][2]
+
+这里所述的七项原则涉及构建时和运行时两类关注点。
+
+#### 构建时
+
+* **单一关注点:** 每个容器只解决一个关注点,并且完成得很好。
+* **自包含:** 一个容器只依赖Linux内核。额外的库要求可以在构建容器时加入。
+* **镜像不变性:** 容器化的应用意味着不变性,一旦构建完成,不需要根据环境的不同而重新构建。
+
+#### 运行时
+
+* **高可观测性:** 每个容器必须实现所有必要的 API 来帮助平台以最好的方式来观测、管理应用。
+* **生命周期一致性:** 一个容器必须要能从平台中获取事件信息,并作出相应的反应。
+* **进程易处理性:** 容器化应用的寿命一定要尽可能地短暂,这样,可以随时被另一个容器所替换。
+* **运行时限制:** 每个容器都必须要声明自己的资源需求,并将资源使用限制在所需要的范围之内。
+
+构建时原则保证了容器拥有合适的粒度、一致性以及结构。运行时原则明确了容器化应用必须要实现哪些功能才能成为合格的云原生应用。遵循这些原则可以帮助你的应用适应 Kubernetes 上的自动化。
+
+白皮书可以免费下载:
+
+想要了解更多关于如何面向 Kubernetes 设计云原生应用,可以看看我的 [Kubernetes 模式][3] 一书。
+
+— [Bilgin Ibryam][4], 首席架构师, Red Hat
+
+Twitter:
+Blog: [http://www.ofbizian.com][5]
+Linkedin:
+
+Bilgin Ibryam (@bibryam) 是 Red Hat 的一名首席架构师, ASF 的开源贡献者,博主,作者以及演讲者。
+他是 Camel 设计模式、 Kubernetes 模式的作者。在他的日常生活中,他非常享受指导、培训以及帮助各个团队更加成功地使用分布式系统、微服务、容器,以及云原生应用。
+
+[1]: https://www.redhat.com/en/resources/cloud-native-container-design-whitepaper
+[2]: https://lh5.googleusercontent.com/1XqojkVC0CET1yKCJqZ3-0VWxJ3W8Q74zPLlqnn6eHSJsjHOiBTB7EGUX5o_BOKumgfkxVdgBeLyoyMfMIXwVm9p2QXkq_RRy2mDJG1qEExJDculYL5PciYcWfPAKxF2-DGIdiLw
+[3]: http://leanpub.com/k8spatterns/
+[4]: http://twitter.com/bibryam
+[5]: http://www.ofbizian.com/
diff --git a/content/zh/blog/_posts/2018-06-28-Airflow-Kubernetes-Operator.md b/content/zh/blog/_posts/2018-06-28-Airflow-Kubernetes-Operator.md
new file mode 100644
index 0000000000000..6ff8dc79c83bb
--- /dev/null
+++ b/content/zh/blog/_posts/2018-06-28-Airflow-Kubernetes-Operator.md
@@ -0,0 +1,676 @@
+---
+layout: blog
+# title: 'Airflow on Kubernetes (Part 1): A Different Kind of Operator'
+date: 2018-06-28
+title: 'Airflow在Kubernetes中的使用(第一部分):一种不同的操作器'
+cn-approvers:
+- congfairy
+---
+
+作者: Daniel Imberman (Bloomberg LP)
+
+## 介绍
+
+作为 Bloomberg [继续致力于开发 Kubernetes 生态系统](https://www.techatbloomberg.com/blog/bloomberg-awarded-first-cncf-end-user-award-contributions-kubernetes/)的一部分,我们很高兴能够宣布 Kubernetes Airflow Operator 的发布;这是流行的工作流程编排框架 [Apache Airflow](https://airflow.apache.org/) 的一种机制,可以使用 Kubernetes API 原生启动任意的 Kubernetes Pod。
+
+## 什么是Airflow?
+
+Apache Airflow是DevOps“Configuration As Code”理念的一种实现。 Airflow允许用户使用简单的Python对象DAG(有向无环图)启动多步骤流水线。 您可以在易于阅读的UI中定义依赖关系,以编程方式构建复杂的工作流,并监视调度的作业。
+
+## 为什么在Kubernetes上使用Airflow?
+
+自成立以来,Airflow的最大优势在于其灵活性。 Airflow提供广泛的服务集成,包括Spark和HBase,以及各种云提供商的服务。 Airflow还通过其插件框架提供轻松的可扩展性。但是,该项目的一个限制是Airflow用户仅限于执行时Airflow站点上存在的框架和客户端。单个组织可以拥有各种Airflow工作流程,范围从数据科学流到应用程序部署。用例中的这种差异会在依赖关系管理中产生问题,因为两个团队可能会在其工作流程中使用截然不同的库。
+
+为了解决这个问题,我们借助 Kubernetes,允许用户启动任意的 Kubernetes pod 和配置。 Airflow用户现在可以在其运行时环境,资源和机密上拥有全部权限,基本上将Airflow转变为“您想要的任何工作”工作流程协调器。
+
+## Kubernetes Operator
+
+在进一步讨论之前,我们应该澄清Airflow中的[Operator](https://airflow.apache.org/concepts.html#operators)是一个任务定义。 当用户创建DAG时,他们将使用像“SparkSubmitOperator”或“PythonOperator”这样的operator分别提交/监视Spark作业或Python函数。 Airflow附带了Apache Spark,BigQuery,Hive和EMR等框架的内置 operator。 它还提供了一个插件入口点,允许DevOps工程师开发自己的连接器。
+
+Airflow用户一直在寻找更易于管理部署和ETL流的方法。 在增加监控的同时,任何解耦流程的机会都可以减少未来的停机等问题。 以下是Airflow Kubernetes Operator提供的好处:
+
+* 提高部署灵活性:
+Airflow的插件API一直为希望在其DAG中测试新功能的工程师提供了重要的福利。 不利的一面是,每当开发人员想要创建一个新的operator时,他们就必须开发一个全新的插件。 现在,任何可以在Docker容器中运行的任务都可以通过完全相同的 operator 访问,而无需维护额外的Airflow代码。
+
+* 配置和依赖的灵活性:
+对于在静态 Airflow worker 中运行的operator,依赖关系管理可能变得非常困难。 如果开发人员想要运行一个需要[SciPy](https://www.scipy.org) 的任务和另一个需要[NumPy](http://www.numpy.org) 的任务,开发人员必须维护所有Airflow节点中的依赖关系或将任务卸载到其他计算机(如果外部计算机以未跟踪的方式更改,则可能导致错误)。 自定义Docker镜像允许用户确保任务环境,配置和依赖关系完全是幂等的。
+
+* 使用 Kubernetes Secret 以增加安全性:
+处理敏感数据是任何 DevOps 工程师的核心职责。 Airflow用户总是有机会严格地按“按需知密”原则隔离任何API密钥、数据库密码和登录凭据。 使用 Kubernetes operator,用户可以利用Kubernetes Vault技术存储所有敏感数据。 这意味着 Airflow worker 将永远无法访问此信息,并且可以简单地要求只使用它们所需要的密钥信息来构建 pod。
+
+# 架构
+
+Kubernetes Operator使用[Kubernetes Python客户端](https://github.com/kubernetes-client/Python)生成由APIServer处理的请求(1)。 然后,Kubernetes将使用您定义的需求启动您的pod(2)。镜像中将加载环境变量、Secret和依赖项,并执行单个命令。 一旦启动作业,operator只需要监视跟踪日志的状况(3)。 用户可以选择将日志本地收集到调度程序或当前位于其Kubernetes集群中的任何分布式日志记录服务。
+
+# 使用 Kubernetes Operator
+
+## 一个基本的例子
+
+以下DAG可能是我们可以编写的最简单的示例,以显示Kubernetes Operator的工作原理。
这个DAG在Kubernetes上创建了两个pod:一个带有Python的Linux发行版和一个没有它的基本Ubuntu发行版。 Python pod将正确运行Python请求,而没有Python的那个将向用户报告失败。 如果Operator正常工作,则应该完成“passing-task”pod,而“failing-task”pod则向Airflow网络服务器返回失败。
+
+```python
+from airflow import DAG
+from datetime import datetime, timedelta
+from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
+from airflow.operators.dummy_operator import DummyOperator
+
+default_args = {
+    'owner': 'airflow',
+    'depends_on_past': False,
+    'start_date': datetime.utcnow(),
+    'email': ['airflow@example.com'],
+    'email_on_failure': False,
+    'email_on_retry': False,
+    'retries': 1,
+    'retry_delay': timedelta(minutes=5)
+}
+
+dag = DAG(
+    'kubernetes_sample', default_args=default_args, schedule_interval=timedelta(minutes=10))
+
+start = DummyOperator(task_id='run_this_first', dag=dag)
+
+passing = KubernetesPodOperator(namespace='default',
+                                image="python:3.6",
+                                cmds=["python", "-c"],
+                                arguments=["print('hello world')"],
+                                labels={"foo": "bar"},
+                                name="passing-test",
+                                task_id="passing-task",
+                                get_logs=True,
+                                dag=dag
+                                )
+
+failing = KubernetesPodOperator(namespace='default',
+                                image="ubuntu:16.04",
+                                cmds=["python", "-c"],
+                                arguments=["print('hello world')"],
+                                labels={"foo": "bar"},
+                                name="fail",
+                                task_id="failing-task",
+                                get_logs=True,
+                                dag=dag
+                                )
+
+passing.set_upstream(start)
+failing.set_upstream(start)
+```
+
+## 但这与我的工作流程有什么关系?
+
+虽然这个例子只使用基础镜像,但Docker的神奇之处在于,这个相同的DAG可以用于您想要的任何镜像/命令配对。 以下是推荐的CI/CD管道,用于在Airflow DAG上运行生产就绪代码。
+
+### 1:github中的PR
+使用Travis或Jenkins运行单元和集成测试,请您的朋友PR您的代码,并合并到主分支以触发自动CI构建。
+
+### 2:CI/CD构建:Jenkins -> Docker 镜像
+
+[在Jenkins构建中生成Docker镜像并递增版本号](https://getintodevops.com/blog/building-your-first-Docker-image-with-jenkins-2-guide-for-developers)。
+
+### 3:Airflow启动任务
+
+最后,更新您的DAG以反映新版本,您应该准备好了!
+
+```python
+production_task = KubernetesPodOperator(namespace='default',
+                                        # image="my-production-job:release-1.0.1", <-- old release
+                                        image="my-production-job:release-1.0.2",
+                                        cmds=["python", "-c"],
+                                        arguments=["print('hello world')"],
+                                        name="fail",
+                                        task_id="failing-task",
+                                        get_logs=True,
+                                        dag=dag
+                                        )
+```
+
+# 启动测试部署
+
+由于 Kubernetes Operator 尚未正式发布,我们还没有发布官方的 [helm](https://helm.sh/) chart 或 operator(但两者目前都在进行中)。 但是,我们在下面列出了基本部署的说明,并且正在积极寻找测试人员来尝试这一新功能。 要试用此系统,请按以下步骤操作:
+
+## 步骤1:将 kubeconfig 设置为指向 kubernetes 集群
+
+## 步骤2:clone Airflow 仓库
+
+运行 `git clone https://github.com/apache/incubator-airflow.git` 来 clone 官方 Airflow 仓库。
+
+## 步骤3:运行
+
+为了运行这个基本Deployment,我们选用目前用于Kubernetes Executor的集成测试脚本(将在本系列的下一篇文章中对此进行解释)。 要启动此部署,请运行以下三个命令:
+
+```
+sed -ie "s/KubernetesExecutor/LocalExecutor/g" scripts/ci/kubernetes/kube/configmaps.yaml
+./scripts/ci/kubernetes/Docker/build.sh
+./scripts/ci/kubernetes/kube/deploy.sh
+```
+
+在我们继续之前,让我们讨论这些命令正在做什么:
+
+### `sed -ie "s/KubernetesExecutor/LocalExecutor/g" scripts/ci/kubernetes/kube/configmaps.yaml`
+
+Kubernetes Executor是另一种Airflow功能,允许动态分配任务以解决幂等pod的问题。我们将其切换到LocalExecutor的原因只是一次引入一个功能。如果您想尝试Kubernetes Executor,欢迎您跳过此步骤,但我们将在以后的文章中详细介绍。
+
+### `./scripts/ci/kubernetes/Docker/build.sh`
+
+此脚本将对Airflow主分支代码进行打包,以根据Airflow的发行文件构建Docker容器。
+
+### `./scripts/ci/kubernetes/kube/deploy.sh`
+
+最后,我们在您的群集上创建完整的Airflow部署。这包括Airflow配置、postgres后端、webserver和调度程序,以及它们之间的所有必要服务。需要注意的一点是,提供的角色绑定是集群管理员,因此如果您没有该集群的权限级别,可以在 scripts/ci/kubernetes/kube/airflow.yaml 中进行修改。
+
+## 步骤4:登录您的网络服务器
+
+现在您的Airflow实例正在运行,让我们来看看UI!用户界面位于Airflow pod的8080端口,因此只需运行:
+
+```
+WEB=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep "airflow" | head -1)
+kubectl port-forward $WEB 8080:8080
+```
+
+现在,Airflow UI将存在于http://localhost:8080上。 要登录,只需输入 airflow/airflow ,您就可以完全访问Airflow Web UI。
+
+## 步骤5:上传测试文档
+
+要修改/添加自己的DAG,可以使用kubectl cp将本地文件上传到Airflow调度程序的DAG文件夹中。 然后,Airflow将读取新的DAG并自动将其上传到其系统。 以下命令将任何本地文件上载到正确的目录中:
+
+kubectl cp <本地文件> <命名空间>/<pod 名称>:/root/airflow/dags -c scheduler
+
+## 步骤6:使用它!
+
+# 那么我什么时候可以使用它?
+
+虽然此功能仍处于早期阶段,但我们希望在未来几个月内将其广泛发布。
+
+# 参与其中
+
+此功能只是将Apache Airflow集成到Kubernetes中的多项主要工作的开始。 Kubernetes Operator已合并到[Airflow的1.10发布分支](https://github.com/apache/incubator-airflow/tree/v1-10-test)(作为实验模式中的执行模块),此外还有一个称为 Kubernetes Executor 的完整 k8s 原生调度程序(相关文章即将发布)。这些功能仍处于早期采用者/贡献者可能对这些功能的未来产生巨大影响的阶段。
+
+对于有兴趣加入这些工作的人,我建议按照以下步骤:
+
+ * 加入 airflow-dev 邮件列表:dev@airflow.apache.org。
+ * 在 [Apache Airflow JIRA](https://issues.apache.org/jira/projects/AIRFLOW/issues/) 中提出问题。
+ * 周三上午10点(太平洋标准时间)加入我们的 SIG-BigData 会议。
+ * 在 kubernetes.slack.com 上的 #sig-big-data 找到我们。
+
+特别感谢Apache Airflow和Kubernetes社区,特别是Grant Nicholas,Ben Goldberg,Anirudh Ramanathan,Fokko Driesprong和Bolke de Bruin,感谢您对这些功能的巨大帮助以及我们未来的努力。
diff --git a/content/zh/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md b/content/zh/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md
new file mode 100644
index 0000000000000..560fe618bcf1b
--- /dev/null
+++ b/content/zh/blog/_posts/2018-07-09-IPVS-In-Cluster-Load-Balancing.md
@@ -0,0 +1,401 @@
+---
+title: 基于IPVS的集群内部负载均衡
+cn-approvers:
+- congfairy
+layout: blog
+# title: 'IPVS-Based In-Cluster Load Balancing Deep Dive'
+date: 2018-07-09
+---
+
+作者: Jun Du(华为), Haibin Xie(华为), Wei Liang(华为)
+
+注意:这篇文章出自介绍 Kubernetes 1.11 新特性的系列深度文章。
+
+## 介绍
+
+根据 Kubernetes 1.11 发布的博客文章, 我们宣布基于 IPVS 的集群内部服务负载均衡已达到一般可用性。 在这篇博客中,我们将带您深入了解该功能。
+
+## 什么是 IPVS?
+
+IPVS (IP Virtual Server)是在 Netfilter 上层构建的,并作为 Linux 内核的一部分,实现传输层负载均衡。
+
+IPVS 集成在 LVS(Linux Virtual Server,Linux 虚拟服务器)中,它在主机上运行,并在物理服务器集群前作为负载均衡器。IPVS 可以将基于 TCP 和 UDP 服务的请求定向到真实服务器,并使真实服务器的服务在单个IP地址上显示为虚拟服务。 因此,IPVS 自然支持 Kubernetes 服务。
+
+## 为什么为 Kubernetes 选择 IPVS?
+
+随着 Kubernetes 的使用增长,其资源的可扩展性变得越来越重要。特别是,服务的可扩展性对于运行大型工作负载的开发人员/公司采用 Kubernetes 至关重要。
+
+Kube-proxy 是服务路由的构建块,它依赖于久经考验的 iptables 来实现支持核心的服务类型,如 ClusterIP 和 NodePort。 但是,iptables 难以扩展到成千上万的服务,因为它纯粹是为防火墙而设计的,并且基于内核规则列表。
+
+尽管 Kubernetes 在版本v1.6中已经支持5000个节点,但使用 iptables 的 kube-proxy 实际上是将集群扩展到5000个节点的瓶颈。 一个例子是,在5000节点集群中使用 NodePort 服务,如果我们有2000个服务并且每个服务有10个 pod,这将在每个工作节点上至少产生20000个 iptables 记录,这可能使内核非常繁忙。
+
+另一方面,使用基于 IPVS 的集群内服务负载均衡可以为这种情况提供很多帮助。 IPVS 专门用于负载均衡,并使用更高效的数据结构(哈希表),允许几乎无限的规模扩张。
+
+## 基于 IPVS 的 Kube-proxy
+
+### 参数更改
+
+参数: --proxy-mode 除了现有的用户空间和 iptables 模式,IPVS 模式通过 --proxy-mode=ipvs 进行配置。 它隐式使用 IPVS NAT 模式进行服务端口映射。
+
+参数: --ipvs-scheduler
+
+添加了一个新的 kube-proxy 参数来指定 IPVS 负载均衡算法,参数为 --ipvs-scheduler。 如果未配置,则默认为 round-robin 算法(rr)。
+
+- rr: round-robin
+- lc: least connection
+- dh: destination hashing
+- sh: source hashing
+- sed: shortest expected delay
+- nq: never queue
+
+将来,我们可以实现特定于服务的调度程序(可能通过注解),该调度程序具有更高的优先级并覆盖该值。
+
+参数: --cleanup-ipvs 类似于 --cleanup-iptables 参数,如果为 true,则清除在 IPVS 模式下创建的 IPVS 配置和 IPTables 规则。
+
+参数: --ipvs-sync-period 刷新 IPVS 规则的最大间隔时间(例如'5s','1m')。 必须大于0。
+
+参数: --ipvs-min-sync-period 刷新 IPVS 规则的最小间隔时间间隔(例如'5s','1m')。 必须大于0。
+
+参数: --ipvs-exclude-cidrs 清除 IPVS 规则时 IPVS 代理不应触及的 CIDR 的逗号分隔列表,因为 IPVS 代理无法区分 kube-proxy 创建的 IPVS 规则和用户自己的 IPVS 规则。 如果您在环境中使用 IPVS proxier 和您自己的 IPVS 规则,则应指定此参数,否则将清除原始规则。
+
+## 设计注意事项
+
+### IPVS 服务网络拓扑
+
+创建 ClusterIP 类型服务时,IPVS proxier 将执行以下三项操作:
+
+- 确保节点中存在虚拟接口,默认为 kube-ipvs0
+- 将服务 IP 地址绑定到虚拟接口
+- 分别为每个服务 IP 地址创建 IPVS 虚拟服务器
+
+这是一个例子:
+
+ # kubectl describe svc nginx-service
+ Name: nginx-service
+ ...
+ Type: ClusterIP
+ IP: 10.102.128.4
+ Port: http 3080/TCP
+ Endpoints: 10.244.0.235:8080,10.244.1.237:8080
+ Session Affinity: None
+
+ # ip addr
+ ...
 73: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
+ link/ether 1a:ce:f5:5f:c1:4d brd ff:ff:ff:ff:ff:ff
+ inet 10.102.128.4/32 scope global kube-ipvs0
+ valid_lft forever preferred_lft forever
+
+ # ipvsadm -ln
+ IP Virtual Server version 1.2.1 (size=4096)
+ Prot LocalAddress:Port Scheduler Flags
+ -> RemoteAddress:Port Forward Weight ActiveConn InActConn
+ TCP 10.102.128.4:3080 rr
+ -> 10.244.0.235:8080 Masq 1 0 0
+ -> 10.244.1.237:8080 Masq 1 0 0
+
+请注意,Kubernetes 服务和 IPVS 虚拟服务器之间的关系是“1:N”。 例如,考虑具有多个 IP 地址的 Kubernetes 服务。 外部 IP 类型服务有两个 IP 地址 - 集群IP和外部 IP。 然后,IPVS 代理将创建2个 IPVS 虚拟服务器 - 一个用于集群 IP,另一个用于外部 IP。 Kubernetes 的 endpoint(每个 IP+端口对)与 IPVS 虚拟服务器之间的关系是“1:1”。
+
+删除 Kubernetes 服务将触发删除相应的 IPVS 虚拟服务器,IPVS 物理服务器及其绑定到虚拟接口的 IP 地址。
+
+### 端口映射
+
+IPVS 中有三种代理模式:NAT(masq),IPIP 和 DR。 只有 NAT 模式支持端口映射。 Kube-proxy 利用 NAT 模式进行端口映射。 以下示例显示 IPVS 服务端口3080到Pod端口8080的映射。
+
+ TCP 10.102.128.4:3080 rr
+ -> 10.244.0.235:8080 Masq 1 0 0
+ -> 10.244.1.237:8080 Masq 1 0 0
+
+### 会话关系
+
+IPVS 支持客户端 IP 会话关联(持久连接)。 当服务指定会话关系时,IPVS 代理将在 IPVS 虚拟服务器中设置超时值(默认为180分钟= 10800秒)。 例如:
+
+ # kubectl describe svc nginx-service
+ Name: nginx-service
+ ...
+ IP: 10.102.128.4
+ Port: http 3080/TCP
+ Session Affinity: ClientIP
+
+ # ipvsadm -ln
+ IP Virtual Server version 1.2.1 (size=4096)
+ Prot LocalAddress:Port Scheduler Flags
+ -> RemoteAddress:Port Forward Weight ActiveConn InActConn
+ TCP 10.102.128.4:3080 rr persistent 10800
+
+### IPVS 代理中的 Iptables 和 Ipset
+
+IPVS 用于负载均衡,它无法处理 kube-proxy 中的其他问题,例如包过滤、数据包欺骗、SNAT 等。
+
+IPVS proxier 在上述场景中利用 iptables。 具体来说,ipvs proxier 将在以下4种情况下依赖于 iptables:
+
+- kube-proxy 以 --masquerade-all=true 开头
+- 在 kube-proxy 启动中指定集群 CIDR
+- 支持 Loadbalancer 类型服务
+- 支持 NodePort 类型的服务
+
+但是,我们不想创建太多的 iptables 规则。 所以我们采用 ipset 来减少 iptables 规则。 以下是 IPVS proxier 维护的 ipset 集表:
+
+| 设置名称 | 成员 | 用法 |
+| --- | --- | --- |
+| KUBE-CLUSTER-IP | 所有服务 IP + 端口 | 在 masquerade-all=true 或指定了 clusterCIDR 的情况下进行伪装 |
+| KUBE-LOOP-BACK | 所有服务 IP + 端口 + IP | 解决数据包欺骗问题 |
+| KUBE-EXTERNAL-IP | 服务外部 IP + 端口 | 将数据包伪装成外部 IP |
+| KUBE-LOAD-BALANCER | 负载均衡器入口 IP + 端口 | 将数据包伪装成 Load Balancer 类型的服务 |
+| KUBE-LOAD-BALANCER-LOCAL | 负载均衡器入口 IP + 端口,且 externalTrafficPolicy=local | 接受发往设置了 externalTrafficPolicy=local 的 Load Balancer 的数据包 |
+| KUBE-LOAD-BALANCER-FW | 负载均衡器入口 IP + 端口,且设置了 loadBalancerSourceRanges | 丢弃设置了 loadBalancerSourceRanges 的 Load Balancer 类型 Service 的数据包 |
+| KUBE-LOAD-BALANCER-SOURCE-CIDR | 负载均衡器入口 IP + 端口 + 源 CIDR | 接受设置了 loadBalancerSourceRanges 的 Load Balancer 类型 Service 的数据包 |
+| KUBE-NODE-PORT-TCP | NodePort 类型服务的 TCP 端口 | 将数据包伪装成 NodePort(TCP) |
+| KUBE-NODE-PORT-LOCAL-TCP | NodePort 类型服务的 TCP 端口,且 externalTrafficPolicy=local | 接受发往设置了 externalTrafficPolicy=local 的 NodePort 服务的数据包 |
+| KUBE-NODE-PORT-UDP | NodePort 类型服务的 UDP 端口 | 将数据包伪装成 NodePort(UDP) |
+| KUBE-NODE-PORT-LOCAL-UDP | NodePort 类型服务的 UDP 端口,且 externalTrafficPolicy=local | 接受发往设置了 externalTrafficPolicy=local 的 NodePort 服务的数据包 |
+
+通常,对于 IPVS proxier,无论我们有多少 Service/Pod,iptables 规则的数量都是静态的。
+
+## 在 IPVS 模式下运行 kube-proxy
+
+目前,本地脚本,GCE 脚本和 kubeadm 支持通过导出环境变量(KUBE_PROXY_MODE=ipvs)或指定标志(--proxy-mode=ipvs)来切换 IPVS 代理模式。 在运行IPVS 代理之前,请确保已安装 IPVS 所需的内核模块。
+
+ ip_vs
+ ip_vs_rr
+ ip_vs_wrr
+ ip_vs_sh
+ nf_conntrack_ipv4
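+下面是一个加载并检查这些模块的示意脚本(仅作说明,假设系统中可以直接使用 `modprobe` 和 `lsmod` 命令):
+
+```shell
+# 加载 IPVS 模式所需的内核模块
+for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
+  modprobe $mod
+done
+
+# 确认模块已经加载
+lsmod | grep -E 'ip_vs|nf_conntrack_ipv4'
+```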
+最后,对于 Kubernetes v1.10,“SupportIPVSProxyMode” 默认设置为 “true”。 对于 Kubernetes v1.11 ,该选项已完全删除。 但是,您需要在v1.10之前为Kubernetes 明确启用 --feature-gates=SupportIPVSProxyMode=true。
+
+## 参与其中
+
+参与 Kubernetes 的最简单方法是加入众多[特别兴趣小组](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIG)中与您的兴趣一致的小组。 你有什么想要向 Kubernetes 社区广播的吗? 在我们的每周[社区会议](https://github.com/kubernetes/community/blob/master/communication.md#weekly-meeting)或通过以下渠道分享您的声音。
+
+感谢您的持续反馈和支持。
+
+* 在 [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) 上发布问题(或回答问题)
+* 加入 [K8sPort](http://k8sport.org/) 的倡导者社区门户网站
+* 在 Twitter 上关注我们 [@Kubernetesio](https://twitter.com/kubernetesio) 获取最新更新
+* 在 [Slack](http://slack.k8s.io/) 上与社区聊天
+* 分享您的 Kubernetes [故事](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
diff --git a/content/cn/docs/.gitkeep b/content/zh/docs/.gitkeep
similarity index 100%
rename from content/cn/docs/.gitkeep
rename to content/zh/docs/.gitkeep
diff --git a/content/cn/docs/_index.md b/content/zh/docs/_index.md
similarity index 100%
rename from content/cn/docs/_index.md
rename to content/zh/docs/_index.md
diff --git a/content/cn/docs/admin/accessing-the-api.md b/content/zh/docs/admin/accessing-the-api.md
similarity index 100%
rename from content/cn/docs/admin/accessing-the-api.md
rename to content/zh/docs/admin/accessing-the-api.md
diff --git a/content/cn/docs/admin/authorization/_index.md b/content/zh/docs/admin/authorization/_index.md
similarity index 100%
rename from content/cn/docs/admin/authorization/_index.md
rename to content/zh/docs/admin/authorization/_index.md
diff --git a/content/cn/docs/admin/authorization/abac.md b/content/zh/docs/admin/authorization/abac.md
similarity index 100%
rename from content/cn/docs/admin/authorization/abac.md
rename to content/zh/docs/admin/authorization/abac.md
diff --git a/content/cn/docs/admin/authorization/webhook.md b/content/zh/docs/admin/authorization/webhook.md
similarity index 100%
rename from content/cn/docs/admin/authorization/webhook.md
rename to content/zh/docs/admin/authorization/webhook.md
diff --git a/content/cn/docs/admin/bootstrap-tokens.md b/content/zh/docs/admin/bootstrap-tokens.md
similarity index 100%
rename from content/cn/docs/admin/bootstrap-tokens.md
rename to content/zh/docs/admin/bootstrap-tokens.md
diff --git a/content/cn/docs/admin/cluster-large.md b/content/zh/docs/admin/cluster-large.md
similarity index 100%
rename from content/cn/docs/admin/cluster-large.md
rename to content/zh/docs/admin/cluster-large.md
diff --git a/content/cn/docs/admin/daemon.yaml b/content/zh/docs/admin/daemon.yaml
similarity index 100%
rename from content/cn/docs/admin/daemon.yaml
rename to content/zh/docs/admin/daemon.yaml
diff --git a/content/cn/docs/admin/high-availability/_index.md b/content/zh/docs/admin/high-availability/_index.md
similarity index 100%
rename from content/cn/docs/admin/high-availability/_index.md
rename to content/zh/docs/admin/high-availability/_index.md
diff --git a/content/cn/docs/admin/kube-apiserver.md b/content/zh/docs/admin/kube-apiserver.md
similarity index 100%
rename from content/cn/docs/admin/kube-apiserver.md
rename to content/zh/docs/admin/kube-apiserver.md
diff --git a/content/cn/docs/admin/kubelet-authentication-authorization.md b/content/zh/docs/admin/kubelet-authentication-authorization.md
similarity index 100%
rename from content/cn/docs/admin/kubelet-authentication-authorization.md
rename to content/zh/docs/admin/kubelet-authentication-authorization.md
diff --git a/content/cn/docs/admin/kubelet-tls-bootstrapping.md b/content/zh/docs/admin/kubelet-tls-bootstrapping.md
similarity index 100%
rename from content/cn/docs/admin/kubelet-tls-bootstrapping.md
rename to content/zh/docs/admin/kubelet-tls-bootstrapping.md
diff --git a/content/cn/docs/admin/multiple-zones.md
b/content/zh/docs/admin/multiple-zones.md similarity index 100% rename from content/cn/docs/admin/multiple-zones.md rename to content/zh/docs/admin/multiple-zones.md diff --git a/content/cn/docs/admin/node-conformance.md b/content/zh/docs/admin/node-conformance.md similarity index 100% rename from content/cn/docs/admin/node-conformance.md rename to content/zh/docs/admin/node-conformance.md diff --git a/content/cn/docs/admin/ovs-networking.md b/content/zh/docs/admin/ovs-networking.md similarity index 100% rename from content/cn/docs/admin/ovs-networking.md rename to content/zh/docs/admin/ovs-networking.md diff --git a/content/cn/docs/admin/service-accounts-admin.md b/content/zh/docs/admin/service-accounts-admin.md similarity index 100% rename from content/cn/docs/admin/service-accounts-admin.md rename to content/zh/docs/admin/service-accounts-admin.md diff --git a/content/cn/docs/concepts/architecture/cloud-controller.md b/content/zh/docs/concepts/architecture/cloud-controller.md similarity index 100% rename from content/cn/docs/concepts/architecture/cloud-controller.md rename to content/zh/docs/concepts/architecture/cloud-controller.md diff --git a/content/cn/docs/concepts/architecture/master-node-communication.md b/content/zh/docs/concepts/architecture/master-node-communication.md similarity index 100% rename from content/cn/docs/concepts/architecture/master-node-communication.md rename to content/zh/docs/concepts/architecture/master-node-communication.md diff --git a/content/cn/docs/concepts/architecture/nodes.md b/content/zh/docs/concepts/architecture/nodes.md similarity index 100% rename from content/cn/docs/concepts/architecture/nodes.md rename to content/zh/docs/concepts/architecture/nodes.md diff --git a/content/cn/docs/concepts/cluster-administration/addons.md b/content/zh/docs/concepts/cluster-administration/addons.md similarity index 100% rename from content/cn/docs/concepts/cluster-administration/addons.md rename to content/zh/docs/concepts/cluster-administration/addons.md diff --git a/content/cn/docs/concepts/cluster-administration/certificates.md b/content/zh/docs/concepts/cluster-administration/certificates.md similarity index 100% rename from content/cn/docs/concepts/cluster-administration/certificates.md rename to content/zh/docs/concepts/cluster-administration/certificates.md diff --git a/content/cn/docs/concepts/cluster-administration/cloud-providers.md b/content/zh/docs/concepts/cluster-administration/cloud-providers.md similarity index 100% rename from content/cn/docs/concepts/cluster-administration/cloud-providers.md rename to content/zh/docs/concepts/cluster-administration/cloud-providers.md diff --git a/content/cn/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/zh/docs/concepts/cluster-administration/cluster-administration-overview.md similarity index 100% rename from content/cn/docs/concepts/cluster-administration/cluster-administration-overview.md rename to content/zh/docs/concepts/cluster-administration/cluster-administration-overview.md diff --git a/content/cn/docs/concepts/cluster-administration/device-plugins.md b/content/zh/docs/concepts/cluster-administration/device-plugins.md similarity index 100% rename from content/cn/docs/concepts/cluster-administration/device-plugins.md rename to content/zh/docs/concepts/cluster-administration/device-plugins.md diff --git a/content/cn/docs/concepts/cluster-administration/federation.md b/content/zh/docs/concepts/cluster-administration/federation.md similarity index 100% rename from 
content/cn/docs/concepts/cluster-administration/federation.md rename to content/zh/docs/concepts/cluster-administration/federation.md diff --git a/content/cn/docs/concepts/cluster-administration/proxies.md b/content/zh/docs/concepts/cluster-administration/proxies.md similarity index 100% rename from content/cn/docs/concepts/cluster-administration/proxies.md rename to content/zh/docs/concepts/cluster-administration/proxies.md diff --git a/content/cn/docs/concepts/cluster-administration/sysctl-cluster.md b/content/zh/docs/concepts/cluster-administration/sysctl-cluster.md similarity index 100% rename from content/cn/docs/concepts/cluster-administration/sysctl-cluster.md rename to content/zh/docs/concepts/cluster-administration/sysctl-cluster.md diff --git a/content/cn/docs/concepts/configuration/commands.yaml b/content/zh/docs/concepts/configuration/commands.yaml similarity index 100% rename from content/cn/docs/concepts/configuration/commands.yaml rename to content/zh/docs/concepts/configuration/commands.yaml diff --git a/content/cn/docs/concepts/configuration/manage-compute-resources-container.md b/content/zh/docs/concepts/configuration/manage-compute-resources-container.md similarity index 100% rename from content/cn/docs/concepts/configuration/manage-compute-resources-container.md rename to content/zh/docs/concepts/configuration/manage-compute-resources-container.md diff --git a/content/cn/docs/concepts/configuration/pod-with-node-affinity.yaml b/content/zh/docs/concepts/configuration/pod-with-node-affinity.yaml similarity index 100% rename from content/cn/docs/concepts/configuration/pod-with-node-affinity.yaml rename to content/zh/docs/concepts/configuration/pod-with-node-affinity.yaml diff --git a/content/cn/docs/concepts/configuration/pod-with-pod-affinity.yaml b/content/zh/docs/concepts/configuration/pod-with-pod-affinity.yaml similarity index 100% rename from content/cn/docs/concepts/configuration/pod-with-pod-affinity.yaml rename to content/zh/docs/concepts/configuration/pod-with-pod-affinity.yaml diff --git a/content/cn/docs/concepts/configuration/pod.yaml b/content/zh/docs/concepts/configuration/pod.yaml similarity index 100% rename from content/cn/docs/concepts/configuration/pod.yaml rename to content/zh/docs/concepts/configuration/pod.yaml diff --git a/content/cn/docs/concepts/configuration/secret.md b/content/zh/docs/concepts/configuration/secret.md similarity index 98% rename from content/cn/docs/concepts/configuration/secret.md rename to content/zh/docs/concepts/configuration/secret.md index 5d59573aa739a..fd1a9d03779fd 100644 --- a/content/cn/docs/concepts/configuration/secret.md +++ b/content/zh/docs/concepts/configuration/secret.md @@ -104,7 +104,7 @@ $ kubectl create -f ./secret.yaml secret "mysecret" created ``` -**编码注意:** secret 数据的序列化 JSON 和 YAML 值使用 base64 编码成字符串。换行符在这些字符串中无效,必须省略。当在 Darwin/macOS 上使用 `base64` 实用程序时,用户应避免使用 `-b` 选项来拆分长行。另外,对于 Linux 用户如果 `-w` 选项不可用的话,应该添加选项 `-w 0` 到 `base64` 命令或管道 `base64 | tr -d '\n' ` 。 +**编码注意:** secret 数据的序列化 JSON 和 YAML 值使用 base64 编码成字符串。换行符在这些字符串中无效,必须省略。当在 Darwin/OS X 上使用 `base64` 实用程序时,用户应避免使用 `-b` 选项来拆分长行。另外,对于 Linux 用户如果 `-w` 选项不可用的话,应该添加选项 `-w 0` 到 `base64` 命令或管道 `base64 | tr -d '\n' ` 。 #### 解码 Secret diff --git a/content/zh/docs/concepts/configuration/taint-and-toleration.md b/content/zh/docs/concepts/configuration/taint-and-toleration.md new file mode 100755 index 0000000000000..a3c0eda8dd859 --- /dev/null +++ b/content/zh/docs/concepts/configuration/taint-and-toleration.md @@ -0,0 +1,464 @@ +--- +approvers: +- davidopp +- 
kevin-wangzefeng
+- bsalamat
+cn-approvers:
+- linyouchong
+title: Taint 和 Toleration
+content_template: templates/concept
+weight: 40
+---
+
+{{< toc >}}
+
+{{% capture overview %}}
+节点亲和性(详见[这里](/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature)),是 *pod* 的一种属性(偏好或硬性要求),它使 *pod* 被吸引到一类特定的节点。Taint 则相反,它使 *节点* 能够 *排斥* 一类特定的 pod。
+
+Taint 和 toleration 相互配合,可以用来避免 pod 被分配到不合适的节点上。每个节点上都可以应用一个或多个 taint ,这表示对于那些不能容忍这些 taint 的 pod,是不会被该节点接受的。如果将 toleration 应用于 pod 上,则表示这些 pod 可以(但不要求)被调度到具有匹配 taint 的节点上。
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## 概念
+
+您可以使用命令 [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) 给节点增加一个 taint。比如,
+
+```shell
+kubectl taint nodes node1 key=value:NoSchedule
+```
+
+给节点 `node1` 增加一个 taint,它的 key 是 `key`,value 是 `value`,effect 是 `NoSchedule`。这表示只有拥有和这个 taint 相匹配的 toleration 的 pod 才能够被分配到 `node1` 这个节点。
+
+想删除上述命令添加的 taint ,您可以运行:
+```shell
+kubectl taint nodes node1 key:NoSchedule-
+```
+
+您可以在 PodSpec 中定义 pod 的 toleration。下面两个 toleration 均与上面例子中使用 `kubectl taint` 命令创建的 taint 相匹配,因此如果一个 pod 拥有其中的任何一个 toleration 都能够被分配到 `node1` :
+
+```yaml
+tolerations:
+- key: "key"
+  operator: "Equal"
+  value: "value"
+  effect: "NoSchedule"
+```
+
+```yaml
+tolerations:
+- key: "key"
+  operator: "Exists"
+  effect: "NoSchedule"
+```
+
+一个 toleration 和一个 taint 相“匹配”是指它们有一样的 key 和 effect ,并且:
+
+* 如果 `operator` 是 `Exists` (此时 toleration 不能指定 `value`),或者
+* 如果 `operator` 是 `Equal` ,则它们的 `value` 应该相等
+
+{{< note >}}
+**注意:** 存在两种特殊情况:
+
+* 如果一个 toleration 的 `key` 为空且 operator 为 `Exists` ,表示这个 toleration 与任意的 key 、 value 和 effect 都匹配,即这个 toleration 能容忍任意 taint。
+
+```yaml
+tolerations:
+- operator: "Exists"
+```
+
+* 如果一个 toleration 的 `effect` 为空,则它能匹配所有 `key` 值与之相同的 taint,不论这些 taint 的 `effect` 为何。
+
+```yaml
+tolerations:
+- key: "key"
+  operator: "Exists"
+```
+{{< /note >}}
+
+上述例子使用到的 `effect` 的一个值 `NoSchedule`,您也可以使用另外一个值 `PreferNoSchedule`。这是“偏好”或“软”版本的 `NoSchedule` ——系统会*尽量*避免将 pod 调度到存在其不能容忍 taint 的节点上,但这不是强制的。`effect` 的值还可以设置为 `NoExecute` ,下文会详细描述这个值。
+
+您可以给一个节点添加多个 taint ,也可以给一个 pod 添加多个 toleration。Kubernetes 处理多个 taint 和 toleration 的过程就像一个过滤器:从一个节点的所有 taint 开始遍历,过滤掉那些 pod 中存在与之相匹配的 toleration 的 taint。余下未被过滤的 taint 的 effect 值决定了 pod 是否会被分配到该节点,特别是以下情况:
+
+* 如果未被过滤的 taint 中存在至少一个 effect 值为 `NoSchedule` 的 taint,则 Kubernetes 不会将 pod 分配到该节点。
+* 如果未被过滤的 taint 中不存在 effect 值为 `NoSchedule` 的 taint,但是存在 effect 值为 `PreferNoSchedule` 的 taint,则 Kubernetes 会*尝试*将 pod 分配到该节点。
+* 如果未被过滤的 taint 中存在至少一个 effect 值为 `NoExecute` 的 taint,则 Kubernetes 不会将 pod 分配到该节点(如果 pod 还未在节点上运行),或者将 pod 从该节点驱逐(如果 pod 已经在节点上运行)。
+
+例如,假设您给一个节点添加了如下的 taint
+
+```shell
+kubectl taint nodes node1 key1=value1:NoSchedule
+kubectl taint nodes node1 key1=value1:NoExecute
+kubectl taint nodes node1 key2=value2:NoSchedule
+```
+
+然后存在一个 pod,它有两个 toleration
+
+```yaml
+tolerations:
+- key: "key1"
+  operator: "Equal"
+  value: "value1"
+  effect: "NoSchedule"
+- key: "key1"
+  operator: "Equal"
+  value: "value1"
+  effect: "NoExecute"
+```
+
+在这个例子中,上述 pod 不会被分配到上述节点,因为其没有 toleration 和第三个 taint 相匹配。但是如果在给节点添加上述 taint 之前,该 pod 已经在上述节点运行,那么它还可以继续运行在该节点上,因为第三个 taint 是三个 taint 中唯一不能被这个 pod 容忍的。
+
+通常情况下,如果给一个节点添加了一个 effect 值为 `NoExecute` 的 taint,则任何不能忍受这个 taint 的 pod 都会马上被驱逐,任何可以忍受这个 taint 的 pod 都不会被驱逐。但是,如果 pod 存在一个 effect 值为 `NoExecute` 的 toleration 指定了可选属性 `tolerationSeconds` 的值,则表示在给节点添加了上述 taint 之后,pod 还能继续在节点上运行的时间。例如,
+
+```yaml
+tolerations:
+- key: "key1"
+  operator: "Equal"
+  value: "value1"
+  effect: "NoExecute"
+  tolerationSeconds: 3600
+```
+
+这表示如果这个 pod 正在运行,然后一个匹配的 taint 被添加到其所在的节点,那么 pod 还将继续在节点上运行 3600 秒,然后被驱逐。如果在此之前上述 taint 被删除了,则 pod 不会被驱逐。
+
+## 使用例子
+
+通过 taint 和 toleration ,可以灵活地让 pod *避开*某些节点或者将 pod 从某些节点驱逐。下面是几个使用例子:
+
+* **专用节点**:如果您想将某些节点专门分配给特定的一组用户使用,您可以给这些节点添加一个 taint(即,
+ `kubectl taint nodes nodename dedicated=groupName:NoSchedule`),然后给这组用户的 pod 添加一个相对应的 toleration(通过编写一个自定义的[admission controller](/docs/admin/admission-controllers/),很容易就能做到)。拥有上述 toleration 的 pod 就能够被分配到上述专用节点,同时也能够被分配到集群中的其它节点。如果您希望这些 pod 只能被分配到上述专用节点,那么您还需要给这些专用节点另外添加一个和上述 taint 类似的 label (例如:`dedicated=groupName`),同时还要在上述 admission controller 中给 pod 增加节点亲和性,要求上述 pod 只能被分配到添加了 `dedicated=groupName` 标签的节点上(这两部分配置的组合方式可参考本节列表之后的示意片段)。
+
+* **配备了特殊硬件的节点**:在部分节点配备了特殊硬件(比如 GPU)的集群中,我们希望不需要这类硬件的 pod 不要被分配到这些特殊节点,以便为后继需要这类硬件的 pod 保留资源。要达到这个目的,可以先给配备了特殊硬件的节点添加 taint(例如 `kubectl taint nodes nodename special=true:NoSchedule` or `kubectl taint nodes nodename special=true:PreferNoSchedule`),然后给使用了这类特殊硬件的 pod 添加一个相匹配的 toleration。和专用节点的例子类似,添加这个 toleration 的最简单的方法是使用自定义 [admission controller](/docs/reference/access-authn-authz/admission-controllers/)。比如,我们推荐使用 [Extended Resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) 来表示特殊硬件,给配置了特殊硬件的节点添加 taint 时包含 extended resource 名称,然后运行一个 [ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration) admission controller。此时,因为节点已经被 taint 了,没有对应 toleration 的 Pod 不会被调度到这些节点。但当你创建一个使用了 extended resource 的 Pod 时,`ExtendedResourceToleration` admission controller 会自动给 Pod 加上正确的 toleration ,这样 Pod 就会被自动调度到这些配置了特殊硬件的节点上。这样就能够确保这些配置了特殊硬件的节点专门用于运行需要使用这些硬件的 Pod,并且您无需手动给这些 Pod 添加 toleration。
+
+* **基于 taint 的驱逐 (alpha 特性)**: 这是在每个 pod 中配置的在节点出现问题时的驱逐行为,接下来的章节会描述这个特性
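+下面是一个示意性的 pod 配置片段,将上面“专用节点”例子中的 toleration 和节点亲和性组合在一起(`dedicated=groupName` 沿用上文的示例键值,仅作说明):
+
+```yaml
+tolerations:
+- key: "dedicated"
+  operator: "Equal"
+  value: "groupName"
+  effect: "NoSchedule"
+affinity:
+  nodeAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+      nodeSelectorTerms:
+      - matchExpressions:
+        - key: "dedicated"
+          operator: In
+          values: ["groupName"]
+```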
+
+## 基于 taint 的驱逐
+
+ 前文我们提到过 taint 的 effect 值 `NoExecute` ,它会影响已经在节点上运行的 pod:
+ * 如果 pod 不能忍受 effect 值为 `NoExecute` 的 taint,那么 pod 将马上被驱逐
+ * 如果 pod 能够忍受 effect 值为 `NoExecute` 的 taint,但是在 toleration 定义中没有指定 `tolerationSeconds`,则 pod 还会一直在这个节点上运行。
+ * 如果 pod 能够忍受 effect 值为 `NoExecute` 的 taint,而且指定了 `tolerationSeconds`,则 pod 还能在这个节点上继续运行这个指定的时间长度。
+
+ 此外,Kubernetes 1.6 已经支持(alpha阶段)节点问题的表示。换句话说,当某种条件为真时,node controller会自动给节点添加一个 taint。当前内置的 taint 包括:
+ * `node.kubernetes.io/not-ready`:节点未准备好。这相当于节点状态 `Ready` 的值为 "`False`"。
+ * `node.kubernetes.io/unreachable`:node controller 访问不到节点。这相当于节点状态 `Ready` 的值为 "`Unknown`"。
+ * `node.kubernetes.io/out-of-disk`:节点磁盘耗尽。
+ * `node.kubernetes.io/memory-pressure`:节点存在内存压力。
+ * `node.kubernetes.io/disk-pressure`:节点存在磁盘压力。
+ * `node.kubernetes.io/network-unavailable`:节点网络不可用。
+ * `node.kubernetes.io/unschedulable`: 节点不可调度。
+ * `node.cloudprovider.kubernetes.io/uninitialized`:如果 kubelet 启动时指定了一个 "外部" cloud provider,它将给当前节点添加一个 taint 将其标志为不可用。在 cloud-controller-manager 的一个 controller 初始化这个节点后,kubelet 将删除这个 taint。
+
+在启用了 `TaintBasedEvictions` 这个 alpha 功能特性后(在 Kubernetes controller manager 的 `--feature-gates` 参数中包含 `TaintBasedEvictions=true` 开启这个功能特性,例如:`--feature-gates=FooBar=true,TaintBasedEvictions=true`),NodeController (或 kubelet)会自动给节点添加这类 taint,上述基于节点状态 Ready 对 pod 进行驱逐的逻辑会被禁用。
+
+{{< note >}}
+注意:为了保证由于节点问题引起的 pod 驱逐[rate limiting](/docs/concepts/architecture/nodes/)行为正常,系统实际上会以 rate-limited 的方式添加 taint。在像 master 和 node 通讯中断等场景下,这避免了 pod 被大量驱逐。
+{{< /note >}}
+
+使用这个 alpha 功能特性,结合 `tolerationSeconds` ,pod 就可以指定当节点出现一个或全部上述问题时还将在这个节点上运行多长的时间。
+
+比如,一个使用了很多本地状态的应用程序在网络断开时,仍然希望停留在当前节点上运行一段较长的时间,愿意等待网络恢复以避免被驱逐。在这种情况下,pod 的 toleration 可能是下面这样的:
+
+```yaml
+tolerations:
+- key: "node.alpha.kubernetes.io/unreachable"
+  operator: "Exists"
+  effect: "NoExecute"
+  tolerationSeconds: 6000
+```
+
+注意,Kubernetes 会自动给 pod 添加一个 key 为 `node.kubernetes.io/not-ready` 的 toleration 并配置 `tolerationSeconds=300`,除非用户提供的 pod 配置中已经存在了 key 为 `node.kubernetes.io/not-ready` 的 toleration。同样,Kubernetes 会给 pod 添加一个 key 为 `node.kubernetes.io/unreachable` 的 toleration 并配置 `tolerationSeconds=300`,除非用户提供的 pod 配置中已经存在了 key 为 `node.kubernetes.io/unreachable` 的 toleration。
+
+这种自动添加 toleration 机制保证了在其中一种问题被检测到时 pod 默认能够继续停留在当前节点运行 5 分钟。这两个默认 toleration 是由 [DefaultTolerationSeconds
+admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds)添加的。
+
+[DaemonSet](/docs/concepts/workloads/controllers/daemonset/) 中的 pod 被创建时,针对以下 taint 自动添加的 `NoExecute` 的 toleration 将不会指定 `tolerationSeconds`:
+
+ * `node.alpha.kubernetes.io/unreachable`
+ * `node.kubernetes.io/not-ready`
+
+这保证了出现上述问题时 DaemonSet 中的 pod 永远不会被驱逐,这和 `TaintBasedEvictions` 这个特性被禁用后的行为是一样的。
+
+## 基于节点状态添加 taint
+
+1.8 版本引入了一个 alpha 特性,让 node controller 根据节点的状态创建 taint。当开启了这个特性时(通过给 scheduler 的 `--feature-gates` 添加 `TaintNodesByCondition=true` 参数,例如:`--feature-gates=FooBar=true,TaintNodesByCondition=true`),scheduler 不会去检查节点的状态,而是检查节点的 taint。这确保了节点的状态不影响应该调度哪些 Pod 到节点上。用户可以通过给 Pod 添加 toleration 来选择忽略节点的一些问题(以节点状态的形式表示)。
+从 Kubernetes 1.8 开始,DaemonSet controller 会自动添加如下 `NoSchedule` toleration,以防止 DaemonSet 中断。
+ * `node.kubernetes.io/memory-pressure`
+ * `node.kubernetes.io/disk-pressure`
+ * `node.kubernetes.io/out-of-disk` (*只适合 critical pod*)
+ * `node.kubernetes.io/unschedulable` (1.10 或更高版本)
+ * `node.kubernetes.io/network-unavailable` (*只适合 host network*)
+
+添加上述 toleration 确保了向后兼容,您也可以选择自由地向 DaemonSet 添加 toleration。
diff --git a/content/cn/docs/concepts/containers/container-environment-variables.md b/content/zh/docs/concepts/containers/container-environment-variables.md
similarity index 100%
rename from content/cn/docs/concepts/containers/container-environment-variables.md
rename to content/zh/docs/concepts/containers/container-environment-variables.md
diff --git a/content/cn/docs/concepts/containers/images.md b/content/zh/docs/concepts/containers/images.md
similarity index 99%
rename from content/cn/docs/concepts/containers/images.md
rename to content/zh/docs/concepts/containers/images.md
index 11f902444889c..cc123251bef2c 100644
---
a/content/cn/docs/concepts/containers/images.md +++ b/content/zh/docs/concepts/containers/images.md @@ -87,7 +87,7 @@ Kubelet会获取并且定期刷新ECR的凭证。它需要以下权限 - 验证是否满足以上要求 - 获取工作站的$REGION (例如 `us-west-2`)凭证,使用凭证SSH到主机手动运行docker,检查是否运行 -- 验证kublet是否使用参数`--cloud-provider=aws`运行 +- 验证kubelet是否使用参数`--cloud-provider=aws`运行 - 检查kubelet日志(例如 `journalctl -u kubelet`),是否有类似的行 - `plugins.go:56] Registering credential provider: aws-ecr-key` - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider` diff --git a/content/cn/docs/concepts/example-concept-template.md b/content/zh/docs/concepts/example-concept-template.md similarity index 100% rename from content/cn/docs/concepts/example-concept-template.md rename to content/zh/docs/concepts/example-concept-template.md diff --git a/content/cn/docs/concepts/overview/components.md b/content/zh/docs/concepts/overview/components.md similarity index 100% rename from content/cn/docs/concepts/overview/components.md rename to content/zh/docs/concepts/overview/components.md diff --git a/content/cn/docs/concepts/overview/kubernetes-api.md b/content/zh/docs/concepts/overview/kubernetes-api.md similarity index 100% rename from content/cn/docs/concepts/overview/kubernetes-api.md rename to content/zh/docs/concepts/overview/kubernetes-api.md diff --git a/content/cn/docs/concepts/overview/what-is-kubernetes.md b/content/zh/docs/concepts/overview/what-is-kubernetes.md similarity index 100% rename from content/cn/docs/concepts/overview/what-is-kubernetes.md rename to content/zh/docs/concepts/overview/what-is-kubernetes.md diff --git a/content/cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md similarity index 100% rename from content/cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md rename to content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md diff --git a/content/cn/docs/concepts/overview/working-with-objects/nginx-deployment.yaml b/content/zh/docs/concepts/overview/working-with-objects/nginx-deployment.yaml similarity index 100% rename from content/cn/docs/concepts/overview/working-with-objects/nginx-deployment.yaml rename to content/zh/docs/concepts/overview/working-with-objects/nginx-deployment.yaml diff --git a/content/cn/docs/concepts/policy/pod-security-policy.md b/content/zh/docs/concepts/policy/pod-security-policy.md similarity index 100% rename from content/cn/docs/concepts/policy/pod-security-policy.md rename to content/zh/docs/concepts/policy/pod-security-policy.md diff --git a/content/cn/docs/concepts/policy/psp.yaml b/content/zh/docs/concepts/policy/psp.yaml similarity index 100% rename from content/cn/docs/concepts/policy/psp.yaml rename to content/zh/docs/concepts/policy/psp.yaml diff --git a/content/cn/docs/concepts/policy/resource-quotas.md b/content/zh/docs/concepts/policy/resource-quotas.md similarity index 100% rename from content/cn/docs/concepts/policy/resource-quotas.md rename to content/zh/docs/concepts/policy/resource-quotas.md diff --git a/content/cn/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/content/zh/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md similarity index 100% rename from content/cn/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md rename to content/zh/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md diff --git 
a/content/cn/docs/concepts/services-networking/connect-applications-service.md b/content/zh/docs/concepts/services-networking/connect-applications-service.md similarity index 100% rename from content/cn/docs/concepts/services-networking/connect-applications-service.md rename to content/zh/docs/concepts/services-networking/connect-applications-service.md diff --git a/content/cn/docs/concepts/services-networking/curlpod.yaml b/content/zh/docs/concepts/services-networking/curlpod.yaml similarity index 100% rename from content/cn/docs/concepts/services-networking/curlpod.yaml rename to content/zh/docs/concepts/services-networking/curlpod.yaml diff --git a/content/cn/docs/concepts/services-networking/dns-pod-service.md b/content/zh/docs/concepts/services-networking/dns-pod-service.md similarity index 100% rename from content/cn/docs/concepts/services-networking/dns-pod-service.md rename to content/zh/docs/concepts/services-networking/dns-pod-service.md diff --git a/content/cn/docs/concepts/services-networking/hostaliases-pod.yaml b/content/zh/docs/concepts/services-networking/hostaliases-pod.yaml similarity index 100% rename from content/cn/docs/concepts/services-networking/hostaliases-pod.yaml rename to content/zh/docs/concepts/services-networking/hostaliases-pod.yaml diff --git a/content/cn/docs/concepts/services-networking/ingress.yaml b/content/zh/docs/concepts/services-networking/ingress.yaml similarity index 100% rename from content/cn/docs/concepts/services-networking/ingress.yaml rename to content/zh/docs/concepts/services-networking/ingress.yaml diff --git a/content/cn/docs/concepts/services-networking/network-policies.md b/content/zh/docs/concepts/services-networking/network-policies.md similarity index 100% rename from content/cn/docs/concepts/services-networking/network-policies.md rename to content/zh/docs/concepts/services-networking/network-policies.md diff --git a/content/cn/docs/concepts/services-networking/nginx-secure-app.yaml b/content/zh/docs/concepts/services-networking/nginx-secure-app.yaml similarity index 100% rename from content/cn/docs/concepts/services-networking/nginx-secure-app.yaml rename to content/zh/docs/concepts/services-networking/nginx-secure-app.yaml diff --git a/content/cn/docs/concepts/services-networking/nginx-svc.yaml b/content/zh/docs/concepts/services-networking/nginx-svc.yaml similarity index 100% rename from content/cn/docs/concepts/services-networking/nginx-svc.yaml rename to content/zh/docs/concepts/services-networking/nginx-svc.yaml diff --git a/content/cn/docs/concepts/services-networking/run-my-nginx.yaml b/content/zh/docs/concepts/services-networking/run-my-nginx.yaml similarity index 100% rename from content/cn/docs/concepts/services-networking/run-my-nginx.yaml rename to content/zh/docs/concepts/services-networking/run-my-nginx.yaml diff --git a/content/cn/docs/concepts/services-networking/service.md b/content/zh/docs/concepts/services-networking/service.md similarity index 100% rename from content/cn/docs/concepts/services-networking/service.md rename to content/zh/docs/concepts/services-networking/service.md diff --git a/content/cn/docs/concepts/workloads/controllers/cron-jobs.md b/content/zh/docs/concepts/workloads/controllers/cron-jobs.md similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/cron-jobs.md rename to content/zh/docs/concepts/workloads/controllers/cron-jobs.md diff --git a/content/cn/docs/concepts/workloads/controllers/daemonset.md b/content/zh/docs/concepts/workloads/controllers/daemonset.md 
similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/daemonset.md rename to content/zh/docs/concepts/workloads/controllers/daemonset.md diff --git a/content/cn/docs/concepts/workloads/controllers/daemonset.yaml b/content/zh/docs/concepts/workloads/controllers/daemonset.yaml similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/daemonset.yaml rename to content/zh/docs/concepts/workloads/controllers/daemonset.yaml diff --git a/content/cn/docs/concepts/workloads/controllers/deployment.md b/content/zh/docs/concepts/workloads/controllers/deployment.md similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/deployment.md rename to content/zh/docs/concepts/workloads/controllers/deployment.md diff --git a/content/cn/docs/concepts/workloads/controllers/frontend.yaml b/content/zh/docs/concepts/workloads/controllers/frontend.yaml similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/frontend.yaml rename to content/zh/docs/concepts/workloads/controllers/frontend.yaml diff --git a/content/cn/docs/concepts/workloads/controllers/garbage-collection.md b/content/zh/docs/concepts/workloads/controllers/garbage-collection.md similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/garbage-collection.md rename to content/zh/docs/concepts/workloads/controllers/garbage-collection.md diff --git a/content/cn/docs/concepts/workloads/controllers/hpa-rs.yaml b/content/zh/docs/concepts/workloads/controllers/hpa-rs.yaml similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/hpa-rs.yaml rename to content/zh/docs/concepts/workloads/controllers/hpa-rs.yaml diff --git a/content/cn/docs/concepts/workloads/controllers/job.yaml b/content/zh/docs/concepts/workloads/controllers/job.yaml similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/job.yaml rename to content/zh/docs/concepts/workloads/controllers/job.yaml diff --git a/content/cn/docs/concepts/workloads/controllers/my-repset.yaml b/content/zh/docs/concepts/workloads/controllers/my-repset.yaml similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/my-repset.yaml rename to content/zh/docs/concepts/workloads/controllers/my-repset.yaml diff --git a/content/cn/docs/concepts/workloads/controllers/nginx-deployment.yaml b/content/zh/docs/concepts/workloads/controllers/nginx-deployment.yaml similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/nginx-deployment.yaml rename to content/zh/docs/concepts/workloads/controllers/nginx-deployment.yaml diff --git a/content/cn/docs/concepts/workloads/controllers/replication.yaml b/content/zh/docs/concepts/workloads/controllers/replication.yaml similarity index 100% rename from content/cn/docs/concepts/workloads/controllers/replication.yaml rename to content/zh/docs/concepts/workloads/controllers/replication.yaml diff --git a/content/cn/docs/concepts/workloads/pods/init-containers.md b/content/zh/docs/concepts/workloads/pods/init-containers.md similarity index 100% rename from content/cn/docs/concepts/workloads/pods/init-containers.md rename to content/zh/docs/concepts/workloads/pods/init-containers.md diff --git a/content/cn/docs/concepts/workloads/pods/pod-lifecycle.md b/content/zh/docs/concepts/workloads/pods/pod-lifecycle.md similarity index 100% rename from content/cn/docs/concepts/workloads/pods/pod-lifecycle.md rename to content/zh/docs/concepts/workloads/pods/pod-lifecycle.md diff --git 
a/content/cn/docs/concepts/workloads/pods/podpreset.md b/content/zh/docs/concepts/workloads/pods/podpreset.md similarity index 100% rename from content/cn/docs/concepts/workloads/pods/podpreset.md rename to content/zh/docs/concepts/workloads/pods/podpreset.md diff --git a/content/zh/docs/getting-started-guides/ubuntu/security.md b/content/zh/docs/getting-started-guides/ubuntu/security.md new file mode 100644 index 0000000000000..49839858b20d4 --- /dev/null +++ b/content/zh/docs/getting-started-guides/ubuntu/security.md @@ -0,0 +1,68 @@ +--- +title: 安全考虑 +content_template: templates/task +--- + + +{{% capture overview %}} + +默认情况下,所有提供的节点之间的所有连接(包括 etcd 集群)都通过 easyrsa 的 TLS 进行保护。 + +本文介绍已部署集群的安全注意事项和生产环境建议。 +{{% /capture %}} +{{% capture prerequisites %}} + +本文假定您拥有一个使用 Juju 部署的正在运行的集群。 +{{% /capture %}} + + +{{% capture steps %}} + +## 实现 + +TLS 和 easyrsa 的实现使用以下 [layers](https://jujucharms.com/docs/2.2/developer-layers)。 + +[layer-tls-client](https://github.com/juju-solutions/layer-tls-client) +[layer-easyrsa](https://github.com/juju-solutions/layer-easyrsa) + + +## 限制 ssh 访问 + +默认情况下,管理员可以 ssh 到集群中的任意已部署节点。您可以通过以下命令来批量禁用集群节点的 ssh 访问权限。 + + juju model-config proxy-ssh=true + +注意:Juju 控制器节点在您的云中仍然有开放的 ssh 访问权限,并且在这种情况下将被用作跳板机。 + +有关如何管理 ssh 密钥的说明,请参阅 Juju 文档中的 [模型管理](https://jujucharms.com/docs/2.2/models) 页面。 +{{% /capture %}} + + diff --git a/content/zh/docs/reference/access-authn-authz/admission-controllers.md b/content/zh/docs/reference/access-authn-authz/admission-controllers.md new file mode 100755 index 0000000000000..d855803396f22 --- /dev/null +++ b/content/zh/docs/reference/access-authn-authz/admission-controllers.md @@ -0,0 +1,563 @@ +--- +assignees: +- bprashanth +- davidopp +- derekwaynecarr +- erictune +- janetkuo +- thockin +cn-approvers: +- linyouchong +title: 使用准入控制插件 +--- + + +* TOC +{:toc} + + +## 什么是准入控制插件? + + +一个准入控制插件是一段代码,它会在请求通过认证和授权之后、对象被持久化之前拦截到达 API server 的请求。这些插件的代码运行在 API server 进程中,必须被编译进 API server 的二进制文件中才能使用。 + + +在每个请求被集群接受之前,准入控制插件依次执行。如果插件序列中任何一个拒绝了该请求,则整个请求将立即被拒绝并且返回一个错误给终端用户。 + + +准入控制插件可能会在某些情况下改变传入的对象,从而应用系统配置的默认值。另外,作为请求处理的一部分,准入控制插件可能会对相关的资源进行变更,以实现类似增加配额使用量这样的功能。 + + +## 为什么需要准入控制插件? + + +Kubernetes 的许多高级功能都要求启用一个准入控制插件,以便正确地支持该特性。因此,一个没有正确配置准入控制插件的 Kubernetes API server 是不完整的,它不会支持您所期望的所有特性。 + + +## 如何启用一个准入控制插件? + + +Kubernetes API server 支持一个标志参数 `admission-control`,它接受一个以逗号分隔的准入控制插件有序列表,这些插件会在对象被集群修改之前依次被调用。
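 + +例如,下面的 kube-apiserver 启动参数按顺序启用了一组常用插件(仅为示意,具体插件列表请参考文末的推荐配置并按集群需求调整): + +```shell +kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota +``` + + +## 每个插件的功能是什么?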
+ +### AlwaysAdmit + + +这个插件不做任何检查,直接放行所有请求。 + +### AlwaysPullImages + + +这个插件将每一个新创建的 Pod 的镜像拉取策略修改为 Always。这在多租户集群中是有用的,这样用户就可以放心,他们的私有镜像只能被那些有凭证的人使用。没有这个插件,一旦镜像被拉取到节点上,任何用户的 pod 只需知道镜像名称(假设 pod 被调度到正确的节点上)就可以使用它,而不需要对镜像进行任何授权检查。当启用这个插件时,总是在启动容器之前拉取镜像,这意味着需要有效的凭证。 + +### AlwaysDeny + + +拒绝所有的请求。用于测试。 + + +### DenyExecOnPrivileged (已废弃) + + +如果一个 pod 拥有一个特权容器,这个插件将拦截所有在该 pod 中执行 exec 命令的请求。 + + +如果集群支持特权容器,并且希望限制最终用户在这些容器中执行 exec 命令的能力,我们强烈建议启用这个插件。 + + +此功能已合并到 [DenyEscalatingExec](#denyescalatingexec)。 + +### DenyEscalatingExec + + +这个插件将拒绝在因拥有提升的特权(escalated privileges)而能够访问宿主机的 pod 中执行 exec 和 attach 命令。这包括以特权模式运行的 pod、可以访问主机 IPC 命名空间的 pod,以及可以访问主机 PID 命名空间的 pod。 + + +如果集群允许容器以提升的特权运行,并且希望限制最终用户在这些容器中执行 exec 命令的能力,我们强烈建议启用这个插件。 + +### ImagePolicyWebhook + + +ImagePolicyWebhook 插件允许使用一个后端的 webhook 做出准入决策。您可以按照如下配置 admission-control 选项来启用这个插件: + +```shell +--admission-control=ImagePolicyWebhook +``` + + +#### 配置文件格式 + +ImagePolicyWebhook 插件通过 `--admission-control-config-file` 指定的准入配置文件来为后端行为设置配置选项。该文件可以是 json 或 yaml,格式如下: + +```javascript +{ + "imagePolicy": { + "kubeConfigFile": "path/to/kubeconfig/for/backend", + "allowTTL": 50, // time in s to cache approval + "denyTTL": 50, // time in s to cache denial + "retryBackoff": 500, // time in ms to wait between retries + "defaultAllow": true // determines behavior if the webhook backend fails + } +} +``` + + +这个配置文件必须引用一个 [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) 格式的文件,并在其中配置指向后端的连接。与后端的通信必须使用 TLS。 + + +kubeconfig 文件的 cluster 字段需要指向远端服务,user 字段需要包含已返回的授权者。 + +```yaml +# clusters refers to the remote service. +clusters: +- name: name-of-remote-imagepolicy-service + cluster: + certificate-authority: /path/to/ca.pem # CA for verifying the remote service. + server: https://images.example.com/policy # URL of remote service to query. Must use 'https'. + +# users refers to the API server's webhook configuration.
+users: +- name: name-of-api-server + user: + client-certificate: /path/to/cert.pem # cert for the webhook plugin to use + client-key: /path/to/key.pem # key matching the cert +``` + +对于更多的 HTTP 配置,请参阅 [kubeconfig](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) 文档。 + + +#### 请求载荷 + + +当需要做出准入决策时,API server 会发送一个 JSON 序列化的 `api.imagepolicy.v1alpha1.ImageReview` 对象来描述该操作。该对象包含描述被审核容器的字段,以及所有匹配 `*.image-policy.k8s.io/*` 的 pod 注解。 + + +注意,webhook API 对象与其他 Kubernetes API 对象一样受制于相同的版本控制兼容性规则。实现者应该了解 alpha 对象具有更宽松的兼容性保证,并检查请求的 "apiVersion" 字段,以确保正确的反序列化。此外,API server 必须启用 imagepolicy.k8s.io/v1alpha1 API 扩展组 (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`)。 + + +请求载荷例子: + +``` +{ + "apiVersion":"imagepolicy.k8s.io/v1alpha1", + "kind":"ImageReview", + "spec":{ + "containers":[ + { + "image":"myrepo/myimage:v1" + }, + { + "image":"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed" + } + ], + "annotations":{ + "mycluster.image-policy.k8s.io/ticket-1234": "break-glass" + }, + "namespace":"mynamespace" + } +} +``` + + +远程服务将填充请求的 ImageReviewStatus 字段,并返回允许或不允许访问。响应主体的 "spec" 字段会被忽略,并且可以省略。允许访问的应答会返回: + +``` +{ + "apiVersion": "imagepolicy.k8s.io/v1alpha1", + "kind": "ImageReview", + "status": { + "allowed": true + } +} +``` + + +不允许访问时,服务将返回: + +``` +{ + "apiVersion": "imagepolicy.k8s.io/v1alpha1", + "kind": "ImageReview", + "status": { + "allowed": false, + "reason": "image currently blacklisted" + } +} +``` + + +更多的文档,请参阅 `imagepolicy.v1alpha1` API 对象 和 `plugin/pkg/admission/imagepolicy/admission.go`。 + + +#### 使用注解进行扩展 + + +pod 中所有匹配 `*.image-policy.k8s.io/*` 的注解都会被发送给 webhook。这使得了解镜像策略后端的用户可以向后端发送额外的信息,不同的后端实现也可以接收不同的信息。 + + +您可以在这里输入的信息有: + + + * 在紧急情况下,请求 "break glass" 覆盖一个策略。 + + * 从票证系统得到的、与 break-glass 请求对应的票证编号 + + * 向策略服务器提供提示信息,例如镜像的 imageID,以方便它进行查找 + + +在任何情况下,注解都是由用户提供的,并不会被 Kubernetes 以任何方式进行验证。在将来,如果一个注解确定将被广泛使用,它可能会被提升为 ImageReviewSpec 的一个命名字段。 + +### ServiceAccount + + +这个插件实现了 [serviceAccounts](/docs/user-guide/service-accounts) 的自动化。 +如果您打算使用 Kubernetes 的 ServiceAccount 对象,我们强烈建议您使用这个插件。 + +### SecurityContextDeny + + +该插件将拒绝任何试图设置某些提权 [SecurityContext](/docs/user-guide/security-context) 字段的 pod。如果集群没有使用 [pod 安全策略](/docs/user-guide/pod-security-policy) 来限制安全上下文所能获取的值集,那么应该启用这个功能。 + +### ResourceQuota + + +此插件将观察传入的请求,并确保它不违反任何一个 `Namespace` 中的 `ResourceQuota` 对象中枚举出来的约束。如果您在 Kubernetes 部署中使用了 `ResourceQuota` +,您必须使用这个插件来强制执行配额限制。 + + +请查看 [resourceQuota 设计文档](https://git.k8s.io/community/contributors/design-proposals/admission_control_resource_quota.md) 和 [Resource Quota 例子](/docs/concepts/policy/resource-quotas/) 了解更多细节。 + + +强烈建议将这个插件配置在准入控制插件序列的末尾。这样配额就不会因为请求稍后被其他准入控制拒绝而被过早地占用。 + +### LimitRanger + + +这个插件将观察传入的请求,并确保它不会违反 `Namespace` 中 `LimitRange` 对象枚举的任何约束。如果您在 Kubernetes 部署中使用了 `LimitRange` 对象,则必须使用此插件来执行这些约束。LimitRanger 插件还可以用于将默认资源请求应用到没有指定任何内容的 Pod;当前,默认的 LimitRanger 对 `default` 命名空间中的所有 pod 应用了 0.1 CPU 的资源请求。 + + +请查看 [limitRange 设计文档](https://git.k8s.io/community/contributors/design-proposals/admission_control_limit_range.md) 和 [Limit Range 例子](/docs/tasks/configure-pod-container/limit-range/) 了解更多细节。 + + +### InitialResources (试验) + + +此插件观察 pod 创建请求。如果容器没有设置计算资源的 requests 和 limits,那么插件就会根据运行相同镜像的容器的历史使用记录来自动填充计算资源请求。如果没有足够的数据进行决策,则请求将保持不变。当插件设置了一个计算资源请求时,它会用它自动填充的计算资源对 pod 进行注解。 + + +请查看 [InitialResouces 建议书](https://git.k8s.io/community/contributors/design-proposals/initial-resources.md) 了解更多细节。 + +### NamespaceLifecycle + + +这个插件禁止在一个正在被终止的 `Namespace` 中创建新对象,并确保使用不存在的 `Namespace` 的请求被拒绝。
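 + +例如(命名空间名称仅为示意),在启用了这个插件的集群上,尝试在不存在的命名空间中创建对象会被直接拒绝: + +```shell +# 该请求会被拒绝,并返回命名空间不存在的错误(具体报错信息因版本而异) +kubectl run test --image=nginx --namespace=no-such-namespace +``` +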
删除 `Namespace` 会触发在该命名空间中删除所有对象(pod、service 等)的一系列操作。为了确保这个过程的完整性,我们强烈建议启用这个插件。 + +### DefaultStorageClass + + +这个插件会观察未指定 storage class 字段的 `PersistentVolumeClaim` 对象的创建,并自动为其添加默认的 storage class。这样,未指定 storage class 字段的用户无需做任何处理,就会得到默认的 storage class。 + + +当没有配置默认 storage class 时,这个插件不会执行任何操作。当一个以上的 storage class 被标记为默认时,它会拒绝 `PersistentVolumeClaim` 的创建并返回错误,管理员必须重新检查 `StorageClass` 对象,并且只标记一个作为默认值。这个插件忽略任何 `PersistentVolumeClaim` 的更新操作,只对创建起作用。 + + +查看 [persistent volume](/docs/user-guide/persistent-volumes) 文档了解 persistent volume claims 和 storage classes 并了解如何将一个 storage class 标记为默认。 + +### DefaultTolerationSeconds + + +这个插件为没有设置宽恕期(forgiveness toleration)的 pod 设置默认的容忍时间,使其可以容忍 `notready:NoExecute` 和 `unreachable:NoExecute` 这些 taint 5 分钟。 + +### PodNodeSelector + + +这个插件通过读取命名空间注解和全局配置,为命名空间设置默认的节点选择器,并限制在该命名空间中可以使用哪些节点选择器。 + + +#### 配置文件格式 + +PodNodeSelector 插件通过 `--admission-control-config-file` 指定的准入配置文件来设置配置选项。 + + +请注意,配置文件格式将在未来版本中移至版本化文件。 + + +该文件可以是 json 或 yaml,格式如下(`<node-selectors-labels>` 为占位符): + +```yaml +podNodeSelectorPluginConfig: + clusterDefaultNodeSelector: <node-selectors-labels> + namespace1: <node-selectors-labels> + namespace2: <node-selectors-labels> +``` + + +#### 配置注解格式 + +PodNodeSelector 插件使用键为 `scheduler.alpha.kubernetes.io/node-selector` 的注解将节点选择器分配给 namespace。 + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + annotations: + scheduler.alpha.kubernetes.io/node-selector: <node-selectors-labels> + name: namespace3 +``` + +### PodSecurityPolicy + + +此插件负责在创建和修改 pod 时,根据请求的安全上下文和可用的 pod 安全策略确定是否允许该 pod。 + + +对于 Kubernetes < 1.6.0 的版本,API Server 必须启用 extensions/v1beta1/podsecuritypolicy API 扩展组 (`--runtime-config=extensions/v1beta1/podsecuritypolicy=true`)。 + + +查看 [Pod 安全策略文档](/docs/concepts/policy/pod-security-policy/) 了解更多细节。 + +### NodeRestriction + + +这个插件限制了 kubelet 可以修改的 `Node` 和 `Pod` 对象。为了受到这个准入插件的限制,kubelet 必须使用 `system:nodes` 组中的凭证,并使用 `system:node:<nodeName>` 形式的用户名。这样的 kubelet 只允许修改自己的 `Node` API 对象,并且只能修改绑定到本节点的 `Pod` 对象。 + +未来的版本可能会添加额外的限制,以确保 kubelet 具有正确操作所需的最小权限集。 + + +## 是否有推荐的一组插件可供使用? + + +有。 +对于 Kubernetes >= 1.6.0 版本,我们强烈建议运行以下一系列准入控制插件(顺序也很重要) + +```shell +--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds +``` + + +对于 Kubernetes >= 1.4.0 版本,我们强烈建议运行以下一系列准入控制插件(顺序也很重要) + +```shell +--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota +``` + + +对于 Kubernetes >= 1.2.0 版本,我们强烈建议运行以下一系列准入控制插件(顺序也很重要) + +```shell +--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota +``` + + +对于 Kubernetes >= 1.0.0 版本,我们强烈建议运行以下一系列准入控制插件(顺序也很重要) + +```shell +--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,PersistentVolumeLabel,ResourceQuota +``` diff --git a/content/zh/docs/reference/access-authn-authz/authorization.md b/content/zh/docs/reference/access-authn-authz/authorization.md new file mode 100644 index 0000000000000..e8d8d8043e095 --- /dev/null +++ b/content/zh/docs/reference/access-authn-authz/authorization.md @@ -0,0 +1,304 @@ +--- +reviewers: +- erictune +- lavalamp +- deads2k +- liggitt +cnapprove: +- fatalc + +title: 授权概述 +content_template: templates/concept +weight: 60 +--- + +{{% capture overview %}} + +了解有关 Kubernetes 授权的更多信息,包括使用支持的授权模块创建策略的详细信息。 +{{% /capture %}} + +{{% capture body %}} + + +在 Kubernetes 中,您必须在授权(授予访问权限)之前进行身份验证(登录),有关身份验证的信息, +请参阅 [访问控制概述](/docs/reference/access-authn-authz/controlling-access/)。
+ +Kubernetes 要求请求具有 REST API 共有的属性。 +这意味着 Kubernetes 的授权机制可以与现有的组织范围或云提供商范围的访问控制系统协同工作, +这些系统除了 Kubernetes API 之外,还可以处理其他 API。 + + + +## 确定是允许还是拒绝请求 +Kubernetes 使用 API 服务器对 API 请求进行授权。它根据所有策略评估请求的各项属性,以决定允许还是拒绝该请求。 +API 请求的每个部分都必须被某个策略允许才能继续,也就是说,默认情况下访问是被拒绝的。 + +(尽管 Kubernetes 使用 API 服务器,但是依赖于特定对象种类的特定字段的访问控制和策略是由准入控制器处理的。) + +配置多个授权模块时,将按顺序检查每个模块。 +如果任何授权模块批准或拒绝请求,则立即返回该决定,并且不会与其他授权模块协商。 +如果所有模块都对请求没有意见,则拒绝该请求。拒绝响应返回 HTTP 状态码 403。 + + + +## 审查您的请求属性 +Kubernetes 仅审查以下 API 请求属性: + + * **user** - 身份验证期间提供的 `user` 字符串。 + * **group** - 经过身份验证的用户所属的组名列表。 + * **extra** - 由身份验证层提供的任意字符串键到字符串值的映射。 + * **API** - 指示请求是否针对 API 资源。 + * **Request path** - 各种非资源端点的路径,如 `/api` 或 `/healthz`。 + * **API request verb** - API 动词 `get`、`list`、`create`、`update`、`patch`、`watch`、`proxy`、`redirect`、`delete` 和 `deletecollection` 用于资源请求。要确定资源 API 端点的请求动词,请参阅[确定请求动词](/docs/reference/access-authn-authz/authorization/#determine-the-request-verb)。 + * **HTTP request verb** - HTTP 动词 `get`、`post`、`put` 和 `delete` 用于非资源请求。 + * **Resource** - 正在访问的资源的 ID 或名称(仅限资源请求)- 对于使用 `get`、`update`、`patch` 和 `delete` 动词的资源请求,您必须提供资源名称。 + * **Subresource** - 正在访问的子资源(仅限资源请求)。 + * **Namespace** - 正在访问的对象的名称空间(仅适用于命名空间资源请求)。 + * **API group** - 正在访问的 API 组(仅限资源请求)。空字符串表示[核心 API 组](/docs/concepts/overview/kubernetes-api/)。 + + + +## 确定请求动词 + +要确定资源 API 端点的请求动词,请查看所使用的 HTTP 动词,以及请求是作用于单个资源还是资源集合: + +HTTP 动词 | request 动词 +----------|--------------- +POST | create +GET, HEAD | get (单个资源), list (资源集合) +PUT | update +PATCH | patch +DELETE | delete (单个资源), deletecollection (资源集合) + +Kubernetes 有时会对专门的动词进行授权检查,以控制额外的权限。例如: + +* [Pod 安全策略](/docs/concepts/policy/pod-security-policy/) 检查 `policy` API 组中 `podsecuritypolicies` 资源的 `use` 动词的授权。 +* [RBAC](/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping) 检查 `rbac.authorization.k8s.io` API 组中 `roles` 和 `clusterroles` 资源的 `bind` 动词的授权。 +* [认证](/docs/reference/access-authn-authz/authentication/) 层检查核心 API 组中 `users`、`groups` 和 `serviceaccounts` 的 `impersonate` 动词的授权,以及 `authentication.k8s.io` API 组中 `userextras` 的授权。
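 + +可以结合下文"检查 API 访问"一节介绍的 `kubectl auth can-i` 来快速验证这类专门动词是否被授权,例如(资源与动词仅为示例): + +```shell +# 检查当前用户能否对 rbac.authorization.k8s.io API 组中的 clusterroles 执行 bind 动词 +kubectl auth can-i bind clusterroles.rbac.authorization.k8s.io + +# 检查当前用户能否对 policy API 组中的 podsecuritypolicies 执行 use 动词 +kubectl auth can-i use podsecuritypolicies.policy +``` + + + +## 授权模块 + * **Node** - 一个专用授权程序,根据计划运行的 pod 为 kubelet 授予权限。了解有关使用节点授权模式的更多信息,请参阅[节点授权](/docs/reference/access-authn-authz/node/)。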
* **ABAC** - 基于属性的访问控制(ABAC)定义了一种访问控制范例,通过使用将属性组合在一起的策略,将访问权限授予用户。策略可以使用任何类型的属性(用户属性、资源属性、对象、环境属性等)。要了解有关使用 ABAC 模式的更多信息,请参阅 [ABAC 模式](/docs/reference/access-authn-authz/abac/)。 + * **RBAC** - 基于角色的访问控制(RBAC)是一种基于企业内个人用户的角色来管理对计算机或网络资源的访问的方法。在此上下文中,权限是单个用户执行特定任务的能力,例如查看、创建或修改文件。要了解有关使用 RBAC 模式的更多信息,请参阅 [RBAC 模式](/docs/reference/access-authn-authz/rbac/)。 + * 启用 RBAC(基于角色的访问控制)时,它使用 `rbac.authorization.k8s.io` API 组来驱动授权决策,允许管理员通过 Kubernetes API 动态配置权限策略。 + * 要启用 RBAC,请使用 `--authorization-mode=RBAC` 启动 apiserver。 + * **Webhook** - WebHook 是一个 HTTP 回调:发生某些事情时调用的 HTTP POST;通过 HTTP POST 进行简单的事件通知。实现 WebHooks 的 Web 应用程序会在发生某些事情时将消息发布到 URL。要了解有关使用 Webhook 模式的更多信息,请参阅 [Webhook 模式](/docs/reference/access-authn-authz/webhook/)。 + + + +#### 检查 API 访问 + +`kubectl` 提供 `auth can-i` 子命令,用于快速查询 API 授权层。 +该命令使用 `SelfSubjectAccessReview` API 来确定当前用户是否可以执行给定操作,并且无论使用何种授权模式都可以工作。 + +```bash +$ kubectl auth can-i create deployments --namespace dev +yes +$ kubectl auth can-i create deployments --namespace prod +no +``` + +管理员可以将此与[用户模拟](/docs/reference/access-authn-authz/authentication/#user-impersonation)结合使用,以确定其他用户可以执行的操作。 + +```bash +$ kubectl auth can-i list secrets --namespace dev --as dave +no +``` + + +`SelfSubjectAccessReview` 是 `authorization.k8s.io` API 组的一部分,它将 API 服务器的授权机制暴露给外部服务。 +该组中的其他资源包括: + +* `SubjectAccessReview` - 对任何用户(而不仅仅是当前用户)进行访问审查。用于将授权决策委派给 API 服务器。例如,kubelet 和扩展 API 服务器使用它来确定用户对它们自己的 API 的访问权限。 +* `LocalSubjectAccessReview` - 与 `SubjectAccessReview` 类似,但仅限于特定的命名空间。 +* `SelfSubjectRulesReview` - 返回用户在某命名空间内可执行的操作集合。便于用户快速汇总自己的访问权限,也可供 UI 据此隐藏或显示操作。 + +可以通过创建普通的 Kubernetes 资源来查询这些 API,返回对象的响应 "status" 字段即为查询结果。 + +```bash +$ kubectl create -f - -o yaml << EOF +apiVersion: authorization.k8s.io/v1 +kind: SelfSubjectAccessReview +spec: + resourceAttributes: + group: apps + resource: deployments + verb: create + namespace: dev +EOF + +apiVersion: authorization.k8s.io/v1 +kind: SelfSubjectAccessReview +metadata: + creationTimestamp: null +spec: + resourceAttributes: + group: apps + resource: deployments + namespace: dev + verb: create +status: + allowed: true + denied: false +``` + + + +## 为您的授权模块使用标志 + +您必须在 API server 的启动参数中包含标志,以指明要使用的授权模块: + +可以使用以下标志: + + * `--authorization-mode=ABAC` 基于属性的访问控制(ABAC)模式允许您使用本地文件配置策略。 + * `--authorization-mode=RBAC` 基于角色的访问控制(RBAC)模式允许您使用 Kubernetes API 创建和存储策略。 + * `--authorization-mode=Webhook` WebHook 是一种 HTTP 回调模式,允许您使用远程 REST 端点管理授权。 + * `--authorization-mode=Node` 节点授权是一种特殊用途的授权模式,专门授权由 kubelet 发出的 API 请求。 + * `--authorization-mode=AlwaysDeny` 该标志阻止所有请求。仅将此标志用于测试。 + * `--authorization-mode=AlwaysAllow` 此标志允许所有请求。仅在您不需要 API 请求的授权时才使用此标志。 + +您可以选择多个授权模块。系统会按顺序检查各模块,排在前面的模块具有更高的优先级来允许或拒绝请求。 + + +## 通过创建 pod 提升权限 + +能够在命名空间中创建 pod 的用户可能会提升自己在该命名空间内的权限。 +他们可以创建能够访问该命名空间内特权信息的 pod, +例如能够访问用户自身无法读取的 secret 的 pod,或者以具有不同/更高权限的服务帐户运行的 pod。 + +{{< caution >}} + +**注意:** 系统管理员在授予创建 pod 的权限时要小心。 +获得在命名空间中创建 pod(或创建 pod 的控制器)权限的用户可以: +读取命名空间中的所有 secret;读取命名空间中的所有 configmap; +并可以模拟命名空间中的任何服务帐户,执行该帐户可以执行的任何操作。 +无论采用何种授权方式,这都适用。 +{{< /caution >}} +{{% /capture %}} + +{{% capture whatsnext %}} + +* 要了解有关身份验证的更多信息,请参阅 **身份验证** [控制对 Kubernetes API 的访问](/docs/reference/access-authn-authz/controlling-access/)。 +* 要了解有关准入控制的更多信息,请参阅 [使用准入控制器](/docs/reference/access-authn-authz/admission-controllers/)。
+{{% /capture %}} \ No newline at end of file diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md b/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md new file mode 100644 index 0000000000000..80acad7403fb8 --- /dev/null +++ b/content/zh/docs/reference/command-line-tools-reference/kube-proxy.md @@ -0,0 +1,491 @@ +--- +title: kube-proxy +notitle: true +--- +## kube-proxy + + + +### 概要 + +Kubernetes 在每个节点上运行网络代理。这反映每个节点上 Kubernetes API 中定义的服务,并且可以做简单的 +TCP 和 UDP 流转发或在一组后端中轮询,进行 TCP 和 UDP 转发。目前服务集群 IP 和端口通过由服务代理打开的端口 +的 Docker-links-compatible 环境变量找到。有一个可选的为这些集群 IP 提供集群 DNS 的插件。用户必须 +通过 apiserver API 创建服务去配置代理。 + +``` +kube-proxy [flags] +``` + + +### 选项 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
--azure-container-registry-config string
包含 Azure 容器仓库配置信息的文件的路径。
--bind-address 0.0.0.0     默认: 0.0.0.0
要服务的代理服务器的 IP 地址(对于所有 IPv4 接口设置为 0.0.0.0,对于所有 IPv6 接口设置为 ::)
--cleanup
如果为 true,清理 iptables 和 ipvs 规则并退出。
--cleanup-ipvs     默认: true
如果为 true,则使 kube-proxy 在运行之前清理 ipvs 规则。 默认为 true
--cluster-cidr string
集群中的 CIDR 范围。 配置后,从该范围之外发送到服务集群 IP 的流量将被伪装,从 pod 发送到外部 LoadBalancer IP 的流量将被定向到相应的集群 IP
--config string
配置文件的路径。
--config-sync-period duration     默认: 15m0s
来自 apiserver 的配置的刷新频率。必须大于 0。
--conntrack-max-per-core int32     默认: 32768
每个 CPU 核跟踪的最大 NAT 连接数(0 表示保留原样限制并忽略 conntrack-min)。
--conntrack-min int32     默认: 131072
要分配的最小 conntrack 条目数,不管 conntrack-max-per-core(设置 conntrack-max-per-core = 0 保留原样限制)。
--conntrack-tcp-timeout-close-wait duration     默认: 1h0m0s
处于 CLOSE_WAIT 状态的 TCP 连接的 NAT 超时
--conntrack-tcp-timeout-established duration     默认: 24h0m0s
已建立的 TCP 连接的空闲超时(0 保持原样)
--feature-gates mapStringBool
一组 key=value 对,用于描述 alpha/experimental 特征的特征门。选项包括:
APIListChunking=true|false (BETA - 默认=true)
APIResponseCompression=true|false (ALPHA - 默认=false)
AdvancedAuditing=true|false (BETA - 默认=true)
AllAlpha=true|false (ALPHA - 默认=false)
AppArmor=true|false (BETA - 默认=true)
AttachVolumeLimit=true|false (ALPHA - 默认=false)
BalanceAttachedNodeVolumes=true|false (ALPHA - 默认=false)
BlockVolume=true|false (ALPHA - 默认=false)
CPUManager=true|false (BETA - 默认=true)
CRIContainerLogRotation=true|false (BETA - 默认=true)
CSIBlockVolume=true|false (ALPHA - 默认=false)
CSIPersistentVolume=true|false (BETA - 默认=true)
CustomPodDNS=true|false (BETA - 默认=true)
CustomResourceSubresources=true|false (BETA - 默认=true)
CustomResourceValidation=true|false (BETA - 默认=true)
DebugContainers=true|false (ALPHA - 默认=false)
DevicePlugins=true|false (BETA - 默认=true)
DynamicKubeletConfig=true|false (BETA - 默认=true)
DynamicProvisioningScheduling=true|false (ALPHA - 默认=false)
EnableEquivalenceClassCache=true|false (ALPHA - 默认=false)
ExpandInUsePersistentVolumes=true|false (ALPHA - 默认=false)
ExpandPersistentVolumes=true|false (BETA - 默认=true)
ExperimentalCriticalPodAnnotation=true|false (ALPHA - 默认=false)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认=false)
GCERegionalPersistentDisk=true|false (BETA - 默认=true)
HugePages=true|false (BETA - 默认=true)
HyperVContainer=true|false (ALPHA - 默认=false)
Initializers=true|false (ALPHA - 默认=false)
KubeletPluginsWatcher=true|false (ALPHA - 默认=false)
LocalStorageCapacityIsolation=true|false (BETA - 默认=true)
MountContainers=true|false (ALPHA - 默认=false)
MountPropagation=true|false (BETA - 默认=true)
PersistentLocalVolumes=true|false (BETA - 默认=true)
PodPriority=true|false (BETA - 默认=true)
PodReadinessGates=true|false (BETA - 默认=false)
PodShareProcessNamespace=true|false (ALPHA - 默认=false)
QOSReserved=true|false (ALPHA - 默认=false)
ReadOnlyAPIDataVolumes=true|false (弃用 - 默认=true)
ResourceLimitsPriorityFunction=true|false (ALPHA - 默认=false)
ResourceQuotaScopeSelectors=true|false (ALPHA - 默认=false)
RotateKubeletClientCertificate=true|false (BETA - 默认=true)
RotateKubeletServerCertificate=true|false (ALPHA - 默认=false)
RunAsGroup=true|false (ALPHA - 默认=false)
ScheduleDaemonSetPods=true|false (ALPHA - 默认=false)
ServiceNodeExclusion=true|false (ALPHA - 默认=false)
ServiceProxyAllowExternalIPs=true|false (弃用 - 默认=false)
StorageObjectInUseProtection=true|false (默认=true)
StreamingProxyRedirects=true|false (BETA - 默认=true)
SupportIPVSProxyMode=true|false (默认=true)
SupportPodPidsLimit=true|false (ALPHA - 默认=false)
Sysctls=true|false (BETA - 默认=true)
TaintBasedEvictions=true|false (ALPHA - 默认=false)
TaintNodesByCondition=true|false (ALPHA - 默认=false)
TokenRequest=true|false (ALPHA - 默认=false)
TokenRequestProjection=true|false (ALPHA - 默认=false)
VolumeScheduling=true|false (BETA - 默认=true)
VolumeSubpath=true|false (默认=true)
VolumeSubpathEnvExpansion=true|false (ALPHA - 默认=false)
--healthz-bind-address 0.0.0.0     默认: 0.0.0.0:10256
服务健康检查的 IP 地址和端口(对于所有 IPv4 接口设置为 0.0.0.0,对于所有 IPv6 接口设置为 ::)
--healthz-port int32     默认: 10256
绑定健康检查服务的端口。使用 0 禁用。
-h, --help
kube-proxy 帮助信息
--hostname-override string
如果非空,将使用此字符串作为标识而不是实际的主机名。
--iptables-masquerade-bit int32     默认: 14
如果使用纯 iptables 代理,则 fwmark 空间的位用于标记需要 SNAT 的数据包。 必须在 [0,31] 范围内。
--iptables-min-sync-period duration
当端点和服务发生变化时,iptables 规则的刷新的最小间隔(例如 '5s','1m','2h22m')。
--iptables-sync-period duration     默认: 30s
iptables 规则刷新的最大时间间隔(例如 '5s','1m','2h22m')。必须大于 0。
--ipvs-exclude-cidrs stringSlice
以逗号分隔的 CIDR 列表,在清理 IPVS 规则时,不应该触及 ipvs proxier。
--ipvs-min-sync-period duration
当端点和服务发生变化时,ipvs 规则的刷新的最小间隔(例如 '5s','1m','2h22m')。
--ipvs-scheduler string
代理模式为 ipvs 时的 ipvs 调度器类型
--ipvs-sync-period duration     默认: 30s
ipvs 规则刷新的最大时间间隔(例如 '5s','1m','2h22m')。必须大于 0。
--kube-api-burst int32     默认: 10
与 kubernetes apiserver 交互时允许的突发请求数量
--kube-api-content-type string     默认: "application/vnd.kubernetes.protobuf"
发送到 apiserver 的请求的内容类型。
--kube-api-qps float32     默认: 5
与 kubernetes apiserver 交互时使用的 QPS
--kubeconfig string
包含授权信息的 kubeconfig 文件的路径(master 位置由 master 标志设置)。
--log-flush-frequency duration     默认: 5s
日志刷新最大间隔
--masquerade-all
如果使用纯 iptables 代理,对所有通过服务集群 IP 发送的流量进行 SNAT(这通常不需要)
--master string
Kubernetes API 服务器的地址(覆盖 kubeconfig 中的任何值)
--metrics-bind-address 0.0.0.0     默认: 127.0.0.1:10249
要服务的度量服务器的 IP 地址和端口(对于所有 IPv4 接口设置为 0.0.0.0,对于所有 IPv6 接口设置为 ::)
--nodeport-addresses stringSlice
一个字符串值,指定用于 NodePorts 的地址。 值可以是有效的 IP 块(例如 1.2.3.0/24, 1.2.3.4/32)。 默认的空字符串切片([])表示使用所有本地地址。
--oom-score-adj int32     默认: -999
kube-proxy 进程的 oom-score-adj 值。 值必须在 [-1000,1000] 范围内
--profiling
如果为 true,则通过 Web 接口 /debug/pprof 启用性能分析。
--proxy-mode ProxyMode
使用哪种代理模式:'userspace'(较旧)、'iptables'(较快)或 'ipvs'(实验性)。 如果为空,使用最佳可用代理(当前为 iptables)。 如果选择了 iptables 代理,但系统的内核或 iptables 版本不足,则总是会回退到用户空间代理。
--proxy-port-range port-range
可被代理服务流量使用的主机端口范围(beginPort-endPort、单个端口或 beginPort+offset,均为闭区间)。 如果未指定、为 0 或 0-0,则随机选择端口。
--udp-timeout duration     默认: 250ms
空闲 UDP 连接将保持打开的时长(例如 '250ms','2s')。 必须大于 0。仅适用于 proxy-mode=userspace
--version version[=true]
打印版本信息并退出
--write-config-to string
如果设置,将配置值写入此文件并退出。
diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/zh/docs/reference/command-line-tools-reference/kube-scheduler.md new file mode 100644 index 0000000000000..4180967d696cc --- /dev/null +++ b/content/zh/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -0,0 +1,380 @@ +--- +title: kube-scheduler +notitle: true +--- +## kube-scheduler + + + + +### 概要 + + +Kubernetes 调度器是一个策略丰富、拓扑感知、工作负载特定的功能,显著影响可用性、性能和容量。调度器需要考虑个人和集体 +的资源要求、服务质量要求、硬件/软件/政策约束、亲和力和反亲和力规范、数据局部性、负载间干扰、完成期限等。 +工作负载特定的要求必要时将通过 API 暴露。 + +``` +kube-scheduler [flags] +``` + + +### 选项 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
--address string     默认: "0.0.0.0"
弃用: 要监听 --port 端口的 IP 地址(对于所有 IPv4 接口设置为 0.0.0.0,对于所有 IPv6 接口设置为 ::)。 请参阅 --bind-address。
--algorithm-provider string
弃用: 要使用的调度算法,可选值:ClusterAutoscalerProvider | DefaultProvider
--azure-container-registry-config string
包含 Azure 容器仓库配置信息的文件的路径。
--config string
配置文件的路径。标志会覆盖此文件中的值。
--contention-profiling
弃用: 如果启用了性能分析,则启用锁竞争分析
--feature-gates mapStringBool
一组 key=value 对,用于描述 alpha/experimental 特征的特征门。选项包括:
APIListChunking=true|false (BETA - 默认=true)
APIResponseCompression=true|false (ALPHA - 默认=false)
AdvancedAuditing=true|false (BETA - 默认=true)
AllAlpha=true|false (ALPHA - 默认=false)
AppArmor=true|false (BETA - 默认=true)
AttachVolumeLimit=true|false (ALPHA - 默认=false)
BalanceAttachedNodeVolumes=true|false (ALPHA - 默认=false)
BlockVolume=true|false (ALPHA - 默认=false)
CPUManager=true|false (BETA - 默认=true)
CRIContainerLogRotation=true|false (BETA - 默认=true)
CSIBlockVolume=true|false (ALPHA - 默认=false)
CSIPersistentVolume=true|false (BETA - 默认=true)
CustomPodDNS=true|false (BETA - 默认=true)
CustomResourceSubresources=true|false (BETA - 默认=true)
CustomResourceValidation=true|false (BETA - 默认=true)
DebugContainers=true|false (ALPHA - 默认=false)
DevicePlugins=true|false (BETA - 默认=true)
DynamicKubeletConfig=true|false (BETA - 默认=true)
DynamicProvisioningScheduling=true|false (ALPHA - 默认=false)
EnableEquivalenceClassCache=true|false (ALPHA - 默认=false)
ExpandInUsePersistentVolumes=true|false (ALPHA - 默认=false)
ExpandPersistentVolumes=true|false (BETA - 默认=true)
ExperimentalCriticalPodAnnotation=true|false (ALPHA - 默认=false)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认=false)
GCERegionalPersistentDisk=true|false (BETA - 默认=true)
HugePages=true|false (BETA - 默认=true)
HyperVContainer=true|false (ALPHA - 默认=false)
Initializers=true|false (ALPHA - 默认=false)
KubeletPluginsWatcher=true|false (ALPHA - 默认=false)
LocalStorageCapacityIsolation=true|false (BETA - 默认=true)
MountContainers=true|false (ALPHA - 默认=false)
MountPropagation=true|false (BETA - 默认=true)
PersistentLocalVolumes=true|false (BETA - 默认=true)
PodPriority=true|false (BETA - 默认=true)
PodReadinessGates=true|false (BETA - 默认=false)
PodShareProcessNamespace=true|false (ALPHA - 默认=false)
QOSReserved=true|false (ALPHA - 默认=false)
ReadOnlyAPIDataVolumes=true|false (弃用 - 默认=true)
ResourceLimitsPriorityFunction=true|false (ALPHA - 默认=false)
ResourceQuotaScopeSelectors=true|false (ALPHA - 默认=false)
RotateKubeletClientCertificate=true|false (BETA - 默认=true)
RotateKubeletServerCertificate=true|false (ALPHA - 默认=false)
RunAsGroup=true|false (ALPHA - 默认=false)
ScheduleDaemonSetPods=true|false (ALPHA - 默认=false)
ServiceNodeExclusion=true|false (ALPHA - 默认=false)
ServiceProxyAllowExternalIPs=true|false (弃用 - 默认=false)
StorageObjectInUseProtection=true|false (默认=true)
StreamingProxyRedirects=true|false (BETA - 默认=true)
SupportIPVSProxyMode=true|false (默认=true)
SupportPodPidsLimit=true|false (ALPHA - 默认=false)
Sysctls=true|false (BETA - 默认=true)
TaintBasedEvictions=true|false (ALPHA - 默认=false)
TaintNodesByCondition=true|false (ALPHA - 默认=false)
TokenRequest=true|false (ALPHA - 默认=false)
TokenRequestProjection=true|false (ALPHA - 默认=false)
VolumeScheduling=true|false (BETA - 默认=true)
VolumeSubpath=true|false (默认=true)
VolumeSubpathEnvExpansion=true|false (ALPHA - 默认=false)
-h, --help
kube-scheduler 帮助信息
--kube-api-burst int32     默认: 100
弃用: 与 kubernetes apiserver 交互时允许的突发请求数量
--kube-api-content-type string     默认: "application/vnd.kubernetes.protobuf"
弃用: 发送到 apiserver 的请求的内容类型
--kube-api-qps float32     默认: 50
弃用: 与 kubernetes apiserver 交互时使用的 QPS
--kubeconfig string
弃用: 包含授权和 master 位置信息的 kubeconfig 文件的路径。
--leader-elect     默认: true
在执行主循环之前,启动 leader 选举客户端并获得领导能力。在运行复制组件以实现高可用性时启用此选项。
--leader-elect-lease-duration duration     默认: 15s
非 leader 候选人在观察到 leader 续约之后,需要等待多长时间才会尝试获取已被占用但尚未续约的 leader 位置。这实际上是 leader 在被另一个候选人取代之前可以停止工作的最长时间。仅在启用 leader 选举时适用。
--leader-elect-renew-deadline duration     默认: 10s
现任 leader 在停止领导之前尝试续约 leader 位置的时间间隔。必须小于或等于租约期限。仅在启用 leader 选举时适用。
--leader-elect-resource-lock endpoints     默认: "endpoints"
在 leader 选举期间用于锁定的资源对象的类型。支持的选项是 `endpoints`(默认)和 `configmaps`。
--leader-elect-retry-period duration     默认: 2s
客户端在两次尝试获取或续约 leader 位置之间应等待的时长。仅在启用 leader 选举时适用。
--lock-object-name string     默认: "kube-scheduler"
弃用: 定义锁对象的名称。
--lock-object-namespace string     默认: "kube-system"
弃用: 定义锁对象的命名空间。
--log-flush-frequency duration     默认: 5s
日志刷新最大间隔
--master string
Kubernetes API 服务器的地址(覆盖 kubeconfig 中的任何值)
--policy-config-file string
弃用: 包含调度器策略配置的文件。如果未提供策略 ConfigMap 或 --use-legacy-policy-config==true,则使用此文件
--policy-configmap string
弃用: 包含调度器策略配置的 ConfigMap 对象的名称。如果 --use-legacy-policy-config==false,它必须在调度器初始化之前存在于系统命名空间中。配置必须作为 'Data' 映射中元素的值提供,其中 key='policy.cfg'
--policy-configmap-namespace string     默认: "kube-system"
弃用: 策略 ConfigMap 所在的命名空间。 如果未提供此命名空间或为空,则将使用系统命名空间。
--port int     默认: 10251
弃用: 不安全地提供没有身份验证和授权的 HTTP 端口。 如果为0,则根本不提供 HTTPS。 请参阅 --secure-port。
--profiling
弃用: 通过 web 接口 host:port/debug/pprof/ 启动性能分析
--scheduler-name string     默认: "default-scheduler"
弃用: 调度器名称,用于根据 pod 的 "spec.SchedulerName" 选择哪些 pod 将被此调度器处理。
--use-legacy-policy-config
弃用: 当设置为 true 时,调度器将忽略策略 ConfigMap 并使用策略配置文件
--version version[=true]
打印版本信息并退出
--write-config-to string
如果设置,将配置值写入此文件并退出。
diff --git a/content/zh/docs/reference/command-line-tools-reference/kubelet.md b/content/zh/docs/reference/command-line-tools-reference/kubelet.md new file mode 100644 index 0000000000000..0413ade7d9168 --- /dev/null +++ b/content/zh/docs/reference/command-line-tools-reference/kubelet.md @@ -0,0 +1,112 @@ +--- +title: kubelet +notitle: true +--- +## kubelet + + + + +### 概要 + + + +kubelet 是在每个节点上运行的主要 "节点代理"。kubelet 以 PodSpec 为单位来运行任务,PodSpec 是一个描述 pod 的 YAML 或 JSON 对象。 +kubelet 运行多种机制(主要通过 apiserver)提供的一组 PodSpec,并确保这些 PodSpecs 中描述的容器健康运行。 +不是 Kubernetes 创建的容器将不在 kubelet 的管理范围。 + + +除了来自 apiserver 的 PodSpec 之外,还有三种方法可以将容器清单提供给 Kubelet。 + +文件:通过命令行传入的文件路径。kubelet 将定期监听该路径下的文件以获得更新。监视周期默认为 20 秒,可通过参数进行配置。 + +HTTP 端点:HTTP 端点以命令行参数传入。每 20 秒检查一次该端点(该时间间隔也是可以通过命令行配置的)。 + +HTTP 服务:kubelet 还可以监听 HTTP 并响应简单的 API(当前未指定)以提交新的清单。 + +``` +kubelet [flags] +``` + + +### 选项 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
--azure-container-registry-config string
包含 Azure 容器仓库配置信息的文件的路径。
-h, --help
kubelet 的帮助信息
--log-flush-frequency duration     默认: 5s
日志刷新最大间隔
--version version[=true]
打印版本信息并退出
+ + + diff --git a/content/zh/docs/reference/kubectl/kubectl.md b/content/zh/docs/reference/kubectl/kubectl.md new file mode 100644 index 0000000000000..052ce700a6e9e --- /dev/null +++ b/content/zh/docs/reference/kubectl/kubectl.md @@ -0,0 +1,178 @@ +--- +title: kubectl +notitle: true +--- + +## kubectl + +kubectl 可以控制 Kubernetes 集群。 + + +### 概要 + +kubectl 可以控制 Kubernetes 集群。 + +获取更多信息请访问:https://kubernetes.io/docs/reference/kubectl/overview/ + +``` +kubectl [flags] +``` + +### 选项 +``` + --alsologtostderr 同时将日志输出到标准错误和文件 + --as string 模拟指定的用户执行操作 + --as-group stringArray 模拟操作时使用的组,可多次使用该参数以指定多个组。 + --cache-dir string 默认 HTTP 缓存目录(默认值 "/home/username/.kube/http-cache") + --certificate-authority string 用于验证证书颁发机构的 .cert 文件路径 + --client-certificate string TLS 使用的客户端证书路径 + --client-key string TLS 使用的客户端密钥文件路径 + --cluster string 指定要使用的 kubeconfig 文件中的集群名 + --context string 指定要使用的 kubeconfig 文件中的上下文 + -h, --help kubectl 帮助信息 + --insecure-skip-tls-verify 值为 true 时,不检查服务器证书的有效性。这将使您的 HTTPS 连接不安全 + --kubeconfig string CLI 请求使用的 kubeconfig 文件路径 + --log-backtrace-at traceLocation 日志命中指定行数时输出堆栈信息(默认值 0) + --log-dir string 若不为空,将日志文件写入此目录 + --logtostderr 将日志输出到标准错误而不是文件 + --match-server-version 要求客户端版本与服务端版本相匹配 + -n, --namespace string 如果存在,CLI 请求将使用此命名空间 + --request-timeout string 放弃单个服务器请求前的等待时间,非零值需要包含相应的时间单位(如 1s、2m、3h)。值为零表示不超时(默认值 "0") + -s, --server string Kubernetes API server 的地址和端口 + --stderrthreshold severity 将达到或高于此阈值的日志输出到标准错误(默认值 2) + --token string 用于向 API server 进行身份认证的令牌 + --user string 指定要使用的 kubeconfig 文件中的用户名 + -v, --v Level 指定输出日志的详细级别 + --vmodule moduleSpec 指定按模块过滤日志的设置,格式如下:pattern=N,使用逗号分隔 +``` + +### 另请参阅 + +* [kubectl alpha](kubectl_alpha.md) - alpha 阶段特性的相关命令 +* [kubectl annotate](kubectl_annotate.md) - 更新资源上的注解 +* [kubectl api-resources](kubectl_api-resources.md) - 在服务器上打印支持的 API 资源 +* [kubectl api-versions](kubectl_api-versions.md) - 以 "group/version" 的形式在服务器上打印支持的 API 版本 +* [kubectl apply](kubectl_apply.md) - 通过文件名或标准输入将配置应用到资源 +* [kubectl attach](kubectl_attach.md) - 连接到正在运行的容器 +* [kubectl auth](kubectl_auth.md) - 检查授权 +* [kubectl autoscale](kubectl_autoscale.md) - 自动扩缩 Deployment, ReplicaSet 或 ReplicationController +* [kubectl certificate](kubectl_certificate.md) - 修改证书资源 +* [kubectl cluster-info](kubectl_cluster-info.md) - 展示集群信息 +* [kubectl completion](kubectl_completion.md) - 为指定 shell 生成补全代码(bash 或 zsh) +* [kubectl config](kubectl_config.md) - 修改 kubeconfig 文件 +* [kubectl convert](kubectl_convert.md) - 在不同的 API 版本之间转换配置文件 +* [kubectl cordon](kubectl_cordon.md) - 将 node 节点标记为不可调度 +* [kubectl cp](kubectl_cp.md) - 将文件和目录复制到容器,也可以将文件和目录从容器中复制出来 +* [kubectl create](kubectl_create.md) - 通过文件名或标准输入创建资源 +* [kubectl delete](kubectl_delete.md) - 通过文件名、标准输入、资源名称或标签选择器删除资源 +* [kubectl describe](kubectl_describe.md) - 显示特定资源或资源组的详细信息 +* [kubectl drain](kubectl_drain.md) - 为维护做准备,清空当前 node 节点上的负载 +* [kubectl edit](kubectl_edit.md) - 在服务器上编辑资源 +* [kubectl exec](kubectl_exec.md) - 在容器中执行命令 +* [kubectl explain](kubectl_explain.md) - 资源的文档说明 +* [kubectl expose](kubectl_expose.md) - 获取 replication controller, service, deployment 或 pod 资源,并将其作为新的 Kubernetes Service 暴露 +* [kubectl get](kubectl_get.md) - 展示一个或多个资源 +* [kubectl label](kubectl_label.md) - 更新资源的标签 +* [kubectl logs](kubectl_logs.md) - 打印 pod 中容器的日志 +* [kubectl options](kubectl_options.md) - 打印所有命令继承的标志列表 +* [kubectl patch](kubectl_patch.md) - 使用策略合并补丁更新资源字段 +* [kubectl plugin](kubectl_plugin.md) - 运行命令行插件 +* [kubectl port-forward](kubectl_port-forward.md) - 将 pod 的一个或多个端口转发到本地 +* [kubectl proxy](kubectl_proxy.md) - 为 Kubernetes API server 运行代理 +* [kubectl replace](kubectl_replace.md) - 通过文件名或标准输入替换资源 +* [kubectl rollout](kubectl_rollout.md) - 管理资源的上线发布 +* [kubectl run](kubectl_run.md) - 在集群上运行指定镜像 +* [kubectl scale](kubectl_scale.md) - 为 Deployment, ReplicaSet, Replication Controller 或 Job 设置新的副本数量 +* [kubectl set](kubectl_set.md) - 设置对象的特定功能 +* [kubectl taint](kubectl_taint.md) - 更新一个或多个 node 节点上的 taint 信息 +* [kubectl top](kubectl_top.md) - 展示资源 (CPU/Memory/Storage) 的使用信息 +* [kubectl uncordon](kubectl_uncordon.md) - 将 node 节点标记为可调度 +* [kubectl version](kubectl_version.md) - 打印客户端和服务端的版本信息
+* [kubectl wait](kubectl_wait.md) - 实验性:在一个或多个资源上等待特定条件 + + +###### 于 2018 年 6 月 16 日由 spf13/cobra 自动生成 + diff --git a/content/cn/docs/reference/labels-annotations-taints.md b/content/zh/docs/reference/labels-annotations-taints.md similarity index 100% rename from content/cn/docs/reference/labels-annotations-taints.md rename to content/zh/docs/reference/labels-annotations-taints.md diff --git a/content/zh/docs/reference/tools.md b/content/zh/docs/reference/tools.md new file mode 100644 index 0000000000000..565d94b2fe860 --- /dev/null +++ b/content/zh/docs/reference/tools.md @@ -0,0 +1,112 @@ + +--- +reviewers: +- janetkuo +title: 工具 +content_template: templates/concept +--- + + +{{% capture overview %}} +Kubernetes 包含一些工具,可以帮助用户更好地使用 Kubernetes 系统。 +{{% /capture %}} + +{{% capture body %}} +## Kubectl + + +[`kubectl`](/docs/tasks/tools/install-kubectl/) 是 Kubernetes 的命令行工具,可用于操控 Kubernetes 集群。 + +## Kubeadm + + +[`kubeadm`](/docs/tasks/tools/install-kubeadm/) 是一个命令行工具,可以方便地在物理服务器、云服务器或虚拟机上部署一个安全可靠的 Kubernetes 集群(目前处于 alpha 阶段)。 + +## Kubefed + + +[`kubefed`](/docs/tasks/federation/set-up-cluster-federation-kubefed/) 是一个命令行工具,可以帮助用户管理联邦集群。 + + +## Minikube + + +[`minikube`](/docs/tasks/tools/install-minikube/) 是一个可以方便地在用户工作站上本地运行单节点 Kubernetes 集群的工具,适用于开发和测试。 + + +## Dashboard + + +[`Dashboard`](/docs/tasks/access-application-cluster/web-ui-dashboard/) 是 Kubernetes 的 Web 用户界面,用户可以通过它将容器化应用部署到 Kubernetes 集群中、对应用进行故障排查,以及管理集群本身及其相关资源。 + +## Helm + + +[`Kubernetes Helm`](https://github.com/kubernetes/helm) 是一个管理预先配置好的 Kubernetes 资源包的工具,这些资源包也被称为 Kubernetes charts。 + + +使用 Helm 可以: + +* 查找并使用已经打包为 Kubernetes charts 的流行软件 +* 将自己的应用分享为 Kubernetes charts +* 为 Kubernetes 应用创建可重复执行的构建 +* 智能地管理 Kubernetes 清单文件 +* 管理 Helm 包的发布 + +## Kompose + + +[`Kompose`](https://github.com/kubernetes-incubator/kompose) 是一个转换工具,可以帮助 Docker Compose 用户迁移到 Kubernetes。 + + +使用 Kompose 可以: + +* 将一个 Docker Compose 文件转换成 Kubernetes 对象 +* 从本地 Docker 开发环境过渡到通过 Kubernetes 管理应用 +* 转换 v1 或 v2 版本的 Docker Compose `yaml` 文件或 [分布式应用程序包](https://docs.docker.com/compose/bundles/) +{{% /capture %}} \ No newline at end of file diff --git a/content/zh/docs/setup/salt.md b/content/zh/docs/setup/salt.md new file mode 100755 index 0000000000000..aa5ef2a87dedf --- /dev/null +++ b/content/zh/docs/setup/salt.md @@ -0,0 +1,244 @@ +--- +cn-approvers: +- linyouchong +reviewers: +- davidopp +title: 使用 Salt 配置 Kubernetes 集群 +weight: 70 +content_template: templates/concept +--- + + +{{% capture overview %}} + + +Kubernetes 集群可以使用 Salt 进行配置。 + + +这些 Salt 脚本可以在多个托管提供商之间共享。取决于您在何处托管 Kubernetes 集群,您可能正在使用多种不同的操作系统和多种不同的网络配置。因此,在修改 Salt 配置之前了解一些背景信息是很重要的,以便降低在其他主机托管提供商上集群配置失败的可能。 + +{{% /capture %}} + +{{% capture body %}} + + +## 创建 Salt 集群 + + +**salt-master** 服务运行在 kubernetes-master 节点 [(除了在默认的 GCE 环境和 OpenStack-Heat 环境)](#standalone-salt-configuration-on-gce-and-others)。 + + +**salt-minion** 服务运行在 kubernetes-master 节点和每个 kubernetes-node 节点。 + + +每个 salt-minion 服务通过 **master.conf** 文件配置为与 kubernetes-master 节点上的 **salt-master** 服务进行交互 [(除了 GCE 环境和 OpenStack-Heat 环境)](#standalone-salt-configuration-on-gce-and-others)。 + +```shell +cat /etc/salt/minion.d/master.conf +``` + +```none +master: kubernetes-master +``` + + +每个 salt-minion 都会与 salt-master 联系,根据其提供的机器信息,salt-master 会向其提供作为 kubernetes-master 或 kubernetes-node 用于运行 Kubernetes 所需要的能力。 + + +如果您正使用基于 Vagrant 的环境,**salt-api** 服务运行在 kubernetes-master 节点。它被配置为使 Vagrant 用户能够对 Salt 集群进行内省,以便通过 REST API 了解 Vagrant 环境中的机器的信息。 + + +## 在 GCE 和其它环境下独立配置 Salt + + +在 GCE 环境以及使用 Openstack-Heat 提供商的 OpenStack 环境中,master 和 node 节点被配置为 [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html)。每个 VM 的配置都源于它的 [instance metadata](https://cloud.google.com/compute/docs/metadata) 并被保存在 Salt grains (`/etc/salt/minion.d/grains.conf`) 和本地 Salt 用于保存执行状态的 pillars
(`/srv/salt-overlay/pillar/cluster-params.sls`) 中。 + + +对于 GCE 和 OpenStack,文中其余所有引用 master/minion 设置的部分都应该被忽略。这种设置的一个后果是 Salt mine 并不存在,节点之间没有任何配置共享。 + + +## Salt 安全 + + +*(不适用于 默认的 GCE 和 OpenStack-Heat 配置环境。)* + + +salt-master 没有启用安全功能,salt-master 被配置为自动接受所有来自 minion 的接入请求。在没有进一步研究之前,不建议在生产环境中使用这样的配置。(在某些环境中,如果 salt-master 端口不能从外部访问,并且您信任您的网络上的每个节点,这并不像它看起来那么糟糕) + +```shell +cat /etc/salt/master.d/auto-accept.conf +``` + +```shell +open_mode: True +auto_accept: True +``` + + +## 配置 Salt minion + + +Salt 集群中的每个 minion 都有一个相关的配置,它指示 salt-master 如何在机器上提供所需的资源。 + + +下面是一个基于 Vagrant 环境的示例文件: + +```shell +cat /etc/salt/minion.d/grains.conf +``` + +```yaml +grains: + etcd_servers: $MASTER_IP + cloud: vagrant + roles: + - kubernetes-master +``` + + +每个托管环境都使用了略微不同的 grains.conf 文件,用于在需要的 Salt 文件中构建条件逻辑。 + + +下面列举了目前支持的定义键/值对的集合。如果你添加了新的,请确保更新这个列表。 + + +键 | 值 +-----------------------------------|---------------------------------------------------------------- + +`api_servers` | (可选) IP 地址/主机名 ,kubelet 用其访问 kube-apiserver + +`cbr-cidr` | (可选) docker 容器网桥分配给 minion 节点的 IP 地址范围 + +`cloud` | (可选) 托管 Kubernetes 的 IaaS 平台, *gce*, *azure*, *aws*, *vagrant* + +`etcd_servers` | (可选) 以逗号分隔的 IP 地址列表,kube-apiserver 和 kubelet 使用其访问 etcd。默认使用 kubernetes_master 角色中第一台机器的 IP,在 GCE 环境上使用 127.0.0.1。 + +`hostnamef` | (可选) 机器的完整主机名,即:uname -n + +`node_ip` | (可选)用于定位本节点的 IP 地址 + +`hostname_override` | (可选)对应 kubelet 的 hostname-override 参数 + +`network_mode` | (可选)节点间使用的网络模型:*openvswitch* + +`networkInterfaceName` | (可选)用于绑定地址的网络接口,默认值 *eth0* + +`publicAddressOverride` | (可选)kube-apiserver 用于外部只读访问而绑定的IP地址 + +`roles` | (必选)1、`kubernetes-master` 表示本节点是 Kubernetes 集群的 master。2、`kubernetes-pool` 表示本节点是一个 kubernetes-node。根据角色,Salt 脚本会在机器上提供不同的资源 + + +这些键可以在 Salt sls 文件中用于条件分支。 + + +此外,一个集群可能运行在基于 Debian 的操作系统或基于 Red Hat 的操作系统(Centos、Fedora、RHEL 等)。因此,有时按操作系统区分行为(使用如下的分支判断)是很重要的。 + +```liquid + +{% if grains['os_family'] == 'RedHat' %} +// something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc.
+{% else %} +// something specific to Debian environment (apt-get, initd) +{% endif %} + +``` + + +## 最佳实践 + + +在为进程配置默认参数时,最好避免使用环境文件(Red Hat 环境中的 Systemd)或 init.d 文件(Debian 发行版)来保存那些应该在各操作系统环境中通用的默认值。这有助于保持我们的 Salt 模板文件易于理解,因为管理员可能不熟悉每个发行版的细节。 + + +## 未来的增强(网络) + + +每个 pod 的 IP 配置都是特定于提供商的,因此在进行网络更改时,必须将这些更改隔离开来,因为不同的提供商可能不会使用相同的机制(iptables、openvswitch 等)。 + + +我们应该定义一个 grains.conf 键,这样能更明确地捕获正在使用的网络环境配置,以避免将来在不同的提供商之间产生混淆。 + +{{% /capture %}} \ No newline at end of file diff --git a/content/zh/docs/tasks/access-application-cluster/access-cluster.md b/content/zh/docs/tasks/access-application-cluster/access-cluster.md new file mode 100644 index 0000000000000..3b6f66b203e03 --- /dev/null +++ b/content/zh/docs/tasks/access-application-cluster/access-cluster.md @@ -0,0 +1,573 @@ + +--- +title: 访问集群 +weight: 20 +content_template: templates/concept +--- + +{{% capture overview %}} + + +本文阐述多种与集群交互的方法。 + +{{% /capture %}} + +{{< toc >}} + +{{% capture body %}} + + +## 使用 kubectl 完成集群的第一次访问 + +当您第一次访问 Kubernetes API 的时候,我们建议您使用 Kubernetes CLI,`kubectl`。 + +访问集群时,您需要知道集群的地址并且拥有访问的凭证。通常,如果您是按照 [Getting started guide](/docs/setup/) 自行安装的集群,这些信息已经自动配置好;如果集群是由其他人安装的,则对方应当向您提供凭证和集群地址。 + +通过以下命令检查 kubectl 是否知道集群地址及凭证: + +```shell +$ kubectl config view +``` + + +有许多 [例子](/docs/user-guide/kubectl-cheatsheet) 介绍了如何使用 kubectl,可以在 [kubectl 手册](/docs/user-guide/kubectl-overview) 中找到更完整的文档。 + +## 直接访问 REST API +Kubectl 处理 apiserver 的定位和身份验证。 +如果要使用 curl 或 wget 等 http 客户端或浏览器直接访问 REST API,可以通过多种方式完成 apiserver 的定位和身份验证: + + + - 以代理模式运行 kubectl。 + - 推荐此方式。 + - 使用已存储的 apiserver 地址。 + - 使用自签名的证书来验证 apiserver 的身份。防止 MITM(中间人)攻击。 + - 对 apiserver 进行身份验证。 + - 未来可能会实现智能化的客户端负载均衡和故障恢复。 + - 直接向 http 客户端提供位置和凭据。 + - 可选的方案。 + - 适用于代理可能引起混淆的某些客户端类型。 + - 需要引入根证书到您的浏览器以防止 MITM 攻击。 + + +### 使用 kubectl 代理 + +以下命令以反向代理的模式运行 kubectl,由它来处理 apiserver 的定位和身份验证。 +像这样运行: + +```shell +$ kubectl proxy --port=8080 & +``` + + +参阅 [kubectl proxy](/docs/reference/generated/kubectl/kubectl-commands/#proxy) 获取更多详细信息。 + +然后,您可以使用 curl、wget 或浏览器访问 API,如果是 IPv6 则用 [::1] 替换 localhost,如下所示: + +```shell +$ curl http://localhost:8080/api/ +{ + "versions": [ + "v1" + ] +} +``` + + + +### 不使用 kubectl 代理 + +在 Kubernetes 1.3 或更高版本中,`kubectl config view` 不再显示 token。使用 `kubectl describe secret ...` 来获取默认服务帐户的 token,如下所示: + +```shell +$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ") +$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t') +$ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure +{ + "kind": "APIVersions", + "versions": [ + "v1" + ], + "serverAddressByClientCIDRs": [ + { + "clientCIDR": "0.0.0.0/0", + "serverAddress": "10.0.1.149:443" + } + ] +} +``` + + +上面的例子使用了 `--insecure` 参数,这使得它很容易受到 MITM 攻击。当 kubectl 访问集群时,它使用存储的根证书和客户端证书来访问服务器(这些安装在 `~/.kube` 目录中)。由于集群证书通常是自签名的,因此可能需要特殊配置才能让您的 http 客户端使用根证书。 + +在一些集群中,apiserver 不需要身份验证;它可能只服务于 localhost,或者被防火墙保护,这个没有一定的标准。 [配置对 API 的访问](/docs/admin/accessing-the-api) 描述了集群管理员如何进行配置。此类方法可能与未来的高可用性支持相冲突。 + + +## 以编程方式访问 API + +Kubernetes 官方提供对 [Go](#go-client) 和 [Python](#python-client) 的客户端库支持。 + +### Go 客户端 + +* 想要获得这个库,请运行命令:`go get k8s.io/client-go/<version number>/kubernetes`(将 `<version number>` 替换为所需的版本号)。参阅 [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go) 来查看目前支持哪些版本。 +* 基于这个 client-go 客户端库编写应用程序。 请注意,client-go 定义了自己的 API 对象,因此如果需要,请从 client-go 而不是从主存储库导入 API 定义,例如,`import "k8s.io/client-go/1.4/pkg/api/v1"` 才是对的。 + +Go 客户端可以像 kubectl CLI 一样使用相同的 [kubeconfig
文件](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) 来定位和验证 apiserver。可参阅 [示例](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go)。 + +如果应用程序以 Pod 的形式部署在集群中,那么请参阅 [下一章](#accessing-the-api-from-a-pod)。 + + +### Python 客户端 + +如果想要使用 [Python 客户端](https://github.com/kubernetes-client/python),请运行命令:`pip install kubernetes`。参阅 [Python Client Library page](https://github.com/kubernetes-client/python) 以获得更详细的安装说明。 + +Python 客户端可以像 kubectl CLI 一样使用相同的 [kubeconfig 文件](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) 来定位和验证 apiserver,可参阅 [示例](https://github.com/kubernetes-client/python/tree/master/examples/example1.py)。 + +### 其它语言 + +目前有多个 [客户端库](/docs/reference/using-api/client-libraries/) 为其它语言提供访问 API 的方法。 +参阅各个库的相关文档,了解它们是如何进行身份验证的。 + + +### 从 Pod 中访问 API + +当您从 Pod 中访问 API 时,定位和验证 apiserver 会有些许不同。 + +在 Pod 中定位 apiserver 的推荐方式是通过 `kubernetes.default.svc` 这个 DNS 名称,该名称将会解析为服务 IP,然后服务 IP 将会路由到 apiserver。 + +向 apiserver 进行身份验证的推荐方法是使用 [服务帐户](/docs/tasks/configure-pod-container/configure-service-account/) 凭据。 +默认情况下,pod 与一个服务帐户相关联,并且该服务帐户的凭证(token)被放置在该 pod 中每个容器的文件系统中,位于 `/var/run/secrets/kubernetes.io/serviceaccount/token`。 + + +如果证书包可用,它会被放入每个容器的文件系统中的 `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`,应该用它来验证 apiserver 的服务证书。 + +最后,命名空间化的 API 操作所使用的默认命名空间将被放置在每个容器的 `/var/run/secrets/kubernetes.io/serviceaccount/namespace` 文件中。 + +在 pod 中,建议连接 API 的方法是: + + - 在 pod 的 sidecar 容器中运行 `kubectl proxy`,或者以后台进程的形式运行。 + 这将把 Kubernetes API 代理到当前 pod 的 localhost 接口,所以 pod 中的所有容器中的进程都能访问它。 + - 使用 Go 客户端库,并使用 `rest.InClusterConfig()` 和 `kubernetes.NewForConfig()` 函数创建一个客户端。 + 它们会处理 apiserver 的定位和身份验证。[示例](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go) + +在每种情况下,都会使用 pod 的凭证与 apiserver 进行安全通信。
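 + +如果不想依赖客户端库,也可以在容器内直接使用这些凭证访问 API。下面是一个最简单的示意(假设容器内装有 curl;文件路径均为上文所述的标准位置): + +```shell +# 读取服务帐户的 token,并通过集群内 DNS 名称 kubernetes.default.svc 访问 apiserver +TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) +curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \ + --header "Authorization: Bearer $TOKEN" \ + https://kubernetes.default.svc/api +``` + + +## 访问集群中正在运行的服务 + +上一节介绍了如何连接 Kubernetes API 服务。本节介绍如何连接到 Kubernetes 集群上运行的其他服务。 +在 Kubernetes 中,[节点](/docs/admin/node),[pods](/docs/user-guide/pods) 和 [服务](/docs/user-guide/services) 都有自己的 IP。 +在许多情况下,集群上的节点 IP,pod IP 和某些服务 IP 将无法路由,因此无法从集群外部的计算机(例如桌面计算机)访问它们。 + + +### 连接的方法 + +有多种方式可以从集群外部连接节点、pod 和服务: + + - 通过公共 IP 访问服务。 + - 类型为 `NodePort` 或 `LoadBalancer` 的服务,集群外部可以访问。 请参阅 [服务](/docs/user-guide/services) 和 [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands/#expose) 文档。 + - 取决于您的集群环境,该服务可能仅暴露给您的公司网络,或者也可能暴露给整个互联网。 请考虑公开该服务是否安全。它是否进行自己的身份验证?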
+ - 在服务后端放置 pod。要从一组副本中访问一个特定的 pod,例如进行调试,请在 pod 上放置一个唯一的标签,然后创建一个选择此标签的新服务。 + - 在大多数情况下,应用程序开发人员不应该通过其 nodeIP 直接访问节点。 + - 使用 Proxy Verb 访问服务、node 或者 pod。 + - 在访问远程服务之前进行 apiserver 身份验证和授权。 + 如果服务不能够安全地暴露到互联网,或者服务不能获得节点 IP 端口的访问权限,或者是为了 debug,那么请使用此选项。 + - 代理可能会给一些 web 应用带来问题。 + - 只适用于 HTTP/HTTPS。 + - 更多详细信息在 [这里]。 + - 从集群中的 node 或者 pod 中访问。 + - 运行一个 pod,然后使用 [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec) 来连接 pod 里的 shell。 + 然后从 shell 中连接其它的节点、pod 和服务。 + - 有些集群可能允许您通过 ssh 连接到 node,从那您可能可以访问集群的服务。 + 这是一个非正式的方式,可能可以运行在个别的集群上。 + 浏览器和其它一些工具可能没有被安装。集群的 DNS 可能无法使用。 + + +### 发现内建服务 + +通常来说,集群中会有 kube-system 创建的一些运行的服务。 + +通过 `kubectl cluster-info` 命令获得这些服务列表: + +```shell +$ kubectl cluster-info + + Kubernetes master is running at https://104.197.5.247 + elasticsearch-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy + kibana-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kibana-logging/proxy + kube-dns is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kube-dns/proxy + grafana is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy + heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy +``` + + +这展示了访问每个服务的 proxy-verb URL。 +例如,如果集群启动了集群级别的日志(使用 Elasticsearch),并且传递合适的凭证,那么可以通过 `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/` 进行访问。日志也能通过 kubectl 代理获取,例如: +`http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/`。 +(参阅 [上面的内容](#accessing-the-cluster-api) 来获取如何使用 kubectl 代理来传递凭证) + + +#### 手动构建 apiserver 代理 URL + +如上所述,您可以使用 `kubectl cluster-info` 命令来获得服务的代理 URL。要创建包含服务端点、后缀和参数的代理 URL,只需添加到服务的代理 URL: +`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy` + +如果尚未为端口指定名称,则不必在 URL 中指定 *port_name*。 + +默认情况下,API server 使用 http 代理您的服务。要使用 https,请在服务名称前加上 `https:`: +`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`https:service_name:[port_name]`*`/proxy` + +URL 名称段支持的格式为: + +* `` - 使用 http 代理到默认或未命名的端口 +* `:` - 使用 http 代理到指定的端口 +* `https::` - 使用 https 代理到默认或未命名的端口(注意后面的冒号) +* `https::` - 使用 https 代理到指定的端口 + + +##### 示例 + + * 要访问 Elasticsearch 服务端点 `_search?q=user:kimchy`,您需要使用:`http://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=user:kimchy` + * 要访问 Elasticsearch 集群健康信息 `_cluster/health?pretty=true`,您需要使用:`https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health?pretty=true` + +```json + { + "cluster_name" : "kubernetes_logging", + "status" : "yellow", + "timed_out" : false, + "number_of_nodes" : 1, + "number_of_data_nodes" : 1, + "active_primary_shards" : 5, + "active_shards" : 5, + "relocating_shards" : 0, + "initializing_shards" : 0, + "unassigned_shards" : 5 + } +``` + + +### 使用 web 浏览器访问运行在集群上的服务 + +您可以在浏览器地址栏中输入 apiserver 代理 URL。但是: + + - Web 浏览器通常不能传递 token,因此您可能需要使用基本(密码)身份验证。Apiserver 可以配置为接受基本身份验证,但您的集群可能未进行配置。 + - 某些 Web 应用程序可能无法运行,尤其是那些使用客户端 javascript 以不知道代理路径前缀的方式构建 URL 的应用程序。 + + +## 请求重定向 + +已弃用并删除了重定向功能。请改用代理(见下文)。 + + +## 多种代理 + +使用 Kubernetes 时可能会遇到几种不同的代理: + +1. [kubectl 代理](#directly-accessing-the-rest-api): + + - 在用户的桌面或 pod 中运行 + - 代理从本地主机地址到 Kubernetes apiserver + - 客户端到代理将使用 HTTP + - 代理到 apiserver 使用 HTTPS + - 定位 apiserver + - 添加身份验证 header + + +1. 
[apiserver 代理](#discovering-builtin-services): + + - 内置于 apiserver 中 + - 将集群外部的用户连接到集群 IP,否则这些 IP 可能无法访问 + - 运行在 apiserver 进程中 + - 客户端代理使用 HTTPS(也可配置为 http) + - 代理将根据可用的信息决定使用 HTTP 或者 HTTPS 代理到目标 + - 可用于访问节点、Pod 或服务 + - 在访问服务时进行负载平衡 + + +1. [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips): + + - 运行在每个节点上 + - 代理 UDP 和 TCP + - 不能代理 HTTP + - 提供负载均衡 + - 只能用来访问服务 + + +1. 位于 apiserver 之前的 Proxy/Load-balancer: + + - 存在和实现因集群而异(例如 nginx) + - 位于所有客户和一个或多个 apiserver 之间 + - 如果有多个 apiserver,则充当负载均衡器 + + +1. 外部服务上的云负载均衡器: + + - 由一些云提供商提供(例如 AWS ELB,Google Cloud Load Balancer) + - 当 Kubernetes 服务类型为 `LoadBalancer` 时自动创建 + - 只使用 UDP/TCP + - 具体实现因云提供商而异。 + +除了前两种类型之外,Kubernetes 用户通常不需要担心任何其他问题。集群管理员通常会确保后者的正确配置。 + +{{% /capture %}} diff --git a/content/cn/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md b/content/zh/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md similarity index 100% rename from content/cn/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md rename to content/zh/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md diff --git a/content/cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md similarity index 100% rename from content/cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md rename to content/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md diff --git a/content/cn/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md b/content/zh/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md similarity index 100% rename from content/cn/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md rename to content/zh/docs/tasks/access-application-cluster/configure-cloud-provider-firewall.md diff --git a/content/zh/docs/tasks/access-application-cluster/configure-dns-cluster.md b/content/zh/docs/tasks/access-application-cluster/configure-dns-cluster.md new file mode 100644 index 0000000000000..48cc0a019a75a --- /dev/null +++ b/content/zh/docs/tasks/access-application-cluster/configure-dns-cluster.md @@ -0,0 +1,26 @@ + +--- +title: 为集群配置 DNS +weight: 120 +content_template: templates/concept +--- + +{{% capture overview %}} + +Kubernetes 提供 DNS 集群插件,大多数支持的环境默认情况下都会启用。 +{{% /capture %}} +{{% capture body %}} + +有关如何为 Kubernetes 集群配置 DNS 的详细信息,请参阅 [Kubernetes DNS 插件示例.](https://github.com/kubernetes/kubernetes/tree/release-1.5/examples/cluster-dns) + +{{% /capture %}} diff --git a/content/cn/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/zh/docs/tasks/access-application-cluster/connecting-frontend-backend.md similarity index 98% rename from content/cn/docs/tasks/access-application-cluster/connecting-frontend-backend.md rename to content/zh/docs/tasks/access-application-cluster/connecting-frontend-backend.md index 74ba22eaa49cb..0c12f2410d450 100644 --- a/content/cn/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/content/zh/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -34,7 +34,7 @@ content_template: templates/tutorial * 本任务使用 [外部负载均衡服务](/docs/tasks/access-application-cluster/create-external-load-balancer/), 所以需要对应的可支持此功能的环境。如果你的环境不能支持,你可以使用 - [NodePort](/docs/user-guide/services/#nodeport) 类型的服务代替。 + 
[NodePort](/docs/user-guide/services/#type-nodeport) 类型的服务代替。 {{% /capture %}} diff --git a/content/zh/docs/tasks/access-application-cluster/create-external-load-balancer.md b/content/zh/docs/tasks/access-application-cluster/create-external-load-balancer.md new file mode 100644 index 0000000000000..45ba1aa7e842e --- /dev/null +++ b/content/zh/docs/tasks/access-application-cluster/create-external-load-balancer.md @@ -0,0 +1,322 @@ + +--- +title: 创建一个外部负载均衡器 +content_template: templates/task +weight: 80 +--- + + +{{% capture overview %}} + + +本文展示如何创建一个外部负载均衡器。 + +创建服务时,您可以选择自动创建云网络负载均衡器。这提供了一个外部可访问的 IP 地址,可将流量分配到集群节点上的正确端口上 _假设集群在支持的环境中运行,并配置了正确的云负载平衡器提供商包_。 + +有关如何配置和使用 Ingress 资源以为服务提供外部可访问的 URL、负载均衡流量、终止 SSL 等功能,请查看 [Ingress](/docs/concepts/services-networking/ingress/) 文档。 + +{{% /capture %}} + +{{% capture prerequisites %}} + +* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + + +## 配置文件 + +要创建外部负载均衡器,请将以下内容添加到 [服务配置文件](/docs/concepts/services-networking/service/#type-loadbalancer): + +```json + "type": "LoadBalancer" +``` + + +您的配置文件可能会如下所示: + +```json + { + "kind": "Service", + "apiVersion": "v1", + "metadata": { + "name": "example-service" + }, + "spec": { + "ports": [{ + "port": 8765, + "targetPort": 9376 + }], + "selector": { + "app": "example" + }, + "type": "LoadBalancer" + } + } +``` + + +## 使用 kubectl + +您也可以使用 `kubectl expose` 命令及其 `--type=LoadBalancer` 参数创建服务: + +```bash +kubectl expose rc example --port=8765 --target-port=9376 \ + --name=example-service --type=LoadBalancer +``` + + +此命令通过使用与引用资源(在上面的示例的情况下,名为 `example` 的 replication controller)相同的选择器来创建一个新的服务。 + +更多信息(包括更多的可选参数),请参阅 [`kubectl expose` reference](/docs/reference/generated/kubectl/kubectl-commands/#expose)。 + + +## 找到您的 IP 地址 + +您可以通过 `kubectl` 获取服务信息,找到为您的服务创建的 IP 地址: + +```bash +kubectl describe services example-service +``` + + +这将获得如下输出: + +```bash + Name: example-service + Namespace: default + Labels: + Annotations: + Selector: app=example + Type: LoadBalancer + IP: 10.67.252.103 + LoadBalancer Ingress: 123.45.678.9 + Port: 80/TCP + NodePort: 32445/TCP + Endpoints: 10.64.0.4:80,10.64.1.5:80,10.64.2.4:80 + Session Affinity: None + Events: +``` + + +IP 地址列在 `LoadBalancer Ingress` 旁边。 + +{{< note >}} + +**注意:** 如果您在 Minikube 上运行服务,您可以通过以下命令找到分配的 IP 地址和端口: +{{< /note >}} +```bash +minikube service example-service --url +``` + + +## 保留客户端源 IP + +由于此功能的实现,目标容器中看到的源 IP 将 *不是客户端的原始源 IP*。要启用保留客户端 IP,可以在服务的 spec 中配置以下字段(支持 GCE/Google Kubernetes Engine 环境): + +* `service.spec.externalTrafficPolicy` - 表示此服务是否希望将外部流量路由到节点本地或集群范围的端点。有两个可用选项:"Cluster"(默认)和 "Local"。"Cluster" 隐藏了客户端源 IP,可能导致第二跳到另一个节点,但具有良好的整体负载分布。 "Local" 保留客户端源 IP 并避免 LoadBalancer 和 NodePort 类型服务的第二跳,但存在潜在的不均衡流量传播风险。 +* `service.spec.healthCheckNodePort` - 指定服务的 healthcheck nodePort(数字端口号)。如果未指定,则 serviceCheckNodePort 由服务 API 后端使用已分配的 nodePort 创建。如果客户端指定,它将使用客户端指定的 nodePort 值。仅当 type 设置为 "LoadBalancer" 并且 externalTrafficPolicy 设置为 "Local" 时才生效。 + +可以通过在服务的配置文件中将 `externalTrafficPolicy` 设置为 "Local" 来激活此功能。 + +```json + { + "kind": "Service", + "apiVersion": "v1", + "metadata": { + "name": "example-service" + }, + "spec": { + "ports": [{ + "port": 8765, + "targetPort": 9376 + }], + "selector": { + "app": "example" + }, + "type": "LoadBalancer", + "externalTrafficPolicy": "Local" + } + } +``` + + +### 特性可用性 + +| k8s 版本 | 特性支持 | +| :---------: |:-----------:| +| 1.7+ | 支持完整的 API 字段 | +| 1.5 - 1.6 | 支持 Beta Annotation | +| <1.5 | 不支持 | + +您可以在下面找到已弃用的 Beta annotation,在稳定版本前使用它来使用该功能。较新的 Kubernetes 
版本可能会在 v1.7 之后停止支持这些功能。 +请更新现有应用程序以直接使用这些字段。 + +* `service.beta.kubernetes.io/external-traffic` annotation <-> `service.spec.externalTrafficPolicy` 字段 +* `service.beta.kubernetes.io/healthcheck-nodeport` annotation <-> `service.spec.healthCheckNodePort` 字段 + +`service.beta.kubernetes.io/external-traffic` annotation 与 `service.spec.externalTrafficPolicy` 字段相比拥有一组不同的值。值匹配如下: + +* "OnlyLocal" annotation <-> "Local" 字段 +* "Global" annotation <-> "Cluster" 字段 + +**请注意,此功能目前尚未实现在所有云提供商/环境中。** + + +已知的问题: + +* AWS: [kubernetes/kubernetes#35758](https://github.com/kubernetes/kubernetes/issues/35758) +* Weave-Net: [weaveworks/weave/#2924](https://github.com/weaveworks/weave/issues/2924) + +{{% /capture %}} + +{{% capture discussion %}} + + +## 外部负载均衡器提供商 + +请务必注意,此功能的数据路径由 Kubernetes 集群外部的负载均衡器提供。 + +当服务类型设置为 `LoadBalancer` 时,Kubernetes 向集群中的 pod 提供与 `type=` 等效的功能,并通过使用 Kubernetes pod 的条目对负载均衡器(从外部到 Kubernetes)进行编程来扩展它。 Kubernetes 服务控制器自动创建外部负载均衡器,健康检查(如果需要),防火墙规则(如果需要),并获取云提供商分配的外部 IP 并将其填充到服务对象中。 + + +## 保留源 IP 时的注意事项和限制 + +GCE/AWS 负载均衡器不为其目标池提供权重。对于旧的 LB kube-proxy 规则来说,这不是一个问题,它可以在所有端点之间正确平衡。 + +使用新功能,外部流量不会在 pod 之间平均负载,而是在节点级别平均负载(因为 GCE/AWS 和其他外部 LB 实现无法指定每个节点的权重,因此它们的平衡跨所有目标节点,并忽略每个节点上的 pod 数量)。 + +但是,我们可以声明,对于 NumServicePods << NumNodes 或 NumServicePods >> NumNodes 时,即使没有权重,也会看到接近相等的分布。 + +一旦外部负载平衡器提供权重,就可以将此功能添加到 LB 编程路径中。 +*未来工作:1.4 版本不提供权重支持,但可能会在将来版本中添加* + +内部 pod 到 pod 的流量应该与 ClusterIP 服务类似,所有 pod 的概率相同。 + +{{% /capture %}} + + diff --git a/content/cn/docs/tasks/access-application-cluster/frontend.yaml b/content/zh/docs/tasks/access-application-cluster/frontend.yaml similarity index 100% rename from content/cn/docs/tasks/access-application-cluster/frontend.yaml rename to content/zh/docs/tasks/access-application-cluster/frontend.yaml diff --git a/content/cn/docs/tasks/access-application-cluster/frontend/frontend.conf b/content/zh/docs/tasks/access-application-cluster/frontend/frontend.conf similarity index 100% rename from content/cn/docs/tasks/access-application-cluster/frontend/frontend.conf rename to content/zh/docs/tasks/access-application-cluster/frontend/frontend.conf diff --git a/content/cn/docs/tasks/access-application-cluster/hello-service.yaml b/content/zh/docs/tasks/access-application-cluster/hello-service.yaml similarity index 100% rename from content/cn/docs/tasks/access-application-cluster/hello-service.yaml rename to content/zh/docs/tasks/access-application-cluster/hello-service.yaml diff --git a/content/cn/docs/tasks/access-application-cluster/hello.yaml b/content/zh/docs/tasks/access-application-cluster/hello.yaml similarity index 100% rename from content/cn/docs/tasks/access-application-cluster/hello.yaml rename to content/zh/docs/tasks/access-application-cluster/hello.yaml diff --git a/content/zh/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/zh/docs/tasks/access-application-cluster/list-all-running-container-images.md new file mode 100644 index 0000000000000..a376b4c0980d7 --- /dev/null +++ b/content/zh/docs/tasks/access-application-cluster/list-all-running-container-images.md @@ -0,0 +1,193 @@ + +--- +title: 列出集群中所有运行容器的镜像 +content_template: templates/task +weight: 100 +--- + +{{% capture overview %}} + + +本文展示如何使用 kubectl 来列出集群中所有运行 pod 的容器的镜像 + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + + +在本练习中,您将使用 kubectl 来获取集群中运行的所有 Pod,并格式化输出来提取每个 pod 中的容器列表。 + + +## 列出所有命名空间下的所有容器 + +- 使用 `kubectl get pods 
--all-namespaces` 获取所有命名空间下的所有 Pod
+- 使用 `-o jsonpath={..image}` 来格式化输出,以仅包含容器镜像名称。
+  这将以递归方式从返回的 json 中解析出 `image` 字段。
+  - 参阅 [jsonpath reference](/docs/user-guide/jsonpath/) 来获取更多关于如何使用 jsonpath 的信息。
+- 使用标准工具来处理和格式化输出:`tr`、`sort`、`uniq`
+  - 使用 `tr` 将空格替换为换行符
+  - 使用 `sort` 来对结果进行排序
+  - 使用 `uniq` 来聚合镜像计数
+
+```sh
+kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
+tr -s '[[:space:]]' '\n' |\
+sort |\
+uniq -c
+```
+
+
+上面的命令将递归获取所有返回项目中名为 `image` 的字段。
+
+作为替代方案,可以使用 Pod 的镜像字段的绝对路径。这确保即使在字段名称重复的情况下也能检索到正确的字段,例如,特定项目中的许多字段都称为 `name`:
+
+```sh
+kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}"
+```
+
+
+这个 jsonpath 解释如下:
+
+- `.items[*]`: 对于每个返回的值
+- `.spec`: 获取 spec
+- `.containers[*]`: 对于每个容器
+- `.image`: 获取镜像
+
+{{< note >}}
+
+**注意:** 按名字获取单个 Pod 时,例如 `kubectl get pod nginx`,路径的 `.items[*]` 部分应该省略,因为返回的是单个 Pod 而不是一个项目列表。
+{{< /note >}}
+
+
+## 列出 Pod 中的容器
+
+可以使用 `range` 操作进一步控制格式化,以单独操作每个元素。
+
+```sh
+kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
+sort
+```
+
+
+## 列出以 label 过滤后的 Pod 的所有容器
+
+要获取匹配特定标签的 Pod,请使用 `-l` 参数。以下命令仅匹配带有标签 `app=nginx` 的 Pod。
+
+```sh
+kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=nginx
+```
+
+
+## 列出以命名空间过滤后的 Pod 的所有容器
+
+要获取特定命名空间下的 Pod,请使用 `--namespace` 参数。以下命令仅匹配 `kube-system` 命名空间下的 Pod。
+
+```sh
+kubectl get pods --namespace kube-system -o jsonpath="{..image}"
+```
+
+
+## 使用 go-template 代替 jsonpath 来获取容器
+
+作为 jsonpath 的替代,kubectl 支持使用 [go-templates](https://golang.org/pkg/text/template/) 来格式化输出:
+
+
+```sh
+kubectl get pods --all-namespaces -o go-template --template="{{range .items}}{{range .spec.containers}}{{.image}} {{end}}{{end}}"
+```
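
+
+如果还想同时看到每个 Pod 所属的命名空间和名称,也可以尝试 `custom-columns` 输出格式(下面是一个示例,列名可以自行选择):
+
+```sh
+kubectl get pods --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,IMAGES:.spec.containers[*].image'
+```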

+
+
+{{% /capture %}}
+
+{{% capture discussion %}}
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+### 参考
+
+* [Jsonpath](/docs/user-guide/jsonpath/) 参考指南
+* [Go template](https://golang.org/pkg/text/template/) 参考指南
+
+{{% /capture %}}
+
+
diff --git a/content/zh/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md b/content/zh/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md
new file mode 100644
index 0000000000000..3308069220a81
--- /dev/null
+++ b/content/zh/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md
@@ -0,0 +1,190 @@
+
+---
+title: 提供对集群中应用程序的负载均衡访问
+content_template: templates/tutorial
+weight: 50
+---
+
+{{% capture overview %}}
+
+
+本文展示如何创建一个 Kubernetes 服务对象,来提供负载均衡入口以访问集群内正在运行的应用程序。
+
+{{% /capture %}}
+
+
+{{% capture prerequisites %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+{{% /capture %}}
+
+
+{{% capture objectives %}}
+
+
+* 运行两个 Hello World 应用示例
+* 创建一个服务对象
+* 使用这个服务对象来访问正在运行的应用
+
+{{% /capture %}}
+
+
+{{% capture lessoncontent %}}
+
+
+## 为在两个 pod 中运行的应用程序创建服务
+
+1. 在您的集群中运行一个 Hello World 应用:
+
+   ```
+   kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
+   ```
+
+
+1. 列出运行 Hello World 应用的 pod:
+
+   ```
+   kubectl get pods --selector="run=load-balancer-example"
+   ```
+
+
+   输出应类似于:
+
+   ```
+   NAME                           READY     STATUS    RESTARTS   AGE
+   hello-world-2189936611-8fyp0   1/1       Running   0          6m
+   hello-world-2189936611-9isq8   1/1       Running   0          6m
+   ```
+
+
+1. 创建一个服务对象来暴露这个 deployment:
+
+   ```
+   kubectl expose deployment <your-deployment-name> --type=NodePort --name=example-service
+   ```
+
+
+   这里的 `<your-deployment-name>` 是您的 deployment 的名称。
+
+1. 显示您服务的 IP 地址:
+
+   ```
+   kubectl get services example-service
+   ```
+
+
+   输出展示了您服务的内部和外部 IP 地址。如果外部 IP 地址显示为 `<pending>`,那么您需要重复运行以上命令。
+
+   {{< note >}}
+
+   **注意:** 如果您使用 Minikube,那么您将不会获得外部 IP 地址。外部 IP 地址将保持 pending 状态。
+   {{< /note >}}
+
+       NAME              CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
+       example-service   10.0.0.160   <pending>     8080/TCP   40s
+
+
+1. 使用您的服务对象来访问这个 Hello World 应用:
+
+       curl <your-external-ip-address>:8080
+
+
+   这里的 `<your-external-ip-address>` 是您服务的外部 IP 地址。
+
+   输出是来自应用的 hello 消息:
+
+       Hello Kubernetes!
+
+   {{< note >}}
+
+   **注意:** 如果您使用 Minikube,输入以下命令:
+   {{< /note >}}
+
+       kubectl cluster-info
+       kubectl describe services example-service
+
+
+   输出将展示您的 Minikube 节点的 IP 地址和您服务的 NodePort 值。然后输入以下命令来访问这个 Hello World 应用:
+
+       curl <minikube-node-ip>:<service-node-port>
+
+
+   这里的 `<minikube-node-ip>` 是您的 Minikube 节点的 IP 地址,`<service-node-port>` 是您服务的 NodePort 值。
+
+
+## 使用服务配置文件
+
+作为 `kubectl expose` 的替代方法,您可以使用 [服务配置文件](/docs/concepts/services-networking/service/) 来创建服务。
+
+
+{{% /capture %}}
+
+
+{{% capture whatsnext %}}
+
+
+学习更多关于如何 [通过服务连接应用](/docs/concepts/services-networking/connect-applications-service/)。
+{{% /capture %}}
+
+
+
diff --git a/content/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
new file mode 100644
index 0000000000000..6bccf05cda7d5
--- /dev/null
+++ b/content/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
@@ -0,0 +1,222 @@
+
+---
+title: 使用端口转发来访问集群中的应用
+content_template: templates/task
+weight: 40
+---
+
+{{% capture overview %}}
+
+
+本文展示如何使用 `kubectl port-forward` 连接到在 Kubernetes 集群中运行的 Redis 服务。这种类型的连接对数据库调试很有用。
+{{% /capture %}}
+
+
+{{% capture prerequisites %}}
+
+* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+
+* 安装 [redis-cli](http://redis.io/topics/rediscli)。
+
+{{% /capture %}}
+
+
+{{% capture steps %}}
+
+
+## 创建 Redis deployment 和服务
+
+1. 创建一个 Redis deployment:
+
+    kubectl create -f https://k8s.io/docs/tutorials/stateless-application/guestbook/redis-master-deployment.yaml
+
+
+   查看输出,验证 deployment 是否创建成功:
+
+    deployment "redis-master" created
+
+
+   当 pod 就绪后,您将看到:
+
+    kubectl get pods
+
+    NAME                            READY     STATUS    RESTARTS   AGE
+    redis-master-765d459796-258hz   1/1       Running   0          50s
+
+    kubectl get deployment
+
+    NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
+    redis-master   1         1         1            1           55s
+
+    kubectl get rs
+
+    NAME                      DESIRED   CURRENT   READY   AGE
+    redis-master-765d459796   1         1         1       1m
+
+
+
+2. 创建一个 Redis 服务:
+
+    kubectl create -f https://k8s.io/docs/tutorials/stateless-application/guestbook/redis-master-service.yaml
+
+
+   查看输出,验证服务是否创建成功:
+
+    service "redis-master" created
+
+
+   检查服务是否已创建:
+
+    kubectl get svc | grep redis
+
+    NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
+    redis-master   ClusterIP   10.0.0.213   <none>        6379/TCP   27s
+
+
+3. 验证 Redis 服务是否运行在 pod 中并且监听 6379 端口:
+
+
+    kubectl get pods redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
+
+
+
+   输出应该显示端口:
+
+    6379
+
+
+
+## 转发一个本地端口到 pod 端口
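
+
+提示:如果本地 6379 端口已被其它进程占用,也可以把 Pod 的 6379 端口转发到任意空闲的本地端口。下面是一个假设使用本地 7000 端口的示例:
+
+    kubectl port-forward redis-master-765d459796-258hz 7000:6379
+    redis-cli -p 7000 ping
+
+
+1. 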
从 Kubernetes v1.10 开始,`kubectl port-forward` 允许使用资源名称(例如服务名称)来选择匹配的 pod 来进行端口转发。 + + kubectl port-forward redis-master-765d459796-258hz 6379:6379 + + + 这相当于 + + kubectl port-forward pods/redis-master-765d459796-258hz 6379:6379 + + + 或者 + + kubectl port-forward deployment/redis-master 6379:6379 + + + 或者 + + kubectl port-forward rs/redis-master 6379:6379 + + + 或者 + + kubectl port-forward svc/redis-master 6379:6379 + + + 以上所有命令都应该有效。输出应该类似于: + + I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:6379 -> 6379 + I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:6379 -> 6379 + + +2. 启动 Redis 命令行接口: + + redis-cli + + +3. 在 Redis 命令行提示符下,输入 `ping` 命令: + + 127.0.0.1:6379>ping + + + 成功的 ping 请求应该返回 PONG。 + +{{% /capture %}} + + +{{% capture discussion %}} + + +## 讨论 + +与本地 6379 端口建立的连接将转发到运行 Redis 服务器的 pod 的 6379 端口。通过此连接,您可以使用本地工作站来调试在 pod 中运行的数据库。 + +{{< warning >}} + +**警告:** 由于已知的限制,目前的端口转发仅适用于 TCP 协议。 +在 [issue 47862](https://github.com/kubernetes/kubernetes/issues/47862) 中正在跟踪对 UDP 协议的支持。 +{{< /warning >}} + +{{% /capture %}} + + +{{% capture whatsnext %}} + +学习更多关于 [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward)。 +{{% /capture %}} + + + diff --git a/content/cn/docs/tasks/access-application-cluster/redis-master.yaml b/content/zh/docs/tasks/access-application-cluster/redis-master.yaml similarity index 100% rename from content/cn/docs/tasks/access-application-cluster/redis-master.yaml rename to content/zh/docs/tasks/access-application-cluster/redis-master.yaml diff --git a/content/zh/docs/tasks/access-application-cluster/service-access-application-cluster.md b/content/zh/docs/tasks/access-application-cluster/service-access-application-cluster.md new file mode 100644 index 0000000000000..908280e29d3d4 --- /dev/null +++ b/content/zh/docs/tasks/access-application-cluster/service-access-application-cluster.md @@ -0,0 +1,225 @@ + +--- +title: 使用服务来访问集群中的应用 +content_template: templates/tutorial +weight: 60 +--- + +{{% capture overview %}} + + +本文展示如何创建一个 Kubernetes 服务对象,能让外部客户端访问在集群中运行的应用。该服务为一个应用的两个运行实例提供负载均衡。 + +{{% /capture %}} + + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + + +{{% capture objectives %}} + + +* 运行 Hello World 应用的两个实例。 +* 创建一个服务对象来暴露一个 node port。 +* 使用服务对象来访问正在运行的应用。 + +{{% /capture %}} + + +{{% capture lessoncontent %}} + + +## 为运行在两个 pod 中的应用创建一个服务: + +1. 在您的集群中运行一个 Hello World 应用: + ```shell + kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080 + ``` + + 上面的命令创建一个 [Deployment](/docs/concepts/workloads/controllers/deployment/) 对象和一个关联的 [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) 对象。这个 ReplicaSet 有两个 [Pod](/docs/concepts/workloads/pods/pod/),每个 Pod 都运行着 Hello World 应用。 + + +1. 展示 Deployment 的信息: + ```shell + kubectl get deployments hello-world + kubectl describe deployments hello-world + ``` + + +1. 展示您的 ReplicaSet 对象信息: + ```shell + kubectl get replicasets + kubectl describe replicasets + ``` + + +1. 创建一个服务对象来暴露 deployment: + ```shell + kubectl expose deployment hello-world --type=NodePort --name=example-service + ``` + + +1. 
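服务创建后,可以先用下面的简单查询确认服务对象已经生成(示例):
+   ```shell
+   kubectl get services example-service
+   ```
+
+   确认存在后,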
展示服务信息:
+   ```shell
+   kubectl describe services example-service
+   ```
+
+   输出类似于:
+   ```shell
+   Name:                   example-service
+   Namespace:              default
+   Labels:                 run=load-balancer-example
+   Annotations:            <none>
+   Selector:               run=load-balancer-example
+   Type:                   NodePort
+   IP:                     10.32.0.16
+   Port:                   <unset> 8080/TCP
+   TargetPort:             8080/TCP
+   NodePort:               <unset> 31496/TCP
+   Endpoints:              10.200.1.4:8080,10.200.2.5:8080
+   Session Affinity:       None
+   Events:                 <none>
+   ```
+
+   注意服务中的 NodePort 值。例如在上面的输出中,NodePort 是 31496。
+
+
+1. 列出运行 Hello World 应用的 pod:
+   ```shell
+   kubectl get pods --selector="run=load-balancer-example" --output=wide
+   ```
+
+   输出类似于:
+   ```shell
+   NAME                           READY   STATUS    ...  IP           NODE
+   hello-world-2895499144-bsbk5   1/1     Running   ...  10.200.1.4   worker1
+   hello-world-2895499144-m1pwt   1/1     Running   ...  10.200.2.5   worker2
+   ```
+
+1. 获取运行 Hello World 应用的 pod 所在的其中一个节点的公共 IP 地址。如何获得此地址取决于您设置集群的方式。
+   例如,如果您使用的是 Minikube,则可以通过运行 `kubectl cluster-info` 来查看节点地址。
+   如果您使用的是 Google Compute Engine 实例,则可以使用 `gcloud compute instances list` 命令查看节点的公共地址。
+
+1. 在您选择的节点上,创建一个防火墙规则以放行 node port 上的 TCP 流量。
+   例如,如果您的服务的 NodePort 值为 31496,请创建一个防火墙规则以允许 31496 端口上的 TCP 流量。
+   不同的云提供商提供了不同的防火墙规则配置方法。
+
+1. 使用节点地址和 node port 来访问 Hello World 应用:
+   ```shell
+   curl http://<public-node-ip>:<node-port>
+   ```
+
+   这里的 `<public-node-ip>` 是您节点的公共 IP 地址,`<node-port>` 是您服务的 NodePort 值。
+   请求成功的响应是一个 hello 消息:
+   ```shell
+   Hello Kubernetes!
+   ```
+
+
+## 使用服务配置文件
+
+作为 `kubectl expose` 的替代方法,您可以使用 [服务配置文件](/docs/concepts/services-networking/service/) 来创建服务。
+
+{{% /capture %}}
+
+
+{{% capture cleanup %}}
+
+
+想要删除服务,输入以下命令:
+
+    kubectl delete services example-service
+
+
+想要删除运行 Hello World 应用的 Deployment、ReplicaSet 和 Pod,输入以下命令:
+
+    kubectl delete deployment hello-world
+
+{{% /capture %}}
+
+
+{{% capture whatsnext %}}
+
+
+学习更多关于如何 [通过服务连接应用](/docs/concepts/services-networking/connect-applications-service/)。
+{{% /capture %}}
diff --git a/content/cn/docs/tasks/access-application-cluster/two-container-pod.yaml b/content/zh/docs/tasks/access-application-cluster/two-container-pod.yaml
similarity index 100%
rename from content/cn/docs/tasks/access-application-cluster/two-container-pod.yaml
rename to content/zh/docs/tasks/access-application-cluster/two-container-pod.yaml
diff --git a/content/zh/docs/tasks/access-kubernetes-api/setup-extension-api-server.md b/content/zh/docs/tasks/access-kubernetes-api/setup-extension-api-server.md
new file mode 100644
index 0000000000000..f7b3fff81a32d
--- /dev/null
+++ b/content/zh/docs/tasks/access-kubernetes-api/setup-extension-api-server.md
@@ -0,0 +1,112 @@
+
+---
+title: 设置一个扩展的 API server
+reviewers:
+- lavalamp
+- cheftako
+- chenopis
+content_template: templates/task
+weight: 15
+---
+
+{{% capture overview %}}
+
+
+设置一个扩展的 API server 来使用聚合层,可以让 Kubernetes apiserver 通过其它 API 进行扩展,这些 API 不是核心 Kubernetes API 的一部分。
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+
+* 您需要拥有一个运行的 Kubernetes 集群。
+* 您必须 [配置聚合层](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) 并且启用 apiserver 的相关参数。
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+
+## 设置一个扩展的 api-server 来使用聚合层
+
+以下步骤 *从较高的层面* 描述如何设置一个扩展的 apiserver。无论您使用的是 YAML 配置还是使用 API,这些步骤都适用。目前我们正在尝试区分出两者的区别。有关使用 YAML 配置的具体示例,您可以在 Kubernetes 库中查看 [sample-apiserver](https://github.com/kubernetes/sample-apiserver/blob/master/README.md)。
+
+或者,您可以使用现有的第三方解决方案,例如 [apiserver-builder](https://github.com/Kubernetes-incubator/apiserver-builder/blob/master/README.md),它将生成框架并自动执行以下所有步骤。
+
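+
+动手之前,可以先用一个简单的检查确认集群中已经可以访问 APIService 资源(与下面第一步相对应的示例):
+
+```shell
+kubectl get apiservices
+```
+
+
+1. 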
确保启用了 APIService API(检查 `--runtime-config`)。默认应该是启用的,除非被特意关闭了。 +1. 您可能需要制定一个 RBAC 规则,以允许您添加 APIService 对象,或让您的集群管理员创建一个。(由于 API 扩展会影响整个集群,因此不建议在实时集群中对 API 扩展进行测试/开发/调试) +1. 创建 Kubernetes 命名空间,扩展的 api-service 将运行在该命名空间中。 +1. 创建(或获取)用来签署服务器证书的 CA 证书,扩展 api-server 中将使用该证书做 HTTPS 连接。 +1. 为 api-server 创建一个服务端的证书(或秘钥)以使用 HTTPS。这个证书应该由上述的 CA 签署。同时应该还要有一个 Kube DNS 名称的 CN,这是从 Kubernetes 服务派生而来的,格式为 `..svc`。 +1. 使用命名空间中的证书(或秘钥)创建一个 Kubernetes secret。 +1. 为扩展 api-server 创建一个 Kubernetes deployment,并确保以卷的方式挂载了 secret。它应该包含对扩展 api-server 镜像的引用。Deployment 也应该在同一个命名空间中。 +1. 确保您的扩展 apiserver 从该卷中加载了那些证书,并在 HTTPS 握手过程中使用它们。 +1. 在您的命令空间中创建一个 Kubernetes service account。 +1. 为资源允许的操作创建 Kubernetes 集群角色。 +1. 以您命令空间中的 service account 创建一个 Kubernetes 集群角色绑定,绑定到您刚创建的角色上。 +1. 以您命令空间中的 service account 创建一个 Kubernetes 集群角色绑定,绑定到 `system:auth-delegator` 集群角色,以将 auth 决策委派给 Kubernetes 核心 API 服务器。 +1. 以您命令空间中的 service account 创建一个 Kubernetes 集群角色绑定,绑定到 `extension-apiserver-authentication-reader` 角色。这将让您的扩展 api-server 能够访问 `extension-apiserver-authentication` configmap。 +1. 创建一个 Kubernetes apiservice。上述的 CA 证书应该使用 base64 编码,剥离新行并用作 apiservice 中的 spec.caBundle。这不应该是命名空间化的。如果使用了 [kube-aggregator API](https://github.com/kubernetes/kube-aggregator/),那么只需要传入 PEM 编码的 CA 绑定,因为 base 64 编码已经完成了。 +1. 使用 kubectl 来获得您的资源。它应该返回 "找不到资源"。这意味着一切正常,但您目前还没有创建该资源类型的对象。 + +{{% /capture %}} + +{{% capture whatsnext %}} + + +* 如果你还未配置,请 [配置聚合层](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) 并启用 apiserver 的相关参数。 +* 高级概述,请参阅 [使用聚合层扩展 Kubernetes API](/docs/concepts/api-extension/apiserver-aggregation)。 +* 了解如何 [使用 Custom Resource Definition 扩展 Kubernetes API](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)。 + +{{% /capture %}} + + + diff --git a/content/cn/docs/tasks/administer-cluster/access-cluster-services.md b/content/zh/docs/tasks/administer-cluster/access-cluster-services.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/access-cluster-services.md rename to content/zh/docs/tasks/administer-cluster/access-cluster-services.md diff --git a/content/cn/docs/tasks/administer-cluster/apply-resource-quota-limit.md b/content/zh/docs/tasks/administer-cluster/apply-resource-quota-limit.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/apply-resource-quota-limit.md rename to content/zh/docs/tasks/administer-cluster/apply-resource-quota-limit.md diff --git a/content/cn/docs/tasks/administer-cluster/calico-network-policy.md b/content/zh/docs/tasks/administer-cluster/calico-network-policy.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/calico-network-policy.md rename to content/zh/docs/tasks/administer-cluster/calico-network-policy.md diff --git a/content/cn/docs/tasks/administer-cluster/change-default-storage-class.md b/content/zh/docs/tasks/administer-cluster/change-default-storage-class.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/change-default-storage-class.md rename to content/zh/docs/tasks/administer-cluster/change-default-storage-class.md diff --git a/content/cn/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/zh/docs/tasks/administer-cluster/change-pv-reclaim-policy.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/change-pv-reclaim-policy.md rename to content/zh/docs/tasks/administer-cluster/change-pv-reclaim-policy.md diff --git a/content/cn/docs/tasks/administer-cluster/cluster-management.md 
b/content/zh/docs/tasks/administer-cluster/cluster-management.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cluster-management.md rename to content/zh/docs/tasks/administer-cluster/cluster-management.md diff --git a/content/cn/docs/tasks/administer-cluster/cpu-constraints-pod-2.yaml b/content/zh/docs/tasks/administer-cluster/cpu-constraints-pod-2.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-constraints-pod-2.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-constraints-pod-2.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-constraints-pod-3.yaml b/content/zh/docs/tasks/administer-cluster/cpu-constraints-pod-3.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-constraints-pod-3.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-constraints-pod-3.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-constraints-pod-4.yaml b/content/zh/docs/tasks/administer-cluster/cpu-constraints-pod-4.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-constraints-pod-4.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-constraints-pod-4.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-constraints-pod.yaml b/content/zh/docs/tasks/administer-cluster/cpu-constraints-pod.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-constraints-pod.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-constraints-pod.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-constraints.yaml b/content/zh/docs/tasks/administer-cluster/cpu-constraints.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-constraints.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-constraints.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-defaults-pod-2.yaml b/content/zh/docs/tasks/administer-cluster/cpu-defaults-pod-2.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-defaults-pod-2.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-defaults-pod-2.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-defaults-pod-3.yaml b/content/zh/docs/tasks/administer-cluster/cpu-defaults-pod-3.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-defaults-pod-3.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-defaults-pod-3.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-defaults-pod.yaml b/content/zh/docs/tasks/administer-cluster/cpu-defaults-pod.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-defaults-pod.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-defaults-pod.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-defaults.yaml b/content/zh/docs/tasks/administer-cluster/cpu-defaults.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-defaults.yaml rename to content/zh/docs/tasks/administer-cluster/cpu-defaults.yaml diff --git a/content/cn/docs/tasks/administer-cluster/cpu-management-policies.md b/content/zh/docs/tasks/administer-cluster/cpu-management-policies.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/cpu-management-policies.md rename to content/zh/docs/tasks/administer-cluster/cpu-management-policies.md diff --git a/content/cn/docs/tasks/administer-cluster/cpu-memory-limit.md 
b/content/zh/docs/tasks/administer-cluster/cpu-memory-limit.md similarity index 99% rename from content/cn/docs/tasks/administer-cluster/cpu-memory-limit.md rename to content/zh/docs/tasks/administer-cluster/cpu-memory-limit.md index d867fc14b720f..3988b9b9c8fd0 100644 --- a/content/cn/docs/tasks/administer-cluster/cpu-memory-limit.md +++ b/content/zh/docs/tasks/administer-cluster/cpu-memory-limit.md @@ -195,7 +195,7 @@ spec: 注意到这个 Pod 显式地指定了资源 *limits* 和 *requests*,所以它不会使用该 Namespace 的默认值。 -注意:在物理节点上默认安装的 Kubernetes 集群中,CPU 资源的 *limits* 是被强制使用的,该 Kubernetes 集群运行容器,除非管理员在部署 kublet 时使用了如下标志: +注意:在物理节点上默认安装的 Kubernetes 集群中,CPU 资源的 *limits* 是被强制使用的,该 Kubernetes 集群运行容器,除非管理员在部署 kubelet 时使用了如下标志: ```shell $ kubelet --help diff --git a/content/cn/docs/tasks/administer-cluster/declare-network-policy.md b/content/zh/docs/tasks/administer-cluster/declare-network-policy.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/declare-network-policy.md rename to content/zh/docs/tasks/administer-cluster/declare-network-policy.md diff --git a/content/cn/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/dns-custom-nameservers.md rename to content/zh/docs/tasks/administer-cluster/dns-custom-nameservers.md diff --git a/content/cn/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml b/content/zh/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml rename to content/zh/docs/tasks/administer-cluster/dns-horizontal-autoscaler.yaml diff --git a/content/zh/docs/tasks/administer-cluster/encrypt-data.md b/content/zh/docs/tasks/administer-cluster/encrypt-data.md new file mode 100644 index 0000000000000..c454ec9945251 --- /dev/null +++ b/content/zh/docs/tasks/administer-cluster/encrypt-data.md @@ -0,0 +1,323 @@ + +--- +reviewers: +- smarterclayton +title: 加密静态 Secret 数据 +content_template: templates/task +--- + +{{% capture overview %}} + +本文展示如何启用和配置静态 Secret 数据的加密 +{{% /capture %}} + +{{% capture prerequisites %}} + +* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + + +* 需要 Kubernetes 1.7.0 或者更高版本 + +* 需要 etcd v3 或者更高版本 + +* 静态数据加密在 1.7.0 中仍然是 alpha 版本,这意味着它可能会在没有通知的情况下进行更改。在升级到 1.8.0 之前,用户可能需要解密他们的数据。 + +{{% /capture %}} + +{{< toc >}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + + +## 配置并确定是否已启用静态数据加密 + +`kube-apiserver` 的参数 `--experimental-encryption-provider-config` 控制 API 数据在 etcd 中的加密方式。 +下面提供一个配置示例。 + +## 理解静态数据加密 + +```yaml +kind: EncryptionConfig +apiVersion: v1 +resources: + - resources: + - secrets + providers: + - identity: {} + - aesgcm: + keys: + - name: key1 + secret: c2VjcmV0IGlzIHNlY3VyZQ== + - name: key2 + secret: dGhpcyBpcyBwYXNzd29yZA== + - aescbc: + keys: + - name: key1 + secret: c2VjcmV0IGlzIHNlY3VyZQ== + - name: key2 + secret: dGhpcyBpcyBwYXNzd29yZA== + - secretbox: + keys: + - name: key1 + secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY= +``` + + +每个 `resources` 数组项目是一个单独的完整的配置。 `resources.resources` 字段是要加密的 Kubernetes 资源名称(`resource` 或 `resource.group`)的数组。 +`providers` 数组是可能的加密 provider 的有序列表。每个条目只能指定一个 provider 类型(可以是 `identity` 或 `aescbc`,但不能在同一个项目中同时指定)。 + +列表中的第一个提供者用于加密进入存储的资源。当从存储器读取资源时,与存储的数据匹配的所有提供者将尝试按顺序解密数据。 +如果由于格式或密钥不匹配而导致提供者无法读取存储的数据,则会返回一个错误,以防止客户端访问该资源。 + 
+**重要:** 如果通过加密配置无法读取资源(因为密钥已更改),唯一的方法是直接从基础 etcd 中删除该密钥。任何尝试读取资源的调用将会失败,直到它被删除或提供有效的解密密钥。 + +### Providers: + + +名称 | 加密类型 | 强度 | 速度 | 密钥长度 | 其它事项 +-----|------------|----------|-------|------------|--------------------- +`identity` | 无 | N/A | N/A | N/A | 不加密写入的资源。当设置为第一个 provider 时,资源将在新值写入时被解密。 +`aescbc` | 填充 PKCS#7 的 AES-CBC | 最强 | 快 | 32字节 | 建议使用的加密项,但可能比 `secretbox` 稍微慢一些。 +`secretbox` | XSalsa20 和 Poly1305 | 强 | 更快 | 32字节 | 较新的标准,在需要高度评审的环境中可能不被接受。 +`aesgcm` | 带有随机数的 AES-GCM | 必须每 200k 写入一次 | 最快 | 16, 24, 或者 32字节 | 建议不要使用,除非实施了自动密钥循环方案。 +`kms` | 使用信封加密方案:数据使用带有 PKCS#7 填充的 AES-CBC 通过 data encryption keys(DEK)加密,DEK 根据 Key Management Service(KMS)中的配置通过 key encryption keys(KEK)加密 | 最强 | 快 | 32字节 | 建议使用第三方工具进行密钥管理。为每个加密生成新的 DEK,并由用户控制 KEK 轮换来简化密钥轮换。[配置 KMS 提供程序](/docs/tasks/administer-cluster/kms-provider/) + +每个 provider 都支持多个密钥 - 在解密时会按顺序使用密钥,如果是第一个 provider,则第一个密钥用于加密。 + + +## 加密您的数据 + +创建一个新的加密配置文件: + +```yaml +kind: EncryptionConfig +apiVersion: v1 +resources: + - resources: + - secrets + providers: + - aescbc: + keys: + - name: key1 + secret: + - identity: {} +``` + + +遵循如下步骤来创建一个新的 secret: + +1. 生成一个 32 字节的随机密钥并进行 base64 编码。如果您在 Linux 或 Mac OS X 上,请运行以下命令: + + ``` + head -c 32 /dev/urandom | base64 + ``` + + +2. 将这个值放入到 secret 字段中。 +3. 设置 `kube-apiserver` 的 `--experimental-encryption-provider-config` 参数,将其指定到配置文件所在位置。 +4. 重启您的 API server。 + +**重要:** 您的配置文件包含可以解密 etcd 内容的密钥,因此您必须正确限制主设备的权限,以便只有能运行 kube-apiserver 的用户才能读取它。 + + + +## 验证数据是否被加密 + +数据在写入 etcd 时会被加密。重新启动你的 `kube-apiserver` 后,任何新创建或更新的密码在存储时都应该被加密。 +如果想要检查,你可以使用 `etcdctl` 命令行程序来检索你的加密内容。 + +1. 创建一个新的 secret,名称为 `secret1`,命名空间为 `default`: + + ``` + kubectl create secret generic secret1 -n default --from-literal=mykey=mydata + ``` + + +2. 使用 etcdctl 命令行,从 etcd 中读取 secret: + + ``` +    ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C + ``` + + + 这里的 `[...]` 是用来连接 etcd 服务的额外参数。 +3. 验证存储的密钥前缀是否为 `k8s:enc:aescbc:v1:`,这表明 `aescbc` provider 已加密结果数据。 +4. 通过 API 检索,验证 secret 是否被正确解密: + + ``` + kubectl describe secret secret1 -n default + ``` + + + 必须匹配 `mykey: mydata` + + + +## 确保所有 secret 都被加密 + +由于 secret 是在写入时被加密,因此对 secret 执行更新也会加密该内容。 + +``` +kubectl get secrets --all-namespaces -o json | kubectl replace -f - +``` + + +上面的命令读取所有 secret,然后使用服务端加密来进行更新。 +如果由于冲突写入而发生错误,请重试该命令。 +对于较大的集群,您可能希望通过命名空间或更新脚本来分割 secret。 + + + +## 回滚解密密钥 + +在不发生停机的情况下更改 secret 需要多步操作,特别是在有多个 `kube-apiserver` 进程正在运行的高可用部署的情况下。 + +1. 生成一个新密钥并将其添加为所有服务器上当前提供程序的第二个密钥条目 +2. 重新启动所有 `kube-apiserver` 进程以确保每台服务器都可以使用新密钥进行解密 +3. 将新密钥设置为 `keys` 数组中的第一个条目,以便在配置中使用其进行加密 +4. 重新启动所有 `kube-apiserver` 进程以确保每个服务器现在都使用新密钥进行加密 +5. 运行 `kubectl get secrets --all-namespaces -o json | kubectl replace -f -` 以用新密钥加密所有现有的秘密 +6. 
在使用新密钥备份 etcd 后,从配置中删除旧的解密密钥并更新所有密钥 + +如果只有一个 `kube-apiserver`,第 2 步可能可以忽略。 + + + +## 解密所有数据 + +要禁用 rest 加密,请将 `identity` provider 作为配置中的第一个条目: + +```yaml +kind: EncryptionConfig +apiVersion: v1 +resources: + - resources: + - secrets + providers: + - identity: {} + - aescbc: + keys: + - name: key1 + secret: +``` + + +并重新启动所有 `kube-apiserver` 进程。然后运行命令 `kubectl get secrets --all-namespaces -o json | kubectl replace -f -` 强制解密所有 secret。 + +{{% /capture %}} + + diff --git a/content/cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/content/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md rename to content/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md diff --git a/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-9.md b/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-9.md new file mode 100644 index 0000000000000..56dae7aa976a8 --- /dev/null +++ b/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-9.md @@ -0,0 +1,453 @@ +--- +reviewers: +- pipejakob +- luxas +- roberthbailey +- jbeda +title: 将 kubeadm 集群在 v1.8 版本到 v1.9 版本之间升级/降级 +content_template: templates/task +--- + + + +{{% capture overview %}} + + +本文主要描述如何将 `kubeadm` 集群从 1.8.x 版本升级到 1.9.x 版本,包括从 1.8.x 版本升级到 1.8.y 版本,和从版本 1.9.x 版本到 1.9.y 版本(`y > x`)。 +如果您目前安装的是集群是 1.7 版本,也可以查看[ kubeadm clusters 集群从 1.7 版本升级到 1.8 版本](/docs/tasks/administer-cluster/kubeadm-upgrade-1-8/) +{{% /capture %}} + +{{% capture prerequisites %}} + +升级之前: + +- 您需要先安装一个版本为 1.8.0 或更高版本的 `kubeadm` Kubernetes 集群。另外还需要禁用节点的交换分区。 +- 一定要认真阅读[发布说明](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md) +- `kubeadm upgrade` 可以更新 etcd。默认情况下,从 Kubernetes 1.8 版本升级到 1.9 版本时,`kubeadm upgrade` 也会升级 etcd 到 3.1.10 版本。这是由于 etcd3.1.10 是官方验证的 etcd 版本对于 kubernetes1.9。kubeadm 为您提供了自动化的升级过程。 +- 请注意,`kubeadm upgrade`命令 不会触及任何工作负载,只有 kubernetes 内部组件。作为最佳实践,您应当备份,因为备份相当的重要。例如,任何应用程序级别的状态(如应用程序可能依赖的数据库,如 mysql 或 mongoDB)必须预先备份。 +{{< caution >}} + +**注意:** 由于容器的具体哈希值改变了,所有的容器在升级之后会重新启动。 + +{{< /caution >}} + + +同时,也要注意只有小范围的升级是支持的。例如,您只可以从 1.8 版本升级到 1.9 版本,但是不能从 1.7 版本升级到 1.9 版本。 + + +{{% /capture %}} + +{{% capture steps %}} + +## 升级控制面板 + +在您的 master 节点上执行这些命令: + +1. 使用 `curl` 命令进行安装最新的版本的 `kubeadm` ,例如: +```shell +export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version +export ARCH=amd64 # or: arm, arm64, ppc64le, s390x +curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm +chmod a+rx /usr/bin/kubeadm +``` + +{{< caution >}} + +**注意:** 在您的系统上升级控制面板之前升级 `kubeadm` 包会导致升级失败。 +尽管 `kubeadm` ships 在 kubernetes 仓库中,手动安装 `kubeadm` 是重要的。kubeadm 团队在努力解决这种手动安装的限制。 +{{< /caution >}} + +验证 kubeadm 下载工作是否正常,并是否有达到预期的版本: +```shell +kubeadm version +``` + +2. 在master节点上运行如下命令: +```shell +kubeadm upgrade plan +``` + +可以得到类型的结果: + + + ```shell + [preflight] Running pre-flight checks + [upgrade] Making sure the cluster is healthy: + [upgrade/health] Checking API Server health: Healthy + [upgrade/health] Checking Node health: All Nodes are healthy + [upgrade/health] Checking Static Pod manifests exists on disk: All manifests exist on disk + [upgrade/config] Making sure the configuration is correct: + [upgrade/config] Reading configuration from the cluster... 
+ [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' + [upgrade] Fetching available versions to upgrade to: + [upgrade/versions] Cluster version: v1.8.1 + [upgrade/versions] kubeadm version: v1.9.0 + [upgrade/versions] Latest stable version: v1.9.0 + [upgrade/versions] Latest version in the v1.8 series: v1.8.6 + + 在升级控制面板并使用 'kubeadm upgrade apply' 后,必须手动升级组件: + COMPONENT CURRENT AVAILABLE + Kubelet 1 x v1.8.1 v1.8.6 + + 升级到最新的 v1.8 系列的版本: + COMPONENT CURRENT AVAILABLE + API Server v1.8.1 v1.8.6 + Controller Manager v1.8.1 v1.8.6 + Scheduler v1.8.1 v1.8.6 + Kube Proxy v1.8.1 v1.8.6 + Kube DNS 1.14.4 1.14.5 + + 您可以通过以下的命令来进行升级: + kubeadm upgrade apply v1.8.6 + + _____________________________________________________________________ + + 在升级控制面板并使用 'kubeadm upgrade apply' 后,必须手动升级组件: + COMPONENT CURRENT AVAILABLE + Kubelet 1 x v1.8.1 v1.9.0 + + 升级到最新和稳定的版本: + COMPONENT CURRENT AVAILABLE + API Server v1.8.1 v1.9.0 + Controller Manager v1.8.1 v1.9.0 + Scheduler v1.8.1 v1.9.0 + Kube Proxy v1.8.1 v1.9.0 + Kube DNS 1.14.5 1.14.7 + + 您可以通过以下命令来进行升级: + kubeadm upgrade apply v1.9.0 + + 请注意:在您执行升级之前,您必须升级 kubeadm 到 v1.9.0 版本 +_____________________________________________________________________ +``` + + +`kubeadm upgrade plan` 命令检查您的集群是否处于可升级的状态并且获取可以以用户友好方式升级的版本。 + +检查 coreDNS 版本,包括 `--feature-gates=CoreDNS=true` 标志来验证存放 kube-dns 在某个位置的 coreDNS 版本。 + +3. 选择一个版本来进行升级和运行,例如: +```shell +kubeadm upgrade apply v1.9.0 +``` + +可以得到如下类似的输出: +```shell +[preflight] Running pre-flight checks. +[upgrade] Making sure the cluster is healthy: +[upgrade/config] Making sure the configuration is correct: +[upgrade/config] Reading configuration from the cluster... +[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' +[upgrade/version] You have chosen to upgrade to version "v1.9.0" +[upgrade/versions] Cluster version: v1.8.1 +[upgrade/versions] kubeadm version: v1.9.0 +[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y +[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler] +[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.9.0"... +[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804/etcd.yaml" +[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests502223003/etcd.yaml" +[upgrade/staticpods] Waiting for the kubelet to restart the component +[apiclient] Found 1 Pods for label selector component=etcd +[upgrade/staticpods] Component "etcd" upgraded successfully! 
+[upgrade/staticpods] Writing upgraded Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804" +[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804/kube-apiserver.yaml" +[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804/kube-controller-manager.yaml" +[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests802453804/kube-scheduler.yaml" +[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests502223003/kube-apiserver.yaml" +[upgrade/staticpods] Waiting for the kubelet to restart the component +[apiclient] Found 1 Pods for label selector component=kube-apiserver +[upgrade/staticpods] Component "kube-apiserver" upgraded successfully! +[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests502223003/kube-controller-manager.yaml" +[upgrade/staticpods] Waiting for the kubelet to restart the component +[apiclient] Found 1 Pods for label selector component=kube-controller-manager +[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! +[upgrade/staticpods] Moved upgraded manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests502223003/kube-scheduler.yaml" +[upgrade/staticpods] Waiting for the kubelet to restart the component +[apiclient] Found 1 Pods for label selector component=kube-scheduler +[upgrade/staticpods] Component "kube-scheduler" upgraded successfully! +[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace +[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials +[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token +[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster +[addons] Applied essential addon: kube-dns +[addons] Applied essential addon: kube-proxy + +[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.9.0". Enjoy! + +[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn. +``` + + +升级具有默认内部的 DNS 的 coreDNS 集群,调用具有 `--feature-gates=CoreDNS=true` 标记的 `kubeadm upgrade apply`。 +`kubeadm upgrade apply`按照如下进行: + +- 检查集群是否处于可升级状态: + -API服务器是否可达 + -所有的节点处于`Ready`状态 + -控制面板是健康的 +- 强制执行版本倾斜策略 +- 确保控制面板镜像可用或者可用于机器pull +- 升级控制面板组件或者回滚如果其中一个无法出现 +- 应用新的`kube-dns`和`kube-proxy`清单并强制创建所必须的RBAC规则 +- 创建API服务器新的证书和秘钥文件,并备份旧文件(如果它们即将在180天到期) + + +4. 手动升级定义网络(SDN)的软件 + + 容器网络接口(CNI)提供者具有升级说明指导。 + 检查这个[插件](/docs/concepts/cluster-administration/addons/)页面来找到 CNI 提供者和查看是否需要额外的升级步骤。 + + + +## 升级master和node包 + +在集群中涉及 `$HOST` 的每个主机,执行如下命令来升级 `kubelet` : + +1. 
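升级每个节点之前,可以先列出集群中的所有节点,确认每一轮要处理的 `$HOST`(一个简单的辅助命令示例):
+```shell
+kubectl get nodes -o wide
+```
+接下来,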
将主机标记为不可调度并驱逐其上的工作负载,为维护做准备:
+```shell
+kubectl drain $HOST --ignore-daemonsets
+```
+
+在 master 主机上运行这个命令时,出现如下错误是可以预料的,可以忽略(因为 master 上运行着静态 pod):
+```shell
+node "master" already cordoned
+error: pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): etcd-kubeadm, kube-apiserver-kubeadm, kube-controller-manager-kubeadm, kube-scheduler-kubeadm
+```
+
+2. 在 `$HOST` 节点上使用发行版的包管理器升级 Kubernetes 软件包:
+
+如果主机运行基于 Debian 的发行版(如 Ubuntu),运行:
+```shell
+apt-get update
+apt-get upgrade
+```
+
+如果主机运行 CentOS 或类似发行版,运行:
+```shell
+yum update
+```
+
+现在主机上运行的就是新版本的 `kubelet`。在 `$HOST` 上使用如下命令验证:
+```shell
+systemctl status kubelet
+```
+
+3. 将主机重新标记为可调度,使其重新上线:
+```shell
+kubectl uncordon $HOST
+```
+
+在所有主机上升级 `kubelet` 后,从任意位置(例如集群外)运行以下命令,验证所有节点是否再次可用:
+```shell
+kubectl get nodes
+```
+
+如果上面命令输出中所有主机的 `STATUS` 列都显示 `Ready`,升级就完成了。
+
+## 从异常状态中恢复
+
+如果 `kubeadm upgrade` 失败且无法回滚(例如在执行过程中出现了意外关机),可以再次运行 `kubeadm upgrade`。该命令是幂等的,最终会确保集群的实际状态与期待的状态一致。
+
+
+
+还可以带上 `--force` 参数用 `kubeadm upgrade` 对运行中的集群执行 `x.x.x --> x.x.x` 的同版本变更,以此从异常状态中恢复。
+
+{{% /capture %}}
+
+
diff --git a/content/cn/docs/tasks/administer-cluster/kubelet-config-file.md b/content/zh/docs/tasks/administer-cluster/kubelet-config-file.md
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/kubelet-config-file.md
rename to content/zh/docs/tasks/administer-cluster/kubelet-config-file.md
diff --git a/content/cn/docs/tasks/administer-cluster/memory-constraints-pod-2.yaml b/content/zh/docs/tasks/administer-cluster/memory-constraints-pod-2.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/memory-constraints-pod-2.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-constraints-pod-2.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/memory-constraints-pod-3.yaml b/content/zh/docs/tasks/administer-cluster/memory-constraints-pod-3.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/memory-constraints-pod-3.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-constraints-pod-3.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/memory-constraints-pod-4.yaml b/content/zh/docs/tasks/administer-cluster/memory-constraints-pod-4.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/memory-constraints-pod-4.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-constraints-pod-4.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/memory-constraints-pod.yaml b/content/zh/docs/tasks/administer-cluster/memory-constraints-pod.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/memory-constraints-pod.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-constraints-pod.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/memory-constraints.yaml b/content/zh/docs/tasks/administer-cluster/memory-constraints.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/memory-constraints.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-constraints.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml b/content/zh/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml
similarity index 81%
rename from content/cn/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml
index aa80610d84492..1013293eddeba 100644
--- 
a/content/cn/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml
+++ b/content/zh/docs/tasks/administer-cluster/memory-defaults-pod-2.yaml
@@ -4,7 +4,7 @@ metadata:
   name: default-mem-demo-2
 spec:
   containers:
-  - name: default-mem-demo-2-ctr
+  - name: default-mem-demo-2-ctr
     image: nginx
     resources:
       limits:
diff --git a/content/cn/docs/tasks/administer-cluster/memory-defaults-pod-3.yaml b/content/zh/docs/tasks/administer-cluster/memory-defaults-pod-3.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/memory-defaults-pod-3.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-defaults-pod-3.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/memory-defaults-pod.yaml b/content/zh/docs/tasks/administer-cluster/memory-defaults-pod.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/memory-defaults-pod.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-defaults-pod.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/memory-defaults.yaml b/content/zh/docs/tasks/administer-cluster/memory-defaults.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/memory-defaults.yaml
rename to content/zh/docs/tasks/administer-cluster/memory-defaults.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/my-scheduler.yaml b/content/zh/docs/tasks/administer-cluster/my-scheduler.yaml
similarity index 100%
rename from content/cn/docs/tasks/administer-cluster/my-scheduler.yaml
rename to content/zh/docs/tasks/administer-cluster/my-scheduler.yaml
diff --git a/content/zh/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md b/content/zh/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md
new file mode 100644
index 0000000000000..78d0d8e675381
--- /dev/null
+++ b/content/zh/docs/tasks/administer-cluster/network-policy-provider/calico-network-policy.md
@@ -0,0 +1,86 @@
+---
+reviewers:
+- caseydavenport
+title: 使用 Calico 作为 NetworkPolicy
+content_template: templates/task
+weight: 10
+---
+
+{{% capture overview %}}
+
+本页展示了两种在 Kubernetes 上快速创建 Calico 集群的方法。
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+
+首先确定您想部署的是一个[云上](#在-Google-Kubernetes-Engine-GKE-上创建一个-Calico-集群)集群,还是一个[本地](#使用-kubeadm-创建一个本地-Calico-集群)集群。
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## 在 Google Kubernetes Engine (GKE) 上创建一个 Calico 集群
+
+**先决条件**: [gcloud](https://cloud.google.com/sdk/docs/quickstarts)
+
+1. 启动一个带有 Calico 的 GKE 集群,只需加上参数 `--enable-network-policy`。
+
+   **语法**
+   ```shell
+   gcloud container clusters create [CLUSTER_NAME] --enable-network-policy
+   ```
+
+   **示例**
+   ```shell
+   gcloud container clusters create my-calico-cluster --enable-network-policy
+   ```
+
+1. 
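集群创建完成后,先获取新集群的凭据,让 kubectl 指向该集群(GKE 环境下的一个示例命令):
+
+   ```shell
+   gcloud container clusters get-credentials my-calico-cluster
+   ```
+
+   然后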
使用如下命令验证部署是否正确。 + + ```shell + kubectl get pods --namespace=kube-system + ``` + + Calico 的 pods 名以 `calico` 打头,检查确认每个 pods 状态为 `Running`。 + + +## 使用 kubeadm 创建一个本地 Calico 集群 + +在15分钟内使用 kubeadm 得到一个本地单主机 Calico 集群,请参考 +[Calico 快速入门](https://docs.projectcalico.org/latest/getting-started/kubernetes/)。 + +{{% /capture %}} + + +{{% capture whatsnext %}} + +集群运行后,您可以按照 [声明 Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) 去尝试使用 Kubernetes NetworkPolicy。 +{{% /capture %}} diff --git a/content/zh/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/zh/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md new file mode 100644 index 0000000000000..61be3718b35ed --- /dev/null +++ b/content/zh/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -0,0 +1,133 @@ +--- +reviewers: +- danwent +title: 使用 Cilium 作为 NetworkPolicy +content_template: templates/task +weight: 20 +--- + +{{% capture overview %}} + + +本页展示了如何使用 Cilium 作为 NetworkPolicy。 + +关于 Cilium 的背景知识,请阅读 [Cilium 介绍](https://cilium.readthedocs.io/en/latest/intro)。 + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + + +## 在 Minikube 上部署 Cilium 用于基本测试 + +为了轻松熟悉 Cilium 您可以根据[Cilium Kubernetes 入门指南](https://docs.cilium.io/en/latest/gettingstarted/minikube/)在 minikube 中执行一个 cilium 的基本的 DaemonSet 安装。 + +在 minikube 中的安装配置使用一个简单的“一体化” YAML 文件,包括了 Cilium 的 DaemonSet 配置,连接 minikube 的 etcd 实例,以及适当的 RBAC 设置。 + +```shell +$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/cilium.yaml +configmap "cilium-config" created +secret "cilium-etcd-secrets" created +serviceaccount "cilium" created +clusterrolebinding "cilium" created +daemonset "cilium" created +clusterrole "cilium" created +``` + +入门指南其余的部分用一个示例应用说明了如何强制执行L3/L4(即 IP 地址+端口)的安全策略以及L7 (如 HTTP)的安全策略。 + + + +## 部署 Cilium 用于生产用途 +关于部署 Cilium 用于生产的详细说明,请见[Cilium Kubernetes 安装指南](https://cilium.readthedocs.io/en/latest/kubernetes/install/) +,此文档包括详细的需求、说明和生产用途 DaemonSet 文件示例。 + +{{% /capture %}} + +{{% capture discussion %}} + +## 了解 Cilium 组件 + +部署使用 Cilium 的集群会添加 Pods 到`kube-system`命名空间。 要查看此Pod列表,运行: + +```shell +kubectl get pods --namespace=kube-system +``` + + +您将看到像这样的 Pods 列表: + +```console +NAME DESIRED CURRENT READY NODE-SELECTOR AGE +cilium 1 1 1 2m +... 
+```
+
+有两个主要组件需要注意:
+
+- 在集群中的每个节点上都会运行一个 `cilium` Pod,它利用 Linux BPF 对该节点上进出 Pod 的流量执行网络策略。
+- 对于生产部署,Cilium 应该复用 Kubernetes 所使用的键值存储集群(如 etcd),它通常运行在 Kubernetes 的 master 节点上。
+[Cilium Kubernetes 安装指南](https://cilium.readthedocs.io/en/latest/kubernetes/install/)
+包括了一个示例 DaemonSet,可以自定义指定此键值存储集群。
+简单的 minikube“一体化”DaemonSet 不需要这样的配置,因为它会自动连接到 minikube 的 etcd 实例。
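+
+如有需要,还可以进入任一 `cilium` Pod 查看代理自身的运行状态(示例;`cilium status` 是 cilium 容器内自带的命令,`<cilium-pod-name>` 请替换为集群中的实际 Pod 名称):
+
+```shell
+kubectl -n kube-system exec <cilium-pod-name> -- cilium status
+```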
-0,0 +1,78 @@
+---
+reviewers:
+- bboreham
+title: 使用 Weave Net 作为 NetworkPolicy
+content_template: templates/task
+weight: 50
+---
+
+{{% capture overview %}}
+
+
+
+本页展示了如何使用 Weave Net 作为 NetworkPolicy。
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+您需要拥有一个 Kubernetes 集群。如果还没有,可以按照 [kubeadm 入门指南](/docs/getting-started-guides/kubeadm/)来引导一个。
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## 安装 Weave Net 插件
+
+请按照[通过插件集成 Kubernetes](https://www.weave.works/docs/net/latest/kube-addon/) 指南完成安装。
+
+Kubernetes 的 Weave Net 插件自带[网络策略控制器](https://www.weave.works/docs/net/latest/kube-addon/#npc),它会自动监控 Kubernetes 所有名称空间中的 NetworkPolicy 注释,并配置 `iptables` 规则以允许或阻止策略所指示的流量。
+
+
+## 测试安装
+
+验证 Weave 是否正常工作。
+
+输入以下命令:
+
+```shell
+kubectl get po -n kube-system -o wide
+```
+
+
+输出类似这样:
+
+```
+NAME READY STATUS RESTARTS AGE IP NODE
+weave-net-1t1qg 2/2 Running 0 9d 192.168.2.10 worknode3
+weave-net-231d7 2/2 Running 1 7d 10.2.0.17 worknodegpu
+weave-net-7nmwt 2/2 Running 3 9d 192.168.2.131 masternode
+weave-net-pmw8w 2/2 Running 0 9d 192.168.2.216 worknode2
+```
+
+
+每个 Node 上都有一个 weave Pod,并且所有 Pod 都处于 `Running` 和 `2/2 READY` 状态。(`2/2` 表示每个 Pod 都包含 `weave` 和 `weave-npc` 两个容器。)
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+安装 Weave Net 插件后,您可以按照[声明 Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) 来试用 Kubernetes NetworkPolicy。如果您有任何疑问,请通过 [#weave-community Slack 频道或 Weave User Group](https://github.com/weaveworks/weave#getting-help) 联系我们。
+{{% /capture %}}
+
diff --git a/content/cn/docs/tasks/administer-cluster/pod1.yaml b/content/zh/docs/tasks/administer-cluster/pod1.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/pod1.yaml rename to content/zh/docs/tasks/administer-cluster/pod1.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/pod2.yaml b/content/zh/docs/tasks/administer-cluster/pod2.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/pod2.yaml rename to content/zh/docs/tasks/administer-cluster/pod2.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/pod3.yaml b/content/zh/docs/tasks/administer-cluster/pod3.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/pod3.yaml rename to content/zh/docs/tasks/administer-cluster/pod3.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/quota-mem-cpu-pod-2.yaml b/content/zh/docs/tasks/administer-cluster/quota-mem-cpu-pod-2.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-mem-cpu-pod-2.yaml rename to content/zh/docs/tasks/administer-cluster/quota-mem-cpu-pod-2.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/quota-mem-cpu-pod.yaml b/content/zh/docs/tasks/administer-cluster/quota-mem-cpu-pod.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-mem-cpu-pod.yaml rename to content/zh/docs/tasks/administer-cluster/quota-mem-cpu-pod.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/quota-mem-cpu.yaml b/content/zh/docs/tasks/administer-cluster/quota-mem-cpu.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-mem-cpu.yaml rename to content/zh/docs/tasks/administer-cluster/quota-mem-cpu.yaml
diff --git a/content/cn/docs/tasks/administer-cluster/quota-objects-pvc-2.yaml b/content/zh/docs/tasks/administer-cluster/quota-objects-pvc-2.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-objects-pvc-2.yaml rename to content/zh/docs/tasks/administer-cluster/quota-objects-pvc-2.yaml
diff --git
a/content/cn/docs/tasks/administer-cluster/quota-objects-pvc.yaml b/content/zh/docs/tasks/administer-cluster/quota-objects-pvc.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-objects-pvc.yaml rename to content/zh/docs/tasks/administer-cluster/quota-objects-pvc.yaml diff --git a/content/cn/docs/tasks/administer-cluster/quota-objects.yaml b/content/zh/docs/tasks/administer-cluster/quota-objects.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-objects.yaml rename to content/zh/docs/tasks/administer-cluster/quota-objects.yaml diff --git a/content/cn/docs/tasks/administer-cluster/quota-pod-deployment.yaml b/content/zh/docs/tasks/administer-cluster/quota-pod-deployment.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-pod-deployment.yaml rename to content/zh/docs/tasks/administer-cluster/quota-pod-deployment.yaml diff --git a/content/cn/docs/tasks/administer-cluster/quota-pod-namespace.md b/content/zh/docs/tasks/administer-cluster/quota-pod-namespace.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-pod-namespace.md rename to content/zh/docs/tasks/administer-cluster/quota-pod-namespace.md diff --git a/content/cn/docs/tasks/administer-cluster/quota-pod.yaml b/content/zh/docs/tasks/administer-cluster/quota-pod.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-pod.yaml rename to content/zh/docs/tasks/administer-cluster/quota-pod.yaml diff --git a/content/cn/docs/tasks/administer-cluster/quota-pvc-2.yaml b/content/zh/docs/tasks/administer-cluster/quota-pvc-2.yaml similarity index 100% rename from content/cn/docs/tasks/administer-cluster/quota-pvc-2.yaml rename to content/zh/docs/tasks/administer-cluster/quota-pvc-2.yaml diff --git a/content/cn/docs/tasks/administer-cluster/romana-network-policy.md b/content/zh/docs/tasks/administer-cluster/romana-network-policy.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/romana-network-policy.md rename to content/zh/docs/tasks/administer-cluster/romana-network-policy.md diff --git a/content/cn/docs/tasks/administer-cluster/static-pod.md b/content/zh/docs/tasks/administer-cluster/static-pod.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/static-pod.md rename to content/zh/docs/tasks/administer-cluster/static-pod.md diff --git a/content/zh/docs/tasks/administer-cluster/sysctl-cluster.md b/content/zh/docs/tasks/administer-cluster/sysctl-cluster.md new file mode 100644 index 0000000000000..9ca39883f3a5d --- /dev/null +++ b/content/zh/docs/tasks/administer-cluster/sysctl-cluster.md @@ -0,0 +1,329 @@ +--- +title: 在 Kubernetes 集群中使用 sysctl +reviewers: +- sttts +content_template: templates/task +--- + + + +{{% capture overview %}} +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + + +本文档介绍如何通过 sysctl 接口在 Kubernetes 集群中配置和使用内核参数。 + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + + +## 获取 Sysctl 的参数列表 + + +在 Linux 中,管理员可以通过 sysctl 接口修改内核运行时的参数。在 `/proc/sys/` 虚拟文件系统下存放许多内核参数。这些参数涉及了多个内核子系统,如: + + +- 内核子系统 (通常前缀为: `kernel.`) +- 网络子系统 (通常前缀为: `net.`) +- 虚拟内存子系统 (通常前缀为: `vm.`) +- MDADM 子系统 (通常前缀为: `dev.`) +- 更多子系统请参见 [内核文档](https://www.kernel.org/doc/Documentation/sysctl/README)。 + + +若要获取完整的参数列表,请执行以下命令 + +```shell +$ sudo sysctl -a +``` + + +## 启用非安全的 Sysctl 参数 + + +sysctl 参数分为 _安全_ 和 _非安全的_。_安全_ sysctl 
参数除了需要设置恰当的命名空间外,在同一 node 上的不同 Pod 之间也必须是 _相互隔离的_。这意味着在 Pod 上设置 _安全_ sysctl 参数 + + +- 必须不能影响到节点上的其他 Pod +- 必须不能损害节点的健康 +- 必须不允许使用超出 Pod 的资源限制的 CPU 或内存资源。 + + +至今为止,大多数 _有命名空间的_ sysctl 参数不一定被认为是 _安全_ 的。以下几种 sysctl 参数是 _安全的_: + +- `kernel.shm_rmid_forced`, +- `net.ipv4.ip_local_port_range`, +- `net.ipv4.tcp_syncookies`. + +{{< note >}} + +**注意**: 示例中的 `net.ipv4.tcp_syncookies` 在Linux 内核 4.4 或更低的版本中是无命名空间的。 +{{< /note >}} + + +在未来的 Kubernetes 版本中,若kubelet 支持更好的隔离机制,则上述列表中将会列出更多 _安全的_ sysctl 参数。 + + +所有 _安全的_ sysctl 参数都默认启用。 + + +所有 _非安全的_ sysctl 参数都默认禁用,且必须由集群管理员在每个节点上手动开启。那些设置了不安全 sysctl 参数的 Pod 仍会被调度,但无法正常启动。 + + +参考上述警告,集群管理员只有在一些非常特殊的情况下(如:高可用或实时应用调整),才可以启用特定的 _非安全的_ sysctl 参数。如需启用 _非安全的_ sysctl 参数,请您在每个节点上分别设置 kubelet 命令行参数,例如: + +```shell +$ kubelet --allowed-unsafe-sysctls \ + 'kernel.msg*,net.ipv4.route.min_pmtu' ... +``` + +如果您使用 minikube,可以通过 `extra-config` 参数来配置: + +```shell +$ minikube start --extra-config="kubelet.AllowedUnsafeSysctls=kernel.msg*,net.ipv4.route.min_pmtu"... +``` + +只有 _有命名空间的_ sysctl 参数可以通过该方式启用。 + + +## 设置 Pod 的 Sysctl 参数 + + +目前,在 Linux 内核中,有许多的 sysctl 参数都是 _有命名空间的_ 。 这就意味着可以为节点上的每个 Pod 分别去设置它们的 sysctl 参数。 在 Kubernetes 中,只有那些有命名空间的 sysctl 参数可以通过 Pod 的 securityContext 对其进行配置。 + + +以下列出有命名空间的 sysctl 参数,在未来的 Linux 内核版本中,此列表可能会发生变化。 + +- `kernel.shm*`, +- `kernel.msg*`, +- `kernel.sem`, +- `fs.mqueue.*`, +- `net.*`. + + +没有命名空间的 sysctl 参数称为 _节点级别的_ sysctl 参数。 如果需要对其进行设置,则必须在每个节点的操作系统上手动地去配置它们,或者通过在 DaemonSet 中运行特权模式容器来配置。 + + +可使用 pod 的 securityContext 来配置有命名空间的 sysctl 参数,securityContext 应用于同一个 pod 中的所有容器。 + + +此示例中,使用 Pod SecurityContext 来对一个安全的 sysctl 参数 `kernel.shm_rmid_forced` 以及两个非安全的 sysctl 参数 `net.ipv4.route.min_pmtu`和 `kernel.msgmax` 进行设置。在 Pod 规格中对 _安全的_ 和 _非安全的_ sysctl 参数不做区分。 + +{{< warning >}} + +为了避免破坏操作系统的稳定性,请您在了解变更后果之后再修改 sysctl 参数。 +{{< /warning >}} + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: sysctl-example +spec: + securityContext: + sysctls: + - name: kernel.shm_rmid_forced + value: "0" + - name: net.ipv4.route.min_pmtu + value: "552" + - name: kernel.msgmax + value: "65536" + ... 
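+  # 补充注释(编者添加):此示例同时设置了安全与非安全的 sysctl 参数;
+  # 其中非安全参数(net.ipv4.route.min_pmtu、kernel.msgmax)必须先通过
+  # kubelet 的 --allowed-unsafe-sysctls 标志在节点上启用,否则该 Pod 将无法启动。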
+``` +{{% /capture %}} + +{{% capture discussion %}} + +{{< warning >}} + +**警告**:由于 _非安全的_ sysctl 参数其本身具有不稳定性,在使用 _非安全的_ sysctl 参数时可能会导致一些严重问题,如容器的错误行为、机器资源不足或节点被完全破坏,用户需自行承担风险。 +{{< /warning >}} + + +最佳实践方案是将集群中具有特殊 sysctl 设置的节点视为 _受感染的_,并且只调度需要使用到特殊 sysctl 设置的 Pod 到这些节点上。 建议使用 Kubernetes 的 [ _taints 和 toleration_ 特性](/docs/reference/generated/kubectl/kubectl-commands/#taint) 来实现它。 + + +设置了 _非安全的_ sysctl 参数的 pod,在禁用了以下两种 _非安全的_ sysctl 参数配置的节点上启动都会失败。与 _节点级别的_ sysctl 一样,建议开启 +[_taints 和 toleration_ 特性](/docs/reference/generated/kubectl/kubectl-commands/#taint) 或 +[taints on nodes](/docs/concepts/configuration/taint-and-toleration/) +以便将 Pod 调度到正确的节点之上。 + +## PodSecurityPolicy + + +您可以通过在 PodSecurityPolicy 的 `forbiddenSysctls` 和/或 `allowedUnsafeSysctls` 字段中,指定 sysctl 或填写 sysctl 匹配模式来进一步为 Pod 设置 sysctl 参数。sysctl 参数匹配模式以 `*` 字符结尾,如 `kernel.*`。 单独的 `*` 字符匹配所有 sysctl 参数。 + + +所有 _安全的_ sysctl 参数都默认启用。 + + +`forbiddenSysctls` 和 `allowedUnsafeSysctls` 的值都是字符串列表类型,可以添加 sysctl 参数名称,也可以添加 sysctl 参数匹配模式(以`*`结尾)。 只填写 `*` 则匹配所有的 sysctl 参数。 + + +`forbiddenSysctls` 字段用于禁用特定的 sysctl 参数。 您可以在列表中禁用安全和非安全的 sysctl 参数的组合。 要禁用所有的 sysctl 参数,请设置为 `*`。 + + +如果要在 `allowedUnsafeSysctls` 字段中指定一个非安全的 sysctl 参数,并且它在`forbiddenSysctls` 字段中未被禁用,则可以在 Pod 中通过 PodSecurityPolicy 启用该 sysctl 参数。 若要在 PodSecurityPolicy 中开启所有非安全的 sysctl 参数,请设 `allowedUnsafeSysctls` 字段值为 `*`。 + + +`allowedUnsafeSysctls` 与 `forbiddenSysctls` 两字段的配置不能重叠,否则这就意味着存在某个 sysctl 参数既被启用又被禁用。 + +{{< warning >}} + +**警告**:如果您通过 PodSecurityPolicy 中的 `allowedUnsafeSysctls` 字段将非安全的 sysctl 参数列入白名单,但该 sysctl 参数未通过 kubelet 命令行参数 `--allowed-unsafe-sysctls` 在节点上将其列入白名单,则设置了这个 sysctl 参数的 Pod 将会启动失败。 +{{< /warning >}} + + +以下示例设置启用了以 `kernel.msg` 为前缀的非安全的 sysctl 参数,以及禁用了 sysctl 参数 `kernel.shm_rmid_forced`。 + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: sysctl-psp +spec: + allowedUnsafeSysctls: + - kernel.msg* + forbiddenSysctls: + - kernel.shm_rmid_forced + ... +``` + +{{% /capture %}} diff --git a/content/cn/docs/tasks/administer-cluster/weave-network-policy.md b/content/zh/docs/tasks/administer-cluster/weave-network-policy.md similarity index 100% rename from content/cn/docs/tasks/administer-cluster/weave-network-policy.md rename to content/zh/docs/tasks/administer-cluster/weave-network-policy.md diff --git a/content/zh/docs/tasks/configure-pod-container/assign-pods-nodes.md b/content/zh/docs/tasks/configure-pod-container/assign-pods-nodes.md new file mode 100644 index 0000000000000..ec3952530487e --- /dev/null +++ b/content/zh/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -0,0 +1,135 @@ +--- +title: 将 Pod 分配给节点 +content_template: templates/task +weight: 120 +--- + + +{{% capture overview %}} + +此页面显示如何将 Kubernetes Pod 分配给 Kubernetes 集群中的特定节点。 +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + + +## 给节点添加标签 + + +1. 列出集群中的节点 + + kubectl get nodes + + + 输出类似如下: + + NAME STATUS AGE VERSION + worker0 Ready 1d v1.6.0+fff5156 + worker1 Ready 1d v1.6.0+fff5156 + worker2 Ready 1d v1.6.0+fff5156 + + +1. 选择其中一个节点,为它添加标签: + + kubectl label nodes disktype=ssd + + + `` 是你选择的节点的名称。 + + +1. 
验证你选择的节点是否有 `disktype=ssd` 标签: + + kubectl get nodes --show-labels + + + + 输出类似如下: + + NAME STATUS AGE VERSION LABELS + worker0 Ready 1d v1.6.0+fff5156 ...,disktype=ssd,kubernetes.io/hostname=worker0 + worker1 Ready 1d v1.6.0+fff5156 ...,kubernetes.io/hostname=worker1 + worker2 Ready 1d v1.6.0+fff5156 ...,kubernetes.io/hostname=worker2 + + + 在前面的输出中,你可以看到 `worker0` 节点有 `disktype=ssd` 标签。 + + +## 创建一个调度到你选择的节点的 pod + + +该 pod 配置文件描述了一个拥有节点选择器 `disktype: ssd` 的 pod。这表明该 pod 将被调度到 +有 `disktype=ssd` 标签的节点。 + +{{< codenew file="pods/pod-nginx.yaml" >}} + + +1. 使用该配置文件去创建一个 pod,该 pod 将被调度到你选择的节点上: + + kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml + + +1. 验证 pod 是不是运行在你选择的节点上: + + kubectl get pods --output=wide + + + 输出类似如下: + + NAME READY STATUS RESTARTS AGE IP NODE + nginx 1/1 Running 0 13s 10.200.0.4 worker0 + +{{% /capture %}} + +{{% capture whatsnext %}} + +了解更多关于 +[标签和选择器](/docs/concepts/overview/working-with-objects/labels/)。 +{{% /capture %}} diff --git a/content/cn/docs/tasks/configure-pod-container/cpu-request-limit-2.yaml b/content/zh/docs/tasks/configure-pod-container/cpu-request-limit-2.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/cpu-request-limit-2.yaml rename to content/zh/docs/tasks/configure-pod-container/cpu-request-limit-2.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/cpu-request-limit.yaml b/content/zh/docs/tasks/configure-pod-container/cpu-request-limit.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/cpu-request-limit.yaml rename to content/zh/docs/tasks/configure-pod-container/cpu-request-limit.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/exec-liveness.yaml b/content/zh/docs/tasks/configure-pod-container/exec-liveness.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/exec-liveness.yaml rename to content/zh/docs/tasks/configure-pod-container/exec-liveness.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/http-liveness.yaml b/content/zh/docs/tasks/configure-pod-container/http-liveness.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/http-liveness.yaml rename to content/zh/docs/tasks/configure-pod-container/http-liveness.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/init-containers.yaml b/content/zh/docs/tasks/configure-pod-container/init-containers.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/init-containers.yaml rename to content/zh/docs/tasks/configure-pod-container/init-containers.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/lifecycle-events.yaml b/content/zh/docs/tasks/configure-pod-container/lifecycle-events.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/lifecycle-events.yaml rename to content/zh/docs/tasks/configure-pod-container/lifecycle-events.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/mem-limit-range.yaml b/content/zh/docs/tasks/configure-pod-container/mem-limit-range.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/mem-limit-range.yaml rename to content/zh/docs/tasks/configure-pod-container/mem-limit-range.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/memory-request-limit-2.yaml b/content/zh/docs/tasks/configure-pod-container/memory-request-limit-2.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/memory-request-limit-2.yaml rename 
to content/zh/docs/tasks/configure-pod-container/memory-request-limit-2.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/memory-request-limit-3.yaml b/content/zh/docs/tasks/configure-pod-container/memory-request-limit-3.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/memory-request-limit-3.yaml rename to content/zh/docs/tasks/configure-pod-container/memory-request-limit-3.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/memory-request-limit.yaml b/content/zh/docs/tasks/configure-pod-container/memory-request-limit.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/memory-request-limit.yaml rename to content/zh/docs/tasks/configure-pod-container/memory-request-limit.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/oir-pod-2.yaml b/content/zh/docs/tasks/configure-pod-container/oir-pod-2.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/oir-pod-2.yaml rename to content/zh/docs/tasks/configure-pod-container/oir-pod-2.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/oir-pod.yaml b/content/zh/docs/tasks/configure-pod-container/oir-pod.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/oir-pod.yaml rename to content/zh/docs/tasks/configure-pod-container/oir-pod.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/opaque-integer-resource.md b/content/zh/docs/tasks/configure-pod-container/opaque-integer-resource.md similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/opaque-integer-resource.md rename to content/zh/docs/tasks/configure-pod-container/opaque-integer-resource.md diff --git a/content/cn/docs/tasks/configure-pod-container/pod-redis.yaml b/content/zh/docs/tasks/configure-pod-container/pod-redis.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/pod-redis.yaml rename to content/zh/docs/tasks/configure-pod-container/pod-redis.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/pod.yaml b/content/zh/docs/tasks/configure-pod-container/pod.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/pod.yaml rename to content/zh/docs/tasks/configure-pod-container/pod.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/private-reg-pod.yaml b/content/zh/docs/tasks/configure-pod-container/private-reg-pod.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/private-reg-pod.yaml rename to content/zh/docs/tasks/configure-pod-container/private-reg-pod.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/projected-volume.yaml b/content/zh/docs/tasks/configure-pod-container/projected-volume.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/projected-volume.yaml rename to content/zh/docs/tasks/configure-pod-container/projected-volume.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/qos-pod-2.yaml b/content/zh/docs/tasks/configure-pod-container/qos-pod-2.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/qos-pod-2.yaml rename to content/zh/docs/tasks/configure-pod-container/qos-pod-2.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/qos-pod-3.yaml b/content/zh/docs/tasks/configure-pod-container/qos-pod-3.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/qos-pod-3.yaml rename to 
content/zh/docs/tasks/configure-pod-container/qos-pod-3.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/qos-pod-4.yaml b/content/zh/docs/tasks/configure-pod-container/qos-pod-4.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/qos-pod-4.yaml rename to content/zh/docs/tasks/configure-pod-container/qos-pod-4.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/qos-pod.yaml b/content/zh/docs/tasks/configure-pod-container/qos-pod.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/qos-pod.yaml rename to content/zh/docs/tasks/configure-pod-container/qos-pod.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/rq-compute-resources.yaml b/content/zh/docs/tasks/configure-pod-container/rq-compute-resources.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/rq-compute-resources.yaml rename to content/zh/docs/tasks/configure-pod-container/rq-compute-resources.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/security-context-2.yaml b/content/zh/docs/tasks/configure-pod-container/security-context-2.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/security-context-2.yaml rename to content/zh/docs/tasks/configure-pod-container/security-context-2.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/security-context-3.yaml b/content/zh/docs/tasks/configure-pod-container/security-context-3.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/security-context-3.yaml rename to content/zh/docs/tasks/configure-pod-container/security-context-3.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/security-context-4.yaml b/content/zh/docs/tasks/configure-pod-container/security-context-4.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/security-context-4.yaml rename to content/zh/docs/tasks/configure-pod-container/security-context-4.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/security-context.yaml b/content/zh/docs/tasks/configure-pod-container/security-context.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/security-context.yaml rename to content/zh/docs/tasks/configure-pod-container/security-context.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/task-pv-claim.yaml b/content/zh/docs/tasks/configure-pod-container/task-pv-claim.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/task-pv-claim.yaml rename to content/zh/docs/tasks/configure-pod-container/task-pv-claim.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/task-pv-pod.yaml b/content/zh/docs/tasks/configure-pod-container/task-pv-pod.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/task-pv-pod.yaml rename to content/zh/docs/tasks/configure-pod-container/task-pv-pod.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/task-pv-volume.yaml b/content/zh/docs/tasks/configure-pod-container/task-pv-volume.yaml similarity index 100% rename from content/cn/docs/tasks/configure-pod-container/task-pv-volume.yaml rename to content/zh/docs/tasks/configure-pod-container/task-pv-volume.yaml diff --git a/content/cn/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml b/content/zh/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml similarity index 100% rename from 
content/cn/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml rename to content/zh/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml diff --git a/content/zh/docs/tasks/debug-application-cluster/audit.md b/content/zh/docs/tasks/debug-application-cluster/audit.md new file mode 100644 index 0000000000000..fe551d011db76 --- /dev/null +++ b/content/zh/docs/tasks/debug-application-cluster/audit.md @@ -0,0 +1,612 @@ +--- +reviewers: +- soltysh +- sttts +- ericchiang +content_template: templates/concept +title: Auditing +--- + +{{% capture overview %}} + +{{< feature-state state="beta" >}} + + +Kubernetes 审计功能提供了与安全相关的按时间顺序排列的记录集,记录单个用户、管理员或系统其他组件影响系统的活动顺序。 +它能帮助集群管理员处理以下问题: + + + - 发生了什么? + - 什么时候发生的? + - 谁触发的? + - 活动发生在哪个(些)对象上? + - 在哪观察到的? + - 它从哪触发的? + - 活动的后续处理行为是什么? + +{{% /capture %}} + +{{< toc >}} + +{{% capture body %}} + + +[Kube-apiserver][kube-apiserver] 执行审计。每个执行阶段的每个请求都会生成一个事件,然后根据特定策略对事件进行预处理并写入后端。 +您可以在 [设计方案][auditing-proposal] 中找到更多详细信息。 +该策略确定记录的内容并且在后端存储记录。当前的后端支持日志文件和 webhook。 + + + +每个请求都可以用相关的 "stage" 记录。已知的 stage 有: + +- `RequestReceived` - 事件的 stage 将在审计处理器接收到请求后,并且在委托给其余处理器之前生成。 + +- `ResponseStarted` - 在响应消息的头部发送后,但是响应消息体发送前。这个 stage 仅为长时间运行的请求生成(例如 watch)。 + +- `ResponseComplete` - 当响应消息体完成并且没有更多数据需要传输的时候。 +- `Panic` - 当 panic 发生时生成。 + +{{< note >}} + +**注意** 审计日志记录功能会增加 API server 的内存消耗,因为需要为每个请求存储审计所需的某些上下文。 +此外,内存消耗取决于审计日志记录的配置。 +{{< /note >}} + + +## 审计策略 + +审计政策定义了关于应记录哪些事件以及应包含哪些数据的规则。审计策略对象结构在 [`audit.k8s.io` API 组][auditing-api] 中定义。 +处理事件时,将按顺序与规则列表进行比较。第一个匹配规则设置事件的 [审计级别][auditing-level]。已知的审计级别有: + + +- `None` - 符合这条规则的日志将不会记录。 +- `Metadata` - 记录请求的 metadata(请求的用户、timestamp、resource、verb 等等),但是不记录请求或者响应的消息体。 +- `Request` - 记录事件的 metadata 和请求的消息体,但是不记录响应的消息体。这不适用于非资源类型的请求。 +- `RequestResponse` - 记录事件的 metadata,请求和响应的消息体。这不适用于非资源类型的请求。 + + +您可以使用 `--audit-policy-file` 标志将包含策略的文件传递给 [kube-apiserver][kube-apiserver]。如果不设置该标志,则不记录事件。 +注意 `rules` 字段 __必须__ 在审计策略文件中提供。没有(0)规则的策略将被视为非法配置。 + +以下是一个审计策略文件的示例: + +{{< codenew file="audit/audit-policy.yaml" >}} + + +您可以使用最低限度的审计策略文件在 `Metadata` 级别记录所有请求: + +```yaml +# Log all requests at the Metadata level. 
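+# 补充注释(编者添加):Metadata 级别只记录请求的元数据
+# (请求的用户、timestamp、resource、verb 等),不记录请求或响应的消息体。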
+apiVersion: audit.k8s.io/v1beta1 +kind: Policy +rules: +- level: Metadata +``` + + +管理员构建自己的审计配置文件时,应使用 [GCE 使用的审计配置文件][gce-audit-profile] 作为参考。 + + +## 审计后端 + +审计后端实现将审计事件导出到外部存储。 +[Kube-apiserver][kube-apiserver] 提供两个后端: + +- Log 后端,将事件写入到磁盘 +- Webhook 后端,将事件发送到外部 API + +在这两种情况下,审计事件结构均由 `audit.k8s.io` API 组中的 API 定义。当前版本的 API 是 [`v1beta1`][auditing-api]。 + +{{< note >}} + +**注意:** 在 patch 请求的情况下,请求的消息体需要是一个 JSON 串指定 patch 操作,而不是一个完整的 Kubernetes API 对象 JSON 串。 +例如,以下的示例是一个合法的 patch 请求消息体,该请求对应 `/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name`。 + +```json +[ + { + "op": "replace", + "path": "/spec/parallelism", + "value": 0 + }, + { + "op": "remove", + "path": "/spec/template/spec/containers/0/terminationMessagePolicy" + } +] +``` +{{< /note >}} + + +### Log 后端 + +Log 后端将审计事件写入 JSON 格式的文件。您可以使用以下 [kube-apiserver][kube-apiserver] 标志配置 Log 审计后端: + + +- `--audit-log-path` 指定用来写入审计事件的日志文件路径。不指定此标志会禁用日志后端。`-` 意味着标准化 +- `--audit-log-maxage` 定义了保留旧审计日志文件的最大天数 +- `--audit-log-maxbackup` 定义了要保留的审计日志文件的最大数量 +- `--audit-log-maxsize` 定义审计日志文件的最大大小(兆字节) + + +### Webhook 后端 + +Webhook 后端将审计事件发送到远程 API,该远程 API 应该暴露与 [kube-apiserver][kube-apiserver] 相同的API。 +您可以使用如下 kube-apiserver 标志来配置 webhook 审计后端: + + +- `--audit-webhook-config-file` webhook 配置文件的路径。Webhook 配置文件实际上是一个 [kubeconfig][kubeconfig]。 +- `--audit-webhook-initial-backoff` 指定在第一次失败后重发请求等待的时间。随后的请求将以指数退避重试。 + +webhook 配置文件使用 kubeconfig 格式指定服务的远程地址和用于连接它的凭据。 + +### Batching + + + +log 和 webhook 后端都支持 batch。以 webhook 为例,以下是可用参数列表。要获取 log 后端的同样参数,请在参数名称中将 `webhook` 替换为 `log`。 +默认情况下,在 `webhook` 中启用 batch,在 `log` 中禁用 batch。同样,默认情况下,在 `webhook` 中启用限制,在 `log` 中禁用限制。 + +- `--audit-webhook-mode` 定义缓存策略,可选值如下: + - `batch` - 以批处理缓存事件和异步的过程。这是默认值。 + - `blocking` - 阻止 API server 处理每个单独事件的响应。 + + +以下参数仅用于 `batch` 模式。 + +- `--audit-webhook-batch-buffer-size` 定义 batch 之前要缓存的事件数。 + 如果传入事件的速率溢出缓存区,则会丢弃事件。 +- `--audit-webhook-batch-max-size` 定义一个 batch 中的最大事件数。 +- `--audit-webhook-batch-max-wait` 无条件 batch 队列中的事件前等待的最大事件。 +- `--audit-webhook-batch-throttle-qps` 每秒生成的最大 batch 平均值。 +- `--audit-webhook-batch-throttle-burst` 在达到允许的 QPS 前,同一时刻允许存在的最大 batch 生成数。 + + +#### 参数调整 + +需要设置参数以适应 apiserver 上的负载。 + + +例如,如果 kube-apiserver 每秒收到 100 个请求,并且每个请求仅在 `ResponseStarted` 和 `ResponseComplete` 阶段进行审计,则应该考虑每秒生成约 200 个审计事件。 +假设批处理中最多有 100 个事件,则应将限制级别设置为至少 2 个 QPS。 +假设后端最多需要 5 秒钟来写入事件,您应该设置缓冲区大小以容纳最多 5 秒的事件,即 10 个 batch,即 1000 个事件。 + + +但是,在大多数情况下,默认参数应该足够了,您不必手动设置它们。您可以查看 kube-apiserver 公开的以下 Prometheus 指标,并在日志中监控审计子系统的状态。 + +- `apiserver_audit_event_total` 包含所有暴露的审计事件数量的指标。 +- `apiserver_audit_error_total` 在暴露时由于发生错误而被丢弃的事件的数量。 + + +## 多集群配置 + +如果您通过 [aggregation layer][kube-aggregator] 对 Kubernetes API 进行扩展,那么您也可以为聚合的 apiserver 设置审计日志。 +想要这么做,您需要以上述的格式给聚合的 apiserver 配置参数,并且配置日志管道以采用审计日志。不同的 apiserver 可以配置不同的审计配置和策略。 + + +## 日志选择器示例 + +### 使用 fluentd 从日志文件中选择并且分发审计日志 + +[Fluentd][fluentd] 是一个开源的数据采集器,可以从统一的日志层中采集。 +在以下示例中,我们将使用 fluentd 来按照命名空间划分审计事件。 + +1. 在 kube-apiserver node 节点上安装 [fluentd, fluent-plugin-forest and fluent-plugin-rewrite-tag-filter][fluentd_install_doc] +1. 为 fluentd 创建一个配置文件 + + ```none + $ cat < /etc/fluentd/config + # fluentd conf runs in the same host with kube-apiserver + + @type tail + # audit log path of kube-apiserver + path /var/log/audit + pos_file /var/log/audit.pos + format json + time_key time + time_format %Y-%m-%dT%H:%M:%S.%N%z + tag audit + + + + #https://github.com/fluent/fluent-plugin-rewrite-tag-filter/issues/13 + type record_transformer + enable_ruby + + namespace ${record["objectRef"].nil? ? 
"none":(record["objectRef"]["namespace"].nil? ? "none":record["objectRef"]["namespace"])} + + + + + # route audit according to namespace element in context + @type rewrite_tag_filter + rewriterule1 namespace ^(.+) ${tag}.$1 + + + + @type record_transformer + remove_keys namespace + + + + @type forest + subtype file + remove_prefix audit + + + ``` + + +1. 启动 fluentd + + ```shell + $ fluentd -c /etc/fluentd/config -vv + ``` + + +1. 给 kube-apiserver 配置以下参数并启动: + + ```shell + --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/kube-audit --audit-log-format=json + ``` + + +1. 在 `/var/log/audit-*.log` 文件中检查不同命名空间的审计事件 + + +### 使用 logstash 采集并分发 webhook 后端的审计事件 + +[Logstash][logstash] 是一个开源的、服务器端的数据处理工具。在下面的示例中,我们将使用 logstash 采集 webhook 后端的审计事件,并且将来自不同用户的事件存入不同的文件。 + +1. 安装 [logstash][logstash_install_doc] +1. 为 logstash 创建配置文件 + + ```none + $ cat < /etc/logstash/config + input{ + http{ + #TODO, figure out a way to use kubeconfig file to authenticate to logstash + #https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html#plugins-inputs-http-ssl + port=>8888 + } + } + filter{ + split{ + # Webhook audit backend sends several events together with EventList + # split each event here. + field=>[items] + # We only need event subelement, remove others. + remove_field=>[headers, metadata, apiVersion, "@timestamp", kind, "@version", host] + } + mutate{ + rename => {items=>event} + } + } + output{ + file{ + # Audit events from different users will be saved into different files. + path=>"/var/log/kube-audit-%{[event][user][username]}/audit" + } + } + ``` + + +1. 启动 logstash + + ```shell + $ bin/logstash -f /etc/logstash/config --path.settings /etc/logstash/ + ``` + + +1. 为 kube-apiserver webhook 审计后端创建一个 [kubeconfig 文件](/docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig/) + + ```none + $ cat < /etc/kubernetes/audit-webhook-kubeconfig + apiVersion: v1 + clusters: + - cluster: + server: http://:8888 + name: logstash + contexts: + - context: + cluster: logstash + user: "" + name: default-context + current-context: default-context + kind: Config + preferences: {} + users: [] + EOF + ``` + + +1. 为 kube-apiserver 配置以下参数并启动: + + ```shell + --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-webhook-config-file=/etc/kubernetes/audit-webhook-kubeconfig + ``` + + +1. 在 logstash node 节点的 `/var/log/kube-audit-*/audit` 目录中检查审计事件 + +注意到,除了文件输出插件外,logstash 还有其它多种输出可以让用户路由不同的数据。例如,用户可以将审计事件发送给支持全文搜索和分析的 elasticsearch 插件。 + + +## 传统的审计 + +__注意:__ 传统审计已被弃用,自 1.8 版本以后默认禁用,并且将会在 1.12 版本中彻底移除。 +如果想要回退到传统的审计功能,请使用 [kube-apiserver][kube-apiserver] 中 feature gate 的 `AdvancedAuditing` 功能来禁用高级审核功能: + +``` +--feature-gates=AdvancedAuditing=false +``` + + +在传统格式中,每个审计文件条目包含两行: + +1. 请求行包含唯一 ID 以匹配响应和请求元数据,例如源 IP、请求用户、模拟信息和请求的资源等。 +2. 
响应行包含与请求行和响应代码相匹配的唯一 ID。 + +``` +2017-03-21T03:57:09.106841886-04:00 AUDIT: id="c939d2a7-1c37-4ef1-b2f7-4ba9b1e43b53" ip="127.0.0.1" method="GET" user="admin" groups="\"system:masters\",\"system:authenticated\"" as="" asgroups="" namespace="default" uri="/api/v1/namespaces/default/pods" +2017-03-21T03:57:09.108403639-04:00 AUDIT: id="c939d2a7-1c37-4ef1-b2f7-4ba9b1e43b53" response="200" +``` + + +### 配置 + +[Kube-apiserver][kube-apiserver] 提供以下选项,负责配置审核日志的位置和处理方式: + + +- `audit-log-path` - 使审计日志指向请求被记录到的文件,'-' 表示标准输出。 +- `audit-log-maxage` - 根据文件名中编码的时间戳指定保留旧审计日志文件的最大天数。 +- `audit-log-maxbackup` - 指定要保留的旧审计日志文件的最大数量。 +- `audit-log-maxsize` - 指定审核日志文件的最大大小(兆字节)。默认为100MB。 + + +如果审核日志文件已经存在,则 Kubernetes 会将新的审核日志附加到该文件。 +否则,Kubernetes 会在您在 `audit-log-path` 中指定的位置创建一个审计日志文件。 +如果审计日志文件超过了您在 `audit-log-maxsize` 中指定的大小,则 Kubernetes 将通过在文件名(在文件扩展名之前)附加当前时间戳并重新创建一个新的审计日志文件来重命名当前日志文件。 +Kubernetes 可能会在创建新的日志文件时删除旧的日志文件; 您可以通过指定 `audit-log-maxbackup` 和 `audit-log-maxage` 选项来配置保留多少文件以及它们的保留时间。 + +[kube-apiserver]: /docs/admin/kube-apiserver +[auditing-proposal]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/auditing.md +[auditing-api]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1beta1/types.go +[gce-audit-profile]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh#L735 +[kubeconfig]: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/ +[fluentd]: http://www.fluentd.org/ +[fluentd_install_doc]: http://docs.fluentd.org/v0.12/articles/quickstart#step1-installing-fluentd +[logstash]: https://www.elastic.co/products/logstash +[logstash_install_doc]: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html +[kube-aggregator]: /docs/concepts/api-extension/apiserver-aggregation + +{{% /capture %}} diff --git a/content/cn/docs/tasks/debug-application-cluster/debug-application.md b/content/zh/docs/tasks/debug-application-cluster/debug-application.md similarity index 100% rename from content/cn/docs/tasks/debug-application-cluster/debug-application.md rename to content/zh/docs/tasks/debug-application-cluster/debug-application.md diff --git a/content/cn/docs/tasks/debug-application-cluster/debug-cluster.md b/content/zh/docs/tasks/debug-application-cluster/debug-cluster.md similarity index 100% rename from content/cn/docs/tasks/debug-application-cluster/debug-cluster.md rename to content/zh/docs/tasks/debug-application-cluster/debug-cluster.md diff --git a/content/cn/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/zh/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md similarity index 100% rename from content/cn/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md rename to content/zh/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md diff --git a/content/cn/docs/tasks/debug-application-cluster/debug-stateful-set.md b/content/zh/docs/tasks/debug-application-cluster/debug-stateful-set.md similarity index 100% rename from content/cn/docs/tasks/debug-application-cluster/debug-stateful-set.md rename to content/zh/docs/tasks/debug-application-cluster/debug-stateful-set.md diff --git a/content/cn/docs/tasks/inject-data-application/commands.yaml b/content/zh/docs/tasks/inject-data-application/commands.yaml similarity index 100% rename from 
content/cn/docs/tasks/inject-data-application/commands.yaml rename to content/zh/docs/tasks/inject-data-application/commands.yaml diff --git a/content/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml b/content/zh/docs/tasks/inject-data-application/dapi-envars-container.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/dapi-envars-container.yaml rename to content/zh/docs/tasks/inject-data-application/dapi-envars-container.yaml diff --git a/content/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml b/content/zh/docs/tasks/inject-data-application/dapi-envars-pod.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/dapi-envars-pod.yaml rename to content/zh/docs/tasks/inject-data-application/dapi-envars-pod.yaml diff --git a/content/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml b/content/zh/docs/tasks/inject-data-application/dapi-volume-resources.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml rename to content/zh/docs/tasks/inject-data-application/dapi-volume-resources.yaml diff --git a/content/cn/docs/tasks/inject-data-application/dapi-volume.yaml b/content/zh/docs/tasks/inject-data-application/dapi-volume.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/dapi-volume.yaml rename to content/zh/docs/tasks/inject-data-application/dapi-volume.yaml diff --git a/content/cn/docs/tasks/inject-data-application/define-command-argument-container.md b/content/zh/docs/tasks/inject-data-application/define-command-argument-container.md similarity index 100% rename from content/cn/docs/tasks/inject-data-application/define-command-argument-container.md rename to content/zh/docs/tasks/inject-data-application/define-command-argument-container.md diff --git a/content/cn/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/zh/docs/tasks/inject-data-application/define-environment-variable-container.md similarity index 96% rename from content/cn/docs/tasks/inject-data-application/define-environment-variable-container.md rename to content/zh/docs/tasks/inject-data-application/define-environment-variable-container.md index 3b5a721ebc1b0..e422fc750d9cb 100644 --- a/content/cn/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/zh/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -63,7 +63,7 @@ content_template: templates/task {{% capture whatsnext %}} -* 有关环境变量的更多信息,请参阅[这里](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/)。 +* 有关环境变量的更多信息,请参阅[这里](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/)。 * 有关如何通过环境变量来使用Secret,请参阅[这里](/docs/user-guide/secrets/#using-secrets-as-environment-variables)。 * 关于[EnvVarSource](/docs/api-reference/{{< param "version" >}}/#envvarsource-v1-core)资源的信息。 diff --git a/content/cn/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/zh/docs/tasks/inject-data-application/distribute-credentials-secure.md similarity index 100% rename from content/cn/docs/tasks/inject-data-application/distribute-credentials-secure.md rename to content/zh/docs/tasks/inject-data-application/distribute-credentials-secure.md diff --git a/content/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md b/content/zh/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md 
similarity index 98% rename from content/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md rename to content/zh/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md index dd9ee032a95c4..a230f57afdfab 100644 --- a/content/cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md +++ b/content/zh/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md @@ -22,7 +22,7 @@ content_template: templates/task 有两种方式可以将Pod和Container字段呈现给运行中的容器: -* [环境变量](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) +* [环境变量](/docs/tasks/configure-pod-container/environment-variable-expose-pod-information/) * DownwardAPIVolumeFile 这两种呈现Pod和Container字段的方式都称为*Downward API*。 diff --git a/content/cn/docs/tasks/inject-data-application/envars.yaml b/content/zh/docs/tasks/inject-data-application/envars.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/envars.yaml rename to content/zh/docs/tasks/inject-data-application/envars.yaml diff --git a/content/cn/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md b/content/zh/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md similarity index 100% rename from content/cn/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md rename to content/zh/docs/tasks/inject-data-application/environment-variable-expose-pod-information.md diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-allow-db-merged.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-allow-db.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-allow-db.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-allow-db.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-allow-db.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-configmap.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-configmap.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-configmap.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-configmap.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-conflict-pod.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-conflict-preset.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-conflict-preset.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-conflict-preset.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-conflict-preset.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-merged.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-merged.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-merged.yaml rename 
to content/zh/docs/tasks/inject-data-application/podpreset-merged.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-multi-merged.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-multi-merged.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-multi-merged.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-multi-merged.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-pod.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-pod.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-pod.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-pod.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-preset.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-preset.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-preset.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-preset.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-proxy.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-proxy.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-proxy.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-proxy.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-replicaset-merged.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset-replicaset.yaml b/content/zh/docs/tasks/inject-data-application/podpreset-replicaset.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset-replicaset.yaml rename to content/zh/docs/tasks/inject-data-application/podpreset-replicaset.yaml diff --git a/content/cn/docs/tasks/inject-data-application/podpreset.md b/content/zh/docs/tasks/inject-data-application/podpreset.md similarity index 100% rename from content/cn/docs/tasks/inject-data-application/podpreset.md rename to content/zh/docs/tasks/inject-data-application/podpreset.md diff --git a/content/cn/docs/tasks/inject-data-application/secret-envars-pod.yaml b/content/zh/docs/tasks/inject-data-application/secret-envars-pod.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/secret-envars-pod.yaml rename to content/zh/docs/tasks/inject-data-application/secret-envars-pod.yaml diff --git a/content/cn/docs/tasks/inject-data-application/secret-pod.yaml b/content/zh/docs/tasks/inject-data-application/secret-pod.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/secret-pod.yaml rename to content/zh/docs/tasks/inject-data-application/secret-pod.yaml diff --git a/content/cn/docs/tasks/inject-data-application/secret.yaml b/content/zh/docs/tasks/inject-data-application/secret.yaml similarity index 100% rename from content/cn/docs/tasks/inject-data-application/secret.yaml rename to content/zh/docs/tasks/inject-data-application/secret.yaml diff --git a/content/zh/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/zh/docs/tasks/job/fine-parallel-processing-work-queue.md new file mode 100755 index 0000000000000..4d4d3432dca14 
--- /dev/null
+++ b/content/zh/docs/tasks/job/fine-parallel-processing-work-queue.md
@@ -0,0 +1,380 @@
+---
+cn-approvers:
+- linyouchong
+title: 使用工作队列进行精细的并行处理
+content_template: templates/task
+weight: 40
+---
+
+
+{{% capture overview %}}
+
+
+在这个例子中,我们会运行一个 Kubernetes Job,其中的 Pod 会运行多个并行工作进程。
+
+
+在这个例子中,当每个 Pod 被创建时,它会从一个任务队列中获取一个工作单元,处理它,然后重复,直到到达队列的尾部。
+
+
+
+下面是这个示例的步骤概述:
+
+
+1. **启动存储服务用于保存工作队列。** 在这个例子中,我们使用 Redis 来存储工作项。在上一个例子中,我们使用了 RabbitMQ。在这个例子中,由于 AMQP 不能为客户端提供一个良好的方法来检测一个有限长度的工作队列是否为空,我们使用了 Redis 和一个自定义的工作队列客户端库。在实践中,您可能会设置一个类似于 Redis 的存储库,并将其同时用于多项任务或其他事务的工作队列。
+
+1. **创建一个队列,然后向其中填充消息。** 每个消息表示一个将要被处理的工作任务。在这个例子中,消息只是一个我们将用于进行长度计算的整数。
+
+1. **启动一个 Job 对队列中的任务进行处理。** 这个 Job 启动了若干个 Pod。每个 Pod 从消息队列中取出一个工作任务,处理它,然后重复,直到到达队列的尾部。
+
+{{% /capture %}}
+
+{{< toc >}}
+
+{{% capture prerequisites %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+
+开始之前,请先熟悉基础知识:以非并行的方式运行 [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/)。
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+
+## 启动 Redis
+
+
+对于这个例子,为了简单起见,我们将启动一个单实例的 Redis。
+要了解如何部署可伸缩、高可用的 Redis,请查看 [Redis 样例](https://github.com/kubernetes/examples/tree/master/guestbook)。
+
+
+如果您在使用本文档库的源代码目录,您可以进入如下目录,然后启动一个临时的 Pod 用于运行 Redis,以及一个临时的 Service 以便我们能够找到这个 Pod:
+
+```shell
+$ cd content/en/examples/application/job/redis
+$ kubectl create -f ./redis-pod.yaml
+pod/redis-master created
+$ kubectl create -f ./redis-service.yaml
+service/redis created
+```
+
+
+如果您没有使用本文档库的源代码目录,您可以直接下载如下文件:
+
+- [`redis-pod.yaml`](/examples/application/job/redis/redis-pod.yaml)
+- [`redis-service.yaml`](/examples/application/job/redis/redis-service.yaml)
+- [`Dockerfile`](/examples/application/job/redis/Dockerfile)
+- [`job.yaml`](/examples/application/job/redis/job.yaml)
+- [`rediswq.py`](/examples/application/job/redis/rediswq.py)
+- [`worker.py`](/examples/application/job/redis/worker.py)
+
+
+## 使用任务填充队列
+
+
+现在,让我们往队列里添加一些“任务”。在这个例子中,我们的任务只是一些将被打印出来的字符串。
+
+
+启动一个临时的、可交互的 Pod 用于运行 Redis 命令行界面。
+
+```shell
+$ kubectl run -i --tty temp --image redis --command "/bin/sh"
+Waiting for pod default/redis2-c7h78 to be running, status is Pending, pod ready: false
+Hit enter for command prompt
+```
+
+
+现在按回车键,启动 redis 命令行界面,然后创建一个存有若干个工作项的列表。
+
+```
+# redis-cli -h redis
+redis:6379> rpush job2 "apple"
+(integer) 1
+redis:6379> rpush job2 "banana"
+(integer) 2
+redis:6379> rpush job2 "cherry"
+(integer) 3
+redis:6379> rpush job2 "date"
+(integer) 4
+redis:6379> rpush job2 "fig"
+(integer) 5
+redis:6379> rpush job2 "grape"
+(integer) 6
+redis:6379> rpush job2 "lemon"
+(integer) 7
+redis:6379> rpush job2 "melon"
+(integer) 8
+redis:6379> rpush job2 "orange"
+(integer) 9
+redis:6379> lrange job2 0 -1
+1) "apple"
+2) "banana"
+3) "cherry"
+4) "date"
+5) "fig"
+6) "grape"
+7) "lemon"
+8) "melon"
+9) "orange"
+```
+
+
+因此,这个键为 `job2` 的列表就是我们的工作队列。
+
+
+注意:如果您还没有正确地配置 Kube DNS,您可能需要将上面的第一步改为 `redis-cli -h $REDIS_SERVICE_HOST`。
+
+
+
+## 创建镜像
+
+
+现在我们已经准备好创建一个我们要运行的镜像。
+
+
+我们会使用一个带有 Redis 客户端的 Python 工作程序从消息队列中读出消息。
+
+
+这里提供了一个简单的 Redis 工作队列客户端库,叫 rediswq.py([下载](/examples/application/job/redis/rediswq.py))。
+
+
+Job 中每个 Pod 内的“工作程序”使用工作队列客户端库获取工作。如下:
+
+{{< codenew language="python" file="application/job/redis/worker.py" >}}
+
+
+如果您在使用本文档库的源代码目录,请将当前目录切换到 `content/en/examples/application/job/redis/`。否则,请点击链接下载 [`worker.py`](/examples/application/job/redis/worker.py)、[`rediswq.py`](/examples/application/job/redis/rediswq.py) 和 [`Dockerfile`](/examples/application/job/redis/Dockerfile)。然后构建镜像:
+ +```shell +docker build -t job-wq-2 . +``` + + +### Push 镜像 + + +对于 [Docker Hub](https://hub.docker.com/),请先用您的用户名给镜像打上标签,然后使用下面的命令 push 您的镜像到仓库。请将 `` 替换为您自己的用户名。 + +```shell +docker tag job-wq-2 /job-wq-2 +docker push /job-wq-2 +``` + + +您需要将镜像 push 到一个公共仓库或者 [配置集群访问您的私有仓库](/docs/concepts/containers/images/)。 + + +如果您使用的是 [Google Container +Registry](https://cloud.google.com/tools/container-registry/),请先用您的 project ID 给您的镜像打上标签,然后 push 到 GCR 。请将 `` 替换为您自己的 project ID + +```shell +docker tag job-wq-2 gcr.io//job-wq-2 +gcloud docker -- push gcr.io//job-wq-2 +``` + + +## 定义一个 Job + + +这是 job 定义: + +{{< codenew file="application/job/redis/job.yaml" >}} + + +请确保将 job 模板中的 `gcr.io/myproject` 更改为您自己的路径。 + + +在这个例子中,每个 pod 处理了队列中的多个项目,直到队列中没有项目时便退出。因为是由工作程序自行检测工作队列是否为空,并且 Job 控制器不知道工作队列的存在,所以依赖于工作程序在完成工作时发出信号。工作程序以成功退出的形式发出信号表示工作队列已经为空。所以,只要有任意一个工作程序成功退出,控制器就知道工作已经完成了,所有的 Pod 将很快会退出。因此,我们将 Job 的 completion count 设置为 1 。尽管如此,Job 控制器还是会等待其它 Pod 完成。 + + + +## 运行 Job + + +现在运行这个 Job : + +```shell +kubectl create -f ./job.yaml +``` + + +稍等片刻,然后检查这个 Job。 + +```shell +$ kubectl describe jobs/job-wq-2 +Name: job-wq-2 +Namespace: default +Selector: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f +Labels: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f + job-name=job-wq-2 +Annotations: +Parallelism: 2 +Completions: +Start Time: Mon, 11 Jan 2016 17:07:59 -0800 +Pods Statuses: 1 Running / 0 Succeeded / 0 Failed +Pod Template: + Labels: controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f + job-name=job-wq-2 + Containers: + c: + Image: gcr.io/exampleproject/job-wq-2 + Port: + Environment: + Mounts: + Volumes: +Events: + FirstSeen LastSeen Count From SubobjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 33s 33s 1 {job-controller } Normal SuccessfulCreate Created pod: job-wq-2-lglf8 + + +$ kubectl logs pods/job-wq-2-7r7b2 +Worker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f +Initial queue state: empty=False +Working on banana +Working on date +Working on lemon +``` + + +您可以看到,其中的一个 pod 处理了若干个工作单元。 + +{{% /capture %}} + +{{% capture discussion %}} + + +## 其它 + + +如果您不方便运行一个队列服务或者修改您的容器用于运行一个工作队列,您可以考虑其它的 [job 模式](/docs/concepts/jobs/run-to-completion-finite-workloads/#job-patterns)。 + + +如果您有连续的后台处理业务,那么可以考虑使用 `replicationController` 来运行您的后台业务,和运行一个类似 [https://github.com/resque/resque](https://github.com/resque/resque) 的后台处理库。 + +{{% /capture %}} diff --git a/content/cn/docs/tasks/manage-daemon/rollback-daemon-set.md b/content/zh/docs/tasks/manage-daemon/rollback-daemon-set.md similarity index 100% rename from content/cn/docs/tasks/manage-daemon/rollback-daemon-set.md rename to content/zh/docs/tasks/manage-daemon/rollback-daemon-set.md diff --git a/content/cn/docs/tasks/manage-gpus/scheduling-gpus.md b/content/zh/docs/tasks/manage-gpus/scheduling-gpus.md similarity index 100% rename from content/cn/docs/tasks/manage-gpus/scheduling-gpus.md rename to content/zh/docs/tasks/manage-gpus/scheduling-gpus.md diff --git a/content/cn/docs/tasks/manage-hugepages/scheduling-hugepages.md b/content/zh/docs/tasks/manage-hugepages/scheduling-hugepages.md similarity index 100% rename from content/cn/docs/tasks/manage-hugepages/scheduling-hugepages.md rename to content/zh/docs/tasks/manage-hugepages/scheduling-hugepages.md diff --git a/content/cn/docs/tasks/run-application/deployment-patch-demo.yaml b/content/zh/docs/tasks/run-application/deployment-patch-demo.yaml similarity index 100% rename from 
content/cn/docs/tasks/run-application/deployment-patch-demo.yaml rename to content/zh/docs/tasks/run-application/deployment-patch-demo.yaml diff --git a/content/cn/docs/tasks/run-application/deployment-scale.yaml b/content/zh/docs/tasks/run-application/deployment-scale.yaml similarity index 100% rename from content/cn/docs/tasks/run-application/deployment-scale.yaml rename to content/zh/docs/tasks/run-application/deployment-scale.yaml diff --git a/content/cn/docs/tasks/run-application/deployment-update.yaml b/content/zh/docs/tasks/run-application/deployment-update.yaml similarity index 100% rename from content/cn/docs/tasks/run-application/deployment-update.yaml rename to content/zh/docs/tasks/run-application/deployment-update.yaml diff --git a/content/cn/docs/tasks/run-application/deployment.yaml b/content/zh/docs/tasks/run-application/deployment.yaml similarity index 100% rename from content/cn/docs/tasks/run-application/deployment.yaml rename to content/zh/docs/tasks/run-application/deployment.yaml diff --git a/content/cn/docs/tasks/run-application/gce-volume.yaml b/content/zh/docs/tasks/run-application/gce-volume.yaml similarity index 100% rename from content/cn/docs/tasks/run-application/gce-volume.yaml rename to content/zh/docs/tasks/run-application/gce-volume.yaml diff --git a/content/cn/docs/tasks/run-application/mysql-configmap.yaml b/content/zh/docs/tasks/run-application/mysql-configmap.yaml similarity index 100% rename from content/cn/docs/tasks/run-application/mysql-configmap.yaml rename to content/zh/docs/tasks/run-application/mysql-configmap.yaml diff --git a/content/cn/docs/tasks/run-application/mysql-deployment.yaml b/content/zh/docs/tasks/run-application/mysql-deployment.yaml similarity index 100% rename from content/cn/docs/tasks/run-application/mysql-deployment.yaml rename to content/zh/docs/tasks/run-application/mysql-deployment.yaml diff --git a/content/cn/docs/tasks/run-application/mysql-services.yaml b/content/zh/docs/tasks/run-application/mysql-services.yaml similarity index 100% rename from content/cn/docs/tasks/run-application/mysql-services.yaml rename to content/zh/docs/tasks/run-application/mysql-services.yaml diff --git a/content/cn/docs/tasks/run-application/mysql-statefulset.yaml b/content/zh/docs/tasks/run-application/mysql-statefulset.yaml similarity index 100% rename from content/cn/docs/tasks/run-application/mysql-statefulset.yaml rename to content/zh/docs/tasks/run-application/mysql-statefulset.yaml diff --git a/content/cn/docs/tasks/run-application/rolling-update-replication-controller.md b/content/zh/docs/tasks/run-application/rolling-update-replication-controller.md similarity index 100% rename from content/cn/docs/tasks/run-application/rolling-update-replication-controller.md rename to content/zh/docs/tasks/run-application/rolling-update-replication-controller.md diff --git a/content/cn/docs/tasks/run-application/run-single-instance-stateful-application.md b/content/zh/docs/tasks/run-application/run-single-instance-stateful-application.md similarity index 100% rename from content/cn/docs/tasks/run-application/run-single-instance-stateful-application.md rename to content/zh/docs/tasks/run-application/run-single-instance-stateful-application.md diff --git a/content/cn/docs/tasks/run-application/run-stateless-application-deployment.md b/content/zh/docs/tasks/run-application/run-stateless-application-deployment.md similarity index 100% rename from content/cn/docs/tasks/run-application/run-stateless-application-deployment.md rename to 
content/zh/docs/tasks/run-application/run-stateless-application-deployment.md diff --git a/content/cn/docs/tasks/run-application/scale-stateful-set.md b/content/zh/docs/tasks/run-application/scale-stateful-set.md similarity index 100% rename from content/cn/docs/tasks/run-application/scale-stateful-set.md rename to content/zh/docs/tasks/run-application/scale-stateful-set.md diff --git a/content/cn/docs/tasks/tls/certificate-rotation.md b/content/zh/docs/tasks/tls/certificate-rotation.md similarity index 100% rename from content/cn/docs/tasks/tls/certificate-rotation.md rename to content/zh/docs/tasks/tls/certificate-rotation.md diff --git a/content/cn/docs/templates/index.md b/content/zh/docs/templates/index.md similarity index 100% rename from content/cn/docs/templates/index.md rename to content/zh/docs/templates/index.md diff --git a/content/cn/docs/tutorials/configuration/configure-redis-using-configmap.md b/content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md similarity index 100% rename from content/cn/docs/tutorials/configuration/configure-redis-using-configmap.md rename to content/zh/docs/tutorials/configuration/configure-redis-using-configmap.md diff --git a/content/cn/docs/tutorials/kubernetes-basics/_index.html b/content/zh/docs/tutorials/kubernetes-basics/_index.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/_index.html rename to content/zh/docs/tutorials/kubernetes-basics/_index.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/cluster-interactive.html b/content/zh/docs/tutorials/kubernetes-basics/cluster-interactive.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/cluster-interactive.html rename to content/zh/docs/tutorials/kubernetes-basics/cluster-interactive.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/cluster-intro.html b/content/zh/docs/tutorials/kubernetes-basics/cluster-intro.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/cluster-intro.html rename to content/zh/docs/tutorials/kubernetes-basics/cluster-intro.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/deploy-interactive.html b/content/zh/docs/tutorials/kubernetes-basics/deploy-interactive.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/deploy-interactive.html rename to content/zh/docs/tutorials/kubernetes-basics/deploy-interactive.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/deploy-intro.html b/content/zh/docs/tutorials/kubernetes-basics/deploy-intro.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/deploy-intro.html rename to content/zh/docs/tutorials/kubernetes-basics/deploy-intro.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/explore-interactive.html b/content/zh/docs/tutorials/kubernetes-basics/explore-interactive.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/explore-interactive.html rename to content/zh/docs/tutorials/kubernetes-basics/explore-interactive.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/explore-intro.html b/content/zh/docs/tutorials/kubernetes-basics/explore-intro.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/explore-intro.html rename to content/zh/docs/tutorials/kubernetes-basics/explore-intro.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/expose-interactive.html 
b/content/zh/docs/tutorials/kubernetes-basics/expose-interactive.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/expose-interactive.html rename to content/zh/docs/tutorials/kubernetes-basics/expose-interactive.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/expose-intro.html b/content/zh/docs/tutorials/kubernetes-basics/expose-intro.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/expose-intro.html rename to content/zh/docs/tutorials/kubernetes-basics/expose-intro.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/scale-interactive.html b/content/zh/docs/tutorials/kubernetes-basics/scale-interactive.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/scale-interactive.html rename to content/zh/docs/tutorials/kubernetes-basics/scale-interactive.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/scale-intro.html b/content/zh/docs/tutorials/kubernetes-basics/scale-intro.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/scale-intro.html rename to content/zh/docs/tutorials/kubernetes-basics/scale-intro.html diff --git a/content/zh/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/zh/docs/tutorials/kubernetes-basics/scale/scale-intro.html
new file mode 100644
index 0000000000000..aeff0243828bd
--- /dev/null
+++ b/content/zh/docs/tutorials/kubernetes-basics/scale/scale-intro.html
@@ -0,0 +1,136 @@
---
title: Running Multiple Instances of Your App
weight: 10
---

Objectives

  • Scale an app using kubectl

Scaling an application


In the previous modules we created a Deployment and then exposed it publicly via a Service. The Deployment created only one Pod to run our application. When traffic increases, we need to scale the application to keep up with user demand.


Scaling is accomplished by changing the number of replicas in a Deployment.
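
The replica count is simply a field in the Deployment's spec, so you can read it back at any time. A minimal sketch, assuming a Deployment named kubernetes-bootcamp (the name here is only a placeholder, not taken from this patch):

```console
# List Deployments; the DESIRED column reflects spec.replicas
$ kubectl get deployments
# Read the replica count straight from the spec (placeholder name)
$ kubectl get deployment kubernetes-bootcamp -o jsonpath='{.spec.replicas}'
```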


Summary:

  • Scaling a Deployment

When running the kubectl run command, you can set the number of replicas for the Deployment from the start with the --replicas parameter.
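
For example, a sketch of creating a Deployment with several replicas up front; the name, image, and port below are placeholders, and --replicas was a supported kubectl run flag at the time this tutorial was written:

```console
# Start a Deployment with four replicas instead of one (placeholder name/image)
$ kubectl run kubernetes-bootcamp \
    --image=gcr.io/google-samples/kubernetes-bootcamp:v1 \
    --port=8080 --replicas=4
```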


Scaling overview


Scaling up a Deployment creates new Pods, which are scheduled onto Nodes with available resources; scaling down reduces the number of Pods to the desired state. Kubernetes also supports autoscaling of Pods, but that is outside the scope of this tutorial. Scaling to zero is also possible: it terminates all Pods of the Deployment.
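
As a sketch of both directions, again assuming a placeholder Deployment named kubernetes-bootcamp:

```console
# Grow the Deployment to four replicas
$ kubectl scale deployments/kubernetes-bootcamp --replicas=4
# One Pod per replica, each with its own IP
$ kubectl get pods -o wide
```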


Running multiple instances of an application requires a way to distribute traffic among them. Services have an integrated load balancer that distributes network traffic to all externally exposed Pods. Services continuously monitor the running Pods through endpoints, ensuring that traffic is sent only to available Pods.
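
One way to observe this, assuming a Service named kubernetes-bootcamp was previously created with kubectl expose (a placeholder name):

```console
# Describe the Service to see its current endpoints
$ kubectl describe services/kubernetes-bootcamp
# Endpoints track the address of every ready Pod behind the Service
$ kubectl get endpoints kubernetes-bootcamp
```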


Note: Scaling is accomplished by changing the number of replicas in a Deployment.


Once you have multiple instances of an application running, you can perform rolling updates without downtime. We will cover that in the following modules. For now, let's use the online terminal to walk through scaling an application.
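
Scaling down is the same command with a smaller count; a sketch with the same placeholder Deployment name:

```console
# Shrink back to two replicas; the surplus Pods are terminated
$ kubectl scale deployments/kubernetes-bootcamp --replicas=2
$ kubectl get pods
```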

diff --git a/content/cn/docs/tutorials/kubernetes-basics/update-interactive.html b/content/zh/docs/tutorials/kubernetes-basics/update-interactive.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/update-interactive.html rename to content/zh/docs/tutorials/kubernetes-basics/update-interactive.html diff --git a/content/cn/docs/tutorials/kubernetes-basics/update-intro.html b/content/zh/docs/tutorials/kubernetes-basics/update-intro.html similarity index 100% rename from content/cn/docs/tutorials/kubernetes-basics/update-intro.html rename to content/zh/docs/tutorials/kubernetes-basics/update-intro.html diff --git a/content/cn/docs/tutorials/object-management-kubectl/imperative-object-management-command.md b/content/zh/docs/tutorials/object-management-kubectl/imperative-object-management-command.md similarity index 100% rename from content/cn/docs/tutorials/object-management-kubectl/imperative-object-management-command.md rename to content/zh/docs/tutorials/object-management-kubectl/imperative-object-management-command.md diff --git a/content/cn/docs/tutorials/object-management-kubectl/object-management.md b/content/zh/docs/tutorials/object-management-kubectl/object-management.md similarity index 100% rename from content/cn/docs/tutorials/object-management-kubectl/object-management.md rename to content/zh/docs/tutorials/object-management-kubectl/object-management.md diff --git a/content/cn/docs/tutorials/services/source-ip.md b/content/zh/docs/tutorials/services/source-ip.md
similarity index 95%
rename from content/cn/docs/tutorials/services/source-ip.md
rename to content/zh/docs/tutorials/services/source-ip.md
index c4b4bd34e9e17..77da6af9d071c 100644
--- a/content/cn/docs/tutorials/services/source-ip.md
+++ b/content/zh/docs/tutorials/services/source-ip.md
@@ -116,7 +116,7 @@ command=GET

 ## Source IP for Services with Type=NodePort

-As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/user-guide/services/#nodeport) are source NAT'd by default. You can test this by creating a `NodePort` Service:
+As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/user-guide/services/#type-nodeport) are source NAT'd by default. You can test this by creating a `NodePort` Service:

 ```console
 $ kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
@@ -210,7 +210,7 @@ client_address=104.132.1.79

 ## Source IP for Services with Type=LoadBalancer

-As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/user-guide/services/#nodeport) are source NAT'd by default, because all Kubernetes nodes in the `Ready` state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies them to a node that *does* have an endpoint, replacing the source IP on the packet with the IP of that node (as described in the previous section).
+As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/user-guide/services/#type-nodeport) are source NAT'd by default, because all Kubernetes nodes in the `Ready` state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies them to a node that *does* have an endpoint, replacing the source IP on the packet with the IP of that node (as described in the previous section).

 You can test this by exposing the source-ip-app through a loadbalancer.
diff --git a/content/cn/docs/tutorials/stateful-application/Dockerfile b/content/zh/docs/tutorials/stateful-application/Dockerfile similarity index 100% rename from content/cn/docs/tutorials/stateful-application/Dockerfile rename to content/zh/docs/tutorials/stateful-application/Dockerfile diff --git a/content/cn/docs/tutorials/stateful-application/FETCH_HEAD b/content/zh/docs/tutorials/stateful-application/FETCH_HEAD similarity index 100% rename from content/cn/docs/tutorials/stateful-application/FETCH_HEAD rename to content/zh/docs/tutorials/stateful-application/FETCH_HEAD diff --git a/content/cn/docs/tutorials/stateful-application/basic-stateful-set.md b/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md
similarity index 99%
rename from content/cn/docs/tutorials/stateful-application/basic-stateful-set.md
rename to content/zh/docs/tutorials/stateful-application/basic-stateful-set.md
index 9a5bfd9ed3af7..81a6b1b3566cb 100644
--- a/content/cn/docs/tutorials/stateful-application/basic-stateful-set.md
+++ b/content/zh/docs/tutorials/stateful-application/basic-stateful-set.md
@@ -107,9 +107,10 @@ web-1     1/1       Running   0          18s

 Note that the `web-1` Pod is not launched until the `web-0` Pod is [Running and Ready](/docs/user-guide/pod-states).

 ## Pods in a StatefulSet
diff --git a/content/cn/docs/tutorials/stateful-application/cassandra-service.yaml b/content/zh/docs/tutorials/stateful-application/cassandra-service.yaml similarity index 100% rename from content/cn/docs/tutorials/stateful-application/cassandra-service.yaml rename to content/zh/docs/tutorials/stateful-application/cassandra-service.yaml diff --git a/content/cn/docs/tutorials/stateful-application/cassandra-statefulset.yaml b/content/zh/docs/tutorials/stateful-application/cassandra-statefulset.yaml similarity index 100% rename from content/cn/docs/tutorials/stateful-application/cassandra-statefulset.yaml rename to content/zh/docs/tutorials/stateful-application/cassandra-statefulset.yaml diff --git a/content/cn/docs/tutorials/stateful-application/cassandra.md b/content/zh/docs/tutorials/stateful-application/cassandra.md similarity index 100% rename from content/cn/docs/tutorials/stateful-application/cassandra.md rename to content/zh/docs/tutorials/stateful-application/cassandra.md diff --git a/content/cn/docs/tutorials/stateful-application/dev b/content/zh/docs/tutorials/stateful-application/dev similarity index 100% rename from content/cn/docs/tutorials/stateful-application/dev rename to content/zh/docs/tutorials/stateful-application/dev diff --git a/content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
similarity index 98%
rename from content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
rename to content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
index 5d521f02c71e2..030d2c24e5428 100644
--- a/content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
+++ b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
@@ -15,7 +15,7 @@ approvers:

 * [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) define persistent disks whose lifecycle is not tied to Pods.
 * [Services](https://kubernetes.io/docs/concepts/services-networking/service/) let Pods find other Pods.
-* [External Load Balancers](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) expose Services externally.
+* [External Load Balancers](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer) expose Services externally.
 * [Deployments](http://kubernetes.io/docs/user-guide/deployments/) make sure Pods stay up and running.
 * [Secrets](http://kubernetes.io/docs/user-guide/secrets/) store sensitive password data.
@@ -66,7 +66,7 @@ kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/m

 Kubernetes is modular by nature and can run in a wide variety of environments, but not all clusters are the same. Here are the requirements for this example:

 * Kubernetes 1.2 or later is needed for newer features such as PV Claims and Deployments. Run `kubectl version` to check your cluster version.
 * [Cluster DNS](https://github.com/kubernetes/dns) is used for service discovery.
-* An [external load balancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) is used to access WordPress.
+* An [external load balancer](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer) is used to access WordPress.
 * [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) are used. You must create the Persistent Volumes your cluster needs. This example shows how to create two types of volume, but any volume type is sufficient.
diff --git a/content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/local-volumes.yaml b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/local-volumes.yaml similarity index 100% rename from content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/local-volumes.yaml rename to content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/local-volumes.yaml diff --git a/content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/mysql-deployment.yaml b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/mysql-deployment.yaml similarity index 100% rename from content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/mysql-deployment.yaml rename to content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/mysql-deployment.yaml diff --git a/content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/wordpress-deployment.yaml b/content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/wordpress-deployment.yaml similarity index 100% rename from content/cn/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/wordpress-deployment.yaml rename to content/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/wordpress-deployment.yaml diff --git a/content/cn/docs/tutorials/stateful-application/web.yaml b/content/zh/docs/tutorials/stateful-application/web.yaml similarity index 100% rename from content/cn/docs/tutorials/stateful-application/web.yaml rename to content/zh/docs/tutorials/stateful-application/web.yaml diff --git a/content/cn/docs/tutorials/stateful-application/webp.yaml b/content/zh/docs/tutorials/stateful-application/webp.yaml similarity index 100% rename from content/cn/docs/tutorials/stateful-application/webp.yaml rename to content/zh/docs/tutorials/stateful-application/webp.yaml diff --git a/content/cn/docs/tutorials/stateful-application/zookeeper.md b/content/zh/docs/tutorials/stateful-application/zookeeper.md similarity index 100% rename from content/cn/docs/tutorials/stateful-application/zookeeper.md rename to content/zh/docs/tutorials/stateful-application/zookeeper.md diff --git a/content/cn/docs/tutorials/stateful-application/zookeeper.yaml b/content/zh/docs/tutorials/stateful-application/zookeeper.yaml similarity index 100% rename from content/cn/docs/tutorials/stateful-application/zookeeper.yaml rename to content/zh/docs/tutorials/stateful-application/zookeeper.yaml diff --git a/content/cn/docs/user-guide/bad-nginx-deployment.yaml b/content/zh/docs/user-guide/bad-nginx-deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/bad-nginx-deployment.yaml rename to content/zh/docs/user-guide/bad-nginx-deployment.yaml diff --git a/content/cn/docs/user-guide/curlpod.yaml b/content/zh/docs/user-guide/curlpod.yaml similarity index 100% rename from content/cn/docs/user-guide/curlpod.yaml rename to content/zh/docs/user-guide/curlpod.yaml diff --git
a/content/cn/docs/user-guide/deployment.yaml b/content/zh/docs/user-guide/deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/deployment.yaml rename to content/zh/docs/user-guide/deployment.yaml diff --git a/content/cn/docs/user-guide/docker-cli-to-kubectl.md b/content/zh/docs/user-guide/docker-cli-to-kubectl.md similarity index 100% rename from content/cn/docs/user-guide/docker-cli-to-kubectl.md rename to content/zh/docs/user-guide/docker-cli-to-kubectl.md diff --git a/content/cn/docs/user-guide/ingress.yaml b/content/zh/docs/user-guide/ingress.yaml similarity index 100% rename from content/cn/docs/user-guide/ingress.yaml rename to content/zh/docs/user-guide/ingress.yaml diff --git a/content/cn/docs/user-guide/job.yaml b/content/zh/docs/user-guide/job.yaml similarity index 100% rename from content/cn/docs/user-guide/job.yaml rename to content/zh/docs/user-guide/job.yaml diff --git a/content/cn/docs/user-guide/jsonpath.md b/content/zh/docs/user-guide/jsonpath.md similarity index 100% rename from content/cn/docs/user-guide/jsonpath.md rename to content/zh/docs/user-guide/jsonpath.md diff --git a/content/cn/docs/user-guide/kubectl-overview.md b/content/zh/docs/user-guide/kubectl-overview.md similarity index 100% rename from content/cn/docs/user-guide/kubectl-overview.md rename to content/zh/docs/user-guide/kubectl-overview.md diff --git a/content/cn/docs/user-guide/multi-pod.yaml b/content/zh/docs/user-guide/multi-pod.yaml similarity index 100% rename from content/cn/docs/user-guide/multi-pod.yaml rename to content/zh/docs/user-guide/multi-pod.yaml diff --git a/content/cn/docs/user-guide/new-nginx-deployment.yaml b/content/zh/docs/user-guide/new-nginx-deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/new-nginx-deployment.yaml rename to content/zh/docs/user-guide/new-nginx-deployment.yaml diff --git a/content/cn/docs/user-guide/nginx-app.yaml b/content/zh/docs/user-guide/nginx-app.yaml similarity index 100% rename from content/cn/docs/user-guide/nginx-app.yaml rename to content/zh/docs/user-guide/nginx-app.yaml diff --git a/content/cn/docs/user-guide/nginx-deployment.yaml b/content/zh/docs/user-guide/nginx-deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/nginx-deployment.yaml rename to content/zh/docs/user-guide/nginx-deployment.yaml diff --git a/content/cn/docs/user-guide/nginx-init-containers.yaml b/content/zh/docs/user-guide/nginx-init-containers.yaml similarity index 100% rename from content/cn/docs/user-guide/nginx-init-containers.yaml rename to content/zh/docs/user-guide/nginx-init-containers.yaml diff --git a/content/cn/docs/user-guide/nginx-lifecycle-deployment.yaml b/content/zh/docs/user-guide/nginx-lifecycle-deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/nginx-lifecycle-deployment.yaml rename to content/zh/docs/user-guide/nginx-lifecycle-deployment.yaml diff --git a/content/cn/docs/user-guide/nginx-probe-deployment.yaml b/content/zh/docs/user-guide/nginx-probe-deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/nginx-probe-deployment.yaml rename to content/zh/docs/user-guide/nginx-probe-deployment.yaml diff --git a/content/cn/docs/user-guide/nginx-secure-app.yaml b/content/zh/docs/user-guide/nginx-secure-app.yaml similarity index 100% rename from content/cn/docs/user-guide/nginx-secure-app.yaml rename to content/zh/docs/user-guide/nginx-secure-app.yaml diff --git a/content/cn/docs/user-guide/nginx-svc.yaml b/content/zh/docs/user-guide/nginx-svc.yaml 
similarity index 100% rename from content/cn/docs/user-guide/nginx-svc.yaml rename to content/zh/docs/user-guide/nginx-svc.yaml diff --git a/content/cn/docs/user-guide/pod-w-message.yaml b/content/zh/docs/user-guide/pod-w-message.yaml similarity index 100% rename from content/cn/docs/user-guide/pod-w-message.yaml rename to content/zh/docs/user-guide/pod-w-message.yaml diff --git a/content/cn/docs/user-guide/pod.yaml b/content/zh/docs/user-guide/pod.yaml similarity index 100% rename from content/cn/docs/user-guide/pod.yaml rename to content/zh/docs/user-guide/pod.yaml diff --git a/content/cn/docs/user-guide/redis-deployment.yaml b/content/zh/docs/user-guide/redis-deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/redis-deployment.yaml rename to content/zh/docs/user-guide/redis-deployment.yaml diff --git a/content/cn/docs/user-guide/redis-resource-deployment.yaml b/content/zh/docs/user-guide/redis-resource-deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/redis-resource-deployment.yaml rename to content/zh/docs/user-guide/redis-resource-deployment.yaml diff --git a/content/cn/docs/user-guide/redis-secret-deployment.yaml b/content/zh/docs/user-guide/redis-secret-deployment.yaml similarity index 100% rename from content/cn/docs/user-guide/redis-secret-deployment.yaml rename to content/zh/docs/user-guide/redis-secret-deployment.yaml diff --git a/content/cn/docs/user-guide/run-my-nginx.yaml b/content/zh/docs/user-guide/run-my-nginx.yaml similarity index 100% rename from content/cn/docs/user-guide/run-my-nginx.yaml rename to content/zh/docs/user-guide/run-my-nginx.yaml diff --git a/content/cn/docs/whatisk8s.md b/content/zh/docs/whatisk8s.md similarity index 100% rename from content/cn/docs/whatisk8s.md rename to content/zh/docs/whatisk8s.md diff --git a/layouts/shortcodes/language-repos-list.html b/layouts/shortcodes/language-repos-list.html
deleted file mode 100644
index 293efa1ffc942..0000000000000
--- a/layouts/shortcodes/language-repos-list.html
+++ /dev/null
@@ -1,38 +0,0 @@
-{{- $languages := .Site.Home.AllTranslations }}
- Language | Language code | Repository
-{{- range $languages.ByWeight }}
-  {{- $name := .Language.LanguageName }}
-  {{- $code := string .Language }}
-  {{- $repo := printf "https://github.com/%s" (index .Site.Data.repos $code) }}
-  {{ $name }} | {{ $code }} | {{ $repo }}
-{{- end }}
\ No newline at end of file