diff --git a/content/ko/docs/setup/independent/create-cluster-kubeadm.md b/content/ko/docs/setup/independent/create-cluster-kubeadm.md
index f658956e1b6b1..806bf369d1e88 100644
--- a/content/ko/docs/setup/independent/create-cluster-kubeadm.md
+++ b/content/ko/docs/setup/independent/create-cluster-kubeadm.md
@@ -28,7 +28,7 @@ scope.
 You can install _kubeadm_ very easily on operating systems that support
 installing deb or rpm packages. The responsible SIG for kubeadm,
 [SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), provides these packages pre-built for you,
-but you may also on other OSes.
+but you may also build them from source for other OSes.

 ### kubeadm Maturity

@@ -315,14 +315,10 @@ Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net
 to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
 please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).

-```shell
-kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
-```
-Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`, but until `flannel v0.11.0` is released
-you need to use the following manifest that supports all the architectures:
+Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`.

 ```shell
-kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
+kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
 ```

 For more information about `flannel`, see [the CoreOS flannel repository on GitHub
diff --git a/content/ko/docs/setup/independent/high-availability.md b/content/ko/docs/setup/independent/high-availability.md
index d7cde10d82a08..64b1ee93ce3e8 100644
--- a/content/ko/docs/setup/independent/high-availability.md
+++ b/content/ko/docs/setup/independent/high-availability.md
@@ -391,6 +391,7 @@ done
 ```sh
 kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
 kubeadm alpha phase controlplane all --config kubeadm-config.yaml
+kubeadm alpha phase kubelet config annotate-cri --config kubeadm-config.yaml
 kubeadm alpha phase mark-master --config kubeadm-config.yaml
 ```
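A quick sanity check after running a `kubeadm alpha phase` sequence like the one above is to confirm that the static control-plane pods came up. This is only a sketch, and it assumes you run it on the master with the admin kubeconfig at kubeadm's default path:

```shell
# Assumption: /etc/kubernetes/admin.conf is the kubeadm-generated admin kubeconfig.
export KUBECONFIG=/etc/kubernetes/admin.conf

# The control-plane components run as static pods in kube-system;
# they should report Running once the phases above have completed.
kubectl get pods -n kube-system -o wide
```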
diff --git a/content/ko/docs/setup/independent/install-kubeadm.md b/content/ko/docs/setup/independent/install-kubeadm.md
index df670972c5c93..2b4413ea8d01b 100644
--- a/content/ko/docs/setup/independent/install-kubeadm.md
+++ b/content/ko/docs/setup/independent/install-kubeadm.md
@@ -23,11 +23,11 @@ see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-clust
   - HypriotOS v1.0.1+
   - Container Linux (tested with 1800.6.0)
 * 2 GB or more of RAM per machine (any less will leave little room for your apps)
-* 2 CPUs or more 
+* 2 CPUs or more
 * Full network connectivity between all machines in the cluster (public or private network is fine)
 * Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-the-mac-address-and-product-uuid-are-unique-for-every-node) for more details.
 * Certain ports are open on your machines. See [here](#check-required-ports) for more details.
-* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. 
+* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.

 {{% /capture %}}

@@ -87,12 +87,12 @@ The container runtime used by default is Docker, which is enabled through the bu

 Other CRI-based runtimes include:

-- [cri-containerd](https://github.com/containerd/cri-containerd)
+- [containerd](https://github.com/containerd/cri) (CRI plugin built into containerd)
 - [cri-o](https://github.com/kubernetes-incubator/cri-o)
 - [frakti](https://github.com/kubernetes/frakti)
 - [rkt](https://github.com/kubernetes-incubator/rktlet)

-Refer to the [CRI installation instructions](/docs/setup/cri.md) for more information.
+Refer to the [CRI installation instructions](/docs/setup/cri) for more information.

 ## Installing kubeadm, kubelet and kubectl

@@ -105,10 +105,10 @@ You will install these packages on all of your machines:

 * `kubectl`: the command line util to talk to your cluster.

-kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will 
-need to ensure they match the version of the Kubernetes control panel you want 
+kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will
+need to ensure they match the version of the Kubernetes control plane you want
 kubeadm to install for you. If you do not, there is a risk of a version skew occurring that
-can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the 
+can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the
 kubelet and the control plane is supported, but the kubelet version may never exceed the API
 server version. For example, kubelets running 1.7.0 should be fully compatible with a 1.8.0 API server,
 but not vice versa.

@@ -119,7 +119,7 @@ This is because kubeadm and Kubernetes require
 [special attention to upgrade](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/).
 {{< /note >}}

-For more information on version skews, please read our 
+For more information on version skews, please read our
 [version skew policy](/docs/setup/independent/create-cluster-kubeadm/#version-skew-policy).

 {{< tabs name="k8s_install" >}}
@@ -128,7 +128,7 @@
 apt-get update && apt-get install -y apt-transport-https curl
 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
 cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
-deb http://apt.kubernetes.io/ kubernetes-xenial main
+deb https://apt.kubernetes.io/ kubernetes-xenial main
 EOF
 apt-get update
 apt-get install -y kubelet kubeadm kubectl
@@ -147,18 +147,24 @@ repo_gpgcheck=1
 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
 exclude=kube*
 EOF
+
+# Set SELinux in permissive mode (effectively disabling it)
 setenforce 0
+sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
+
 yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
+
 systemctl enable kubelet && systemctl start kubelet
 ```

 **Note:**

-  - Disabling SELinux by running `setenforce 0` is required to allow containers to access the host filesystem, which is required by pod networks for example.
+  - Setting SELinux in permissive mode by running `setenforce 0` and `sed ...` effectively disables it.
+    This is required to allow containers to access the host filesystem, which is needed by pod networks for example.
     You have to do this until SELinux support is improved in the kubelet.
   - Some users on RHEL/CentOS 7 have reported issues with traffic
     being routed incorrectly due to iptables being bypassed.
     You should ensure `net.bridge.bridge-nf-call-iptables` is set to 1 in your `sysctl` config, e.g.
-    
+
 ```bash
 cat <<EOF >  /etc/sysctl.d/k8s.conf
 net.bridge.bridge-nf-call-ip6tables = 1
@@ -245,7 +251,3 @@ If you are running into difficulties with kubeadm, please consult our [troublesh

 * [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/)

 {{% /capture %}}
-
-
-
-
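After the RHEL/CentOS steps above, a minimal verification sketch — assuming a systemd host and that the `k8s.conf` drop-in from the hunk above has been applied:

```shell
# SELinux should now report "Permissive" (set by the setenforce/sed pair above).
getenforce

# Both bridge-nf sysctls from /etc/sysctl.d/k8s.conf should print "= 1".
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

# kubelet is expected to be enabled; it may crash-loop until `kubeadm init` runs.
systemctl status kubelet --no-pager
```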
diff --git a/content/ko/docs/setup/minikube.md b/content/ko/docs/setup/minikube.md
index 1507cce2da0eb..0246b91cc66d0 100644
--- a/content/ko/docs/setup/minikube.md
+++ b/content/ko/docs/setup/minikube.md
@@ -60,11 +60,34 @@ NAME                              READY     STATUS    RESTARTS   AGE
 hello-minikube-3383150820-vctvh   1/1       Running   0          13s
 # We can see that the pod is now Running and we will now be able to curl it:
 $ curl $(minikube service hello-minikube --url)
-CLIENT VALUES:
-client_address=192.168.99.1
-command=GET
-real path=/
-...
+
+
+Hostname: hello-minikube-7c77b68cff-8wdzq
+
+Pod Information:
+	-no pod information available-
+
+Server values:
+	server_version=nginx: 1.13.3 - lua: 10008
+
+Request Information:
+	client_address=172.17.0.1
+	method=GET
+	real path=/
+	query=
+	request_version=1.1
+	request_scheme=http
+	request_uri=http://192.168.99.100:8080/
+
+Request Headers:
+	accept=*/*
+	host=192.168.99.100:30674
+	user-agent=curl/7.47.0
+
+Request Body:
+	-no body in request-
+
+
 $ kubectl delete services hello-minikube
 service "hello-minikube" deleted
 $ kubectl delete deployment hello-minikube
@@ -191,13 +214,6 @@ To switch back to this context later, run this command: `kubectl config use-cont

 #### Specifying the Kubernetes version

-Minikube supports running multiple different versions of Kubernetes. You can
-access a list of all available versions via
-
-```
-minikube get-k8s-versions
-```
-
 You can specify the specific version of Kubernetes for Minikube to use by
 adding the `--kubernetes-version` string to the `minikube start` command.

 For example, to run version `v1.7.3`, you would run the following:
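The hunk truncates just before the command itself; a sketch of what the invocation looks like, assuming the `--kubernetes-version` flag described in the surrounding text:

```shell
# Start a local cluster pinned to a specific Kubernetes release.
# v1.7.3 is the example version named in the prose above.
minikube start --kubernetes-version v1.7.3
```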
diff --git a/content/ko/docs/setup/multiple-zones.md b/content/ko/docs/setup/multiple-zones.md
index ea1d053fc3fef..50b8352fd9f6d 100644
--- a/content/ko/docs/setup/multiple-zones.md
+++ b/content/ko/docs/setup/multiple-zones.md
@@ -122,10 +122,10 @@ and `failure-domain.beta.kubernetes.io/zone` for the zone:

 NAME                     STATUS                     ROLES    AGE   VERSION   LABELS
-kubernetes-master        Ready,SchedulingDisabled   6m    v1.11.1   beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
-kubernetes-minion-87j9   Ready                      6m    v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
-kubernetes-minion-9vlv   Ready                      6m    v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-a12q   Ready                      6m    v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernet.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
+kubernetes-master        Ready,SchedulingDisabled   6m    v1.12.0   beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
+kubernetes-minion-87j9   Ready                      6m    v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
+kubernetes-minion-9vlv   Ready                      6m    v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-a12q   Ready                      6m    v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
 ```
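Given the zone labels shown in the listing above, one way to slice the node list by zone; a sketch assuming a working kubectl context against this cluster:

```shell
# List only the nodes that carry the us-central1-a zone label.
kubectl get nodes -l failure-domain.beta.kubernetes.io/zone=us-central1-a
```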
 ### Add more nodes in a second zone

@@ -157,13 +157,13 @@ in us-central1-b:

 > kubectl get nodes --show-labels

 NAME                     STATUS                     ROLES    AGE   VERSION   LABELS
-kubernetes-master        Ready,SchedulingDisabled   16m   v1.11.1   beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
-kubernetes-minion-281d   Ready                      2m    v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
-kubernetes-minion-87j9   Ready                      16m   v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
-kubernetes-minion-9vlv   Ready                      16m   v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-a12q   Ready                      17m   v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
-kubernetes-minion-pp2f   Ready                      2m    v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
-kubernetes-minion-wf8i   Ready                      2m    v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
+kubernetes-master        Ready,SchedulingDisabled   16m   v1.12.0   beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
+kubernetes-minion-281d   Ready                      2m    v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
+kubernetes-minion-87j9   Ready                      16m   v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
+kubernetes-minion-9vlv   Ready                      16m   v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-a12q   Ready                      17m   v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
+kubernetes-minion-pp2f   Ready                      2m    v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
+kubernetes-minion-wf8i   Ready                      2m    v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
 ```
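A compact way to eyeball the zone spread that the listings above show in full: `-L` (kubectl's label-columns flag) prints the label value as its own column. Same kubectl-context assumption as above:

```shell
# Show each node with its zone label as a dedicated column.
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```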
 ### Volume affinity

@@ -284,9 +284,9 @@ Node: kubernetes-minion-olsh/10.240.0.11

 > kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels

 NAME                     STATUS   ROLES    AGE   VERSION   LABELS
-kubernetes-minion-9vlv   Ready             34m   v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-281d   Ready             20m   v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
-kubernetes-minion-olsh   Ready             3m    v1.11.1   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
+kubernetes-minion-9vlv   Ready             34m   v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-281d   Ready             20m   v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
+kubernetes-minion-olsh   Ready             3m    v1.12.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
 ```
diff --git a/content/ko/docs/setup/pick-right-solution.md b/content/ko/docs/setup/pick-right-solution.md
index 5ce3aefad8458..e8dc7cd394369 100644
--- a/content/ko/docs/setup/pick-right-solution.md
+++ b/content/ko/docs/setup/pick-right-solution.md
@@ -30,6 +30,8 @@ a Kubernetes cluster from scratch.

 * [Minikube](/docs/setup/minikube/) is the recommended method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account.

+* [microk8s](https://microk8s.io/) provides a single command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick, fast (~30 sec) and supports many plugins including Istio with a single command.
+
 * [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) can use VirtualBox on your machine to deploy Kubernetes to one or more VMs for development and test scenarios. Scales to full multi-node cluster.

 * [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers) is a Terraform/Packer/BASH based Infrastructure as Code (IaC) scripts to create a seven node (1 Boot, 1 Master, 1 Management, 1 Proxy and 3 Workers) LXD cluster on Linux Host.
@@ -91,6 +93,7 @@ few commands. These solutions are actively developed and have active community s
 * [Madcore.Ai](https://madcore.ai/)
 * [Oracle Container Engine for K8s](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm)
 * [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
+* [Giant Swarm](https://giantswarm.io)
 * [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
 * [Stackpoint.io](/docs/setup/turnkey/stackpoint/)
 * [Tectonic by CoreOS](https://coreos.com/tectonic)
@@ -101,11 +104,13 @@ few commands.

 * [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
 * [APPUiO](https://appuio.ch)
+* [GKE On-Prem | Google Cloud](https://cloud.google.com/gke-on-prem/)
 * [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/)
 * [Kontena Pharos](https://kontena.io/pharos/)
 * [Kubermatic](https://www.loodse.com)
 * [Kublr](https://kublr.com/)
 * [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
+* [Giant Swarm](https://giantswarm.io)
 * [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
 * [SUSE CaaS Platform](https://www.suse.com/products/caas-platform)
 * [SUSE Cloud Application Platform](https://www.suse.com/products/cloud-application-platform/)
diff --git a/content/ko/docs/setup/scratch.md b/content/ko/docs/setup/scratch.md
index ea0c97c8fcfa9..c8351e5e21a2b 100644
--- a/content/ko/docs/setup/scratch.md
+++ b/content/ko/docs/setup/scratch.md
@@ -194,7 +194,7 @@ We recommend that you use the etcd version which is provided in the Kubernetes b
 were tested extensively with this version of etcd and not with any other version.
 The recommended version number can also be found as the value of `TAG` in `kubernetes/cluster/images/etcd/Makefile`.

-For the miniumum recommended version of etcd, refer to
+For the minimum recommended version of etcd, refer to
 [Configuring and Updating etcd](/docs/tasks/administer-cluster/configure-upgrade-etcd/)

 The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars.  Examples (replace with latest tags and appropriate registry):
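The scratch guide above points at `kubernetes/cluster/images/etcd/Makefile` for the recommended etcd version; a sketch of pulling that value out of a checkout of the kubernetes repo. The exact assignment syntax in the Makefile is an assumption here:

```shell
# From the root of a kubernetes/kubernetes checkout:
# print the first TAG assignment in the etcd image Makefile.
grep -m1 '^TAG' cluster/images/etcd/Makefile
```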
diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md
index e6cedf8a89bcb..cef4959c23fad 100644
--- a/content/ko/docs/tutorials/hello-minikube.md
+++ b/content/ko/docs/tutorials/hello-minikube.md
@@ -37,13 +37,13 @@ menu:
 {{< note >}}
 **Note:** If Homebrew reports an error like the following when you run `brew update` after upgrading to macOS 10.13,
-  ```
+  ```shell
   Error: /usr/local is not writable. You should change the
   ownership and permissions of /usr/local back to your user account:
     sudo chown -R $(whoami) /usr/local
   ```
 you can fix the problem by reinstalling Homebrew:
-  ```
+  ```shell
   /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
   ```
 {{< /note >}}
@@ -159,15 +159,22 @@ node server.js

 This opens port 8080, copies the `server.js` file into the image, and starts the Node.js server.

-Because this tutorial uses Minikube, instead of pushing your Docker image to a registry,
-you can simply build the image using the same Docker host as the Minikube VM, so that the
-images are automatically present (translator's note: in a location Minikube can use). To do
-so, make sure you are using the Minikube Docker daemon with the following command:
+By default, Docker creates and stores images in your local machine's Docker registry.
+In this tutorial, instead of your local machine's Docker registry, you use the registry
+of the Docker daemon running _inside_ Minikube's VM instance. To point the `docker`
+command at Minikube's Docker daemon, type the following (Unix shells):

 ```shell
 eval $(minikube docker-env)
 ```

+In PowerShell, type:
+```shell
+minikube docker-env | Invoke-Expression
+```
+
+
 {{< note >}}
 **Note:** Later, when you no longer want to use the Minikube host, you can undo
 this change by running `eval $(minikube docker-env -u)`.
@@ -179,6 +186,22 @@ Build your Docker image, using the Minikube Docker daemon (mind the
 docker build -t hello-node:v1 .
 ```

+Confirm that the image is in Minikube's Docker registry:
+
+```shell
+minikube ssh docker images
+```
+
+Output:
+
+```shell
+REPOSITORY     TAG       IMAGE ID       CREATED         SIZE
+hello-node     v1        f82485ca953c   3 minutes ago   655MB
+...
+node           6.9.2     faaadb4aaf9b   20 months ago   655MB
+```
+
 Now you can run the image you built on the Minikube VM.

 ## Create a Deployment
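A quick sketch of verifying that the `docker-env` switch above actually took effect, assuming Minikube is running; after the switch, `docker` talks to the daemon inside the VM, so the listing should show Minikube's own system containers:

```shell
# List the images of the currently running containers; expect
# kube-system component images rather than your local workloads.
docker ps --format '{{.Image}}' | head
```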

diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
index 3868b7d18074f..7b8c344961b1f 100644
--- a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
+++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
@@ -86,11 +86,11 @@

     Deploying your first app on Kubernetes

     When you create a Deployment, you have to specify the container image for your application
     and the number of replicas that you want to run. You can change that information later by
     updating your Deployment; Modules
-    5 and 6 of the bootcamp discuss how you can
+    5 and 6 of the bootcamp discuss how you can
     scale and update your Deployments.

+
@@ -103,10 +103,9 @@

     Deploying your first app on Kubernetes
-
-    For our first Deployment, we'll use a Node.js application packaged in a Docker
-    container. The source code and the Dockerfile are available in the GitHub
-    repository.
+
+    For our first Deployment, we'll use a Node.js application packaged in a Docker container.
+    To create the Node.js application and deploy it in a Docker container,
+    follow the instructions from the Hello Minikube tutorial.

     Now that you know what Deployments are, let's go to the online tutorial and deploy our first app!
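The paragraph above says a Deployment needs a container image and a replica count; a sketch of what that looks like on the command line of this doc's era, assuming the `hello-node:v1` image built in the Hello Minikube tutorial:

```shell
# Create a Deployment running one replica of the hello-node image,
# then scale it, as Modules 5 and 6 of the bootcamp discuss.
kubectl run hello-node --image=hello-node:v1 --port=8080
kubectl scale deployment hello-node --replicas=3
```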

diff --git a/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html b/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html
index 0b489f8c4302c..398fd149e3b26 100644
--- a/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html
+++ b/content/ko/docs/tutorials/kubernetes-basics/update/update-intro.html
@@ -10,6 +10,7 @@
+
diff --git a/content/ko/examples/minikube/Dockerfile b/content/ko/examples/minikube/Dockerfile
index 34b1f40f528ca..1fe745295a47f 100644
--- a/content/ko/examples/minikube/Dockerfile
+++ b/content/ko/examples/minikube/Dockerfile
@@ -1,4 +1,4 @@
-FROM node:6.9.2
+FROM node:6.14.2
 EXPOSE 8080
 COPY server.js .
 CMD node server.js
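After a base-image bump like this one, the image has to be rebuilt for the change to take effect; a sketch reusing the Minikube `docker-env` flow from hello-minikube.md above (the `v2` tag is an illustrative choice, not mandated by the patch):

```shell
# Build against Minikube's Docker daemon so the cluster can see the image.
eval $(minikube docker-env)
# Rebuild on the updated node:6.14.2 base; run from the directory
# containing this Dockerfile and server.js.
docker build -t hello-node:v2 .
```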