diff --git a/chapters/docker-kubernetes.adoc b/chapters/docker-kubernetes.adoc
index b8205ec..260a99c 100644
--- a/chapters/docker-kubernetes.adoc
+++ b/chapters/docker-kubernetes.adoc
@@ -116,17 +116,18 @@ NAME LABELS STATUS
 +
 [source, text]
 ----
-> ./cluster/kubectl.sh get nodes
-NAME         LABELS                              STATUS
-10.245.1.3   kubernetes.io/hostname=10.245.1.3   Ready
 kubernetes> ./cluster/kubectl.sh get po
-POD                                IP           CONTAINER(S)     IMAGE(S)                                                                    HOST                    LABELS                                                            STATUS    CREATED     MESSAGE
-kube-dns-v1-jnvez                  172.17.0.2                                                                                                10.245.1.3/10.245.1.3   k8s-app=kube-dns,kubernetes.io/cluster-service=true,version=v1   Running   6 minutes
-                                                skydns           gcr.io/google_containers/skydns:2015-03-11-001                                                                                                                       Running   4 minutes   last termination: exit code 2
-                                                kube2sky         gcr.io/google_containers/kube2sky:1.7                                                                                                                                Running   4 minutes
-                                                etcd             gcr.io/google_containers/etcd:2.0.9                                                                                                                                 Running   5 minutes
-kube-scheduler-kubernetes-master                                                                                                             kubernetes-master/                                                                       Pending   2 seconds
-                                                kube-scheduler   gcr.io/google_containers/kube-scheduler:d1f640dfb379f64daf3ae44286014821
+POD                                IP           CONTAINER(S)     IMAGE(S)                                                                    HOST                    LABELS                                                            STATUS    CREATED     MESSAGE
+etcd-server-kubernetes-master                                                                                                                kubernetes-master/                                                                        Running   2 minutes
+                                                etcd-container   gcr.io/google_containers/etcd:2.0.9                                                                                                                                  Running   2 minutes
+kube-apiserver-kubernetes-master                                                                                                             kubernetes-master/                                                                        Running   2 minutes
+                                                kube-apiserver   gcr.io/google_containers/kube-apiserver:465b93ab80b30057f9c2ef12f30450c3                                                                                             Running   2 minutes
+kube-dns-v1-lxdof                                                                                                                            10.245.1.3/             k8s-app=kube-dns,kubernetes.io/cluster-service=true,version=v1   Pending   2 minutes
+                                                etcd             gcr.io/google_containers/etcd:2.0.9
+                                                kube2sky         gcr.io/google_containers/kube2sky:1.7
+                                                skydns           gcr.io/google_containers/skydns:2015-03-11-001
+kube-scheduler-kubernetes-master                                                                                                             kubernetes-master/                                                                        Running   2 minutes
+                                                kube-scheduler   gcr.io/google_containers/kube-scheduler:d1f640dfb379f64daf3ae44286014821                                                                                             Running   2 minutes
 ----
 +
 . Check the list of services running:
 +
@@ -174,26 +175,45 @@ Any Service that a Pod wants to access must be created before the Pod itself, or
 ./cluster/kubectl.sh create -f ../../attendees/kubernetes/mysql-service.yaml
 ----
 +
+It uses the following configuration file:
++
+[source, yaml]
+----
+kind: Service
+apiVersion: v1beta3
+metadata:
+  name: mysql
+  labels:
+    name: mysql
+    context: docker-k8s-lab
+spec:
+  ports:
+  - port: 3306
+  selector:
+    name: mysql
+----
++
 . Check that the service is created:
 +
 [source, text]
 ----
-> ./cluster/kubectl.sh get se
-NAME            LABELS                                                                            SELECTOR           IP(S)          PORT(S)
-kube-dns        k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS   k8s-app=kube-dns   10.247.0.10    53/UDP
-                                                                                                                                    53/TCP
-kubernetes      component=apiserver,provider=kubernetes                                                              10.247.0.2     443/TCP
-kubernetes-ro   component=apiserver,provider=kubernetes                                                              10.247.0.1     80/TCP
-mysql           name=mysql                                                                        name=mysql         10.247.82.83   3306/TCP
+> ./cluster/kubectl.sh get se -l context=docker-k8s-lab
+NAME    LABELS                              SELECTOR     IP(S)            PORT(S)
+mysql   context=docker-k8s-lab,name=mysql   name=mysql   10.247.141.208   3306/TCP
 ----
 +
-. When a Pod is run on a node, the kubelet adds a set of environment variables for each active Service.
+Note that the label assigned during creation is now used to query the service.
++
+When a Pod is run on a node, the kubelet adds a set of environment variables for each active Service.
 +
 It supports both Docker links compatible variables and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores.
+Our service name is ``mysql'' and so `MYSQL_SERVICE_HOST` and `MYSQL_SERVICE_PORT` variables are available to other pods.
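++
+As an illustration only (the class below is not part of the lab's source, and ``sample'' is a placeholder database name), application code running in another pod could consume these variables to build a JDBC URL:
++
+[source, java]
+----
+public class MySqlServiceDiscovery {
+
+    public static void main(String[] args) {
+        // Injected by the kubelet because the Service created above is named "mysql"
+        String host = System.getenv("MYSQL_SERVICE_HOST");
+        String port = System.getenv("MYSQL_SERVICE_PORT");
+
+        // "sample" is a placeholder database name, not taken from the lab
+        System.out.println("jdbc:mysql://" + host + ":" + port + "/sample");
+    }
+}
+----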
 
-TODO: Consider adding DNS support as explained at: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#dns
+Send a Pull Request for https://github.com/javaee-samples/docker-java/issues/62[#62].
 
 #### Start MySQL Replication Controller
@@ -201,8 +221,8 @@ TODO: Consider adding DNS support as explained at: https://github.com/GoogleClou
 +
 [source, text]
 ----
-> ./cluster/kubectl.sh create --v=5 -f ../../attendees/kubernetes/mysql.yaml
-I0616 13:46:49.767091    4376 defaults.go:174] creating security context for container mysql
+> ./cluster/kubectl.sh --v=5 create -f ../../attendees/kubernetes/mysql.yaml
+I0616 19:41:55.441461    8346 defaults.go:174] creating security context for container mysql
 replicationcontrollers/mysql
 ----
 +
@@ -213,17 +233,19 @@ It uses the following configuration file:
 kind: ReplicationController
 apiVersion: v1beta3
 metadata:
-  name: mysql-server
+  name: mysql
   labels:
-    name: mysql-server
+    name: mysql
+    context: docker-k8s-lab
 spec:
   replicas: 1
   selector:
-    name: mysql-server
+    name: mysql
   template:
     metadata:
       labels:
-        name: mysql-server
+        name: mysql
+        context: docker-k8s-lab
     spec:
       containers:
       - name: mysql
@@ -238,19 +260,29 @@ spec:
         - name: MYSQL_ROOT_PASSWORD
           value: supersecret
         ports:
-        - containerPort: 3360
+        - containerPort: 3306
+          hostPort: 3306
 ----
 +
+Once again, the ``docker-k8s-lab'' label is used. This simplifies querying the created pods later on.
++
 . Verify MySQL replication controller as:
 +
 [source, text]
 ----
-> ./cluster/kubectl.sh get rc
-CONTROLLER     CONTAINER(S)   IMAGE(S)                                         SELECTOR                      REPLICAS
-kube-dns-v1    etcd           gcr.io/google_containers/etcd:2.0.9              k8s-app=kube-dns,version=v1   1
-               kube2sky       gcr.io/google_containers/kube2sky:1.7
-               skydns         gcr.io/google_containers/skydns:2015-03-11-001
-mysql-server   mysql          mysql:latest                                     name=mysql-server             1
+> ./cluster/kubectl.sh get rc -l context=docker-k8s-lab
+CONTROLLER   CONTAINER(S)   IMAGE(S)       SELECTOR     REPLICAS
+mysql        mysql          mysql:latest   name=mysql   1
+----
++
+. Check the status of the MySQL pod as:
++
+[source, text]
+----
+> ./cluster/kubectl.sh get po -l context=docker-k8s-lab
+POD           IP        CONTAINER(S)   IMAGE(S)       HOST          LABELS                              STATUS    CREATED          MESSAGE
+mysql-7lq67                                           10.245.1.3/   context=docker-k8s-lab,name=mysql   Pending   About a minute
+                        mysql          mysql:latest
 ----
 
 #### Start WildFly Replication Controller
@@ -259,8 +291,8 @@ mysql-server mysql mysql:latest n
 +
 [source, text]
 ----
-> ./cluster/kubectl.sh --v=5 create -f ../../attendees/kubernetes/wildfly.yaml
-I0605 16:25:41.990260   66897 defaults.go:174] creating security context for container wildfly
+> ./cluster/kubectl.sh --v=5 create -f ../../attendees/kubernetes/wildfly.yaml
+I0616 18:59:00.563099    7849 defaults.go:174] creating security context for container wildfly
 replicationcontrollers/wildfly
 ----
 +
@@ -274,6 +306,7 @@ metadata:
   name: wildfly
   labels:
     name: wildfly
+    context: docker-k8s-lab
 spec:
   replicas: 1
   selector:
@@ -282,27 +315,52 @@ spec:
     metadata:
       labels:
         name: wildfly-server
+        context: docker-k8s-lab
     spec:
       containers:
       - name: wildfly
         image: arungupta/wildfly-mysql-javaee7:k8s
         ports:
         - containerPort: 8080
+          hostPort: 8080
 ----
 +
-. Verify WildFly replication controller as:
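+Because the template maps `containerPort: 8080` to `hostPort: 8080`, the application will be reachable on the minion's IP address once the pods are shown as ``Running'' in the verification steps below. As an illustration only (the class name is made up, `10.245.1.3` is the minion IP shown in the earlier output, and a JAX-RS 2.0 client implementation such as the one bundled with WildFly must be on the classpath), the REST endpoint can also be invoked programmatically:
++
+[source, java]
+----
+import javax.ws.rs.client.Client;
+import javax.ws.rs.client.ClientBuilder;
+
+public class EmployeesClient {
+
+    public static void main(String[] args) {
+        Client client = ClientBuilder.newClient();
+        try {
+            // Endpoint exposed by the WildFly pod through hostPort 8080
+            String employees = client
+                .target("http://10.245.1.3:8080/employees/resources/employees")
+                .request()
+                .get(String.class);
+            System.out.println(employees);
+        } finally {
+            client.close();
+        }
+    }
+}
+----
++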
+. Verify WildFly replication controller using the ``docker-k8s-lab'' label as:
 +
 [source, text]
 ----
-> ./cluster/kubectl.sh get rc
-CONTROLLER     CONTAINER(S)   IMAGE(S)                                         SELECTOR                      REPLICAS
-kube-dns-v1    etcd           gcr.io/google_containers/etcd:2.0.9              k8s-app=kube-dns,version=v1   1
-               kube2sky       gcr.io/google_containers/kube2sky:1.7
-               skydns         gcr.io/google_containers/skydns:2015-03-11-001
-mysql-server   mysql          mysql:latest                                     name=mysql-server             1
-wildfly        wildfly        arungupta/wildfly-mysql-javaee7:knetes           name=wildfly-server           1
+> ./cluster/kubectl.sh get rc -l context=docker-k8s-lab
+CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR              REPLICAS
+mysql        mysql          mysql:latest                          name=mysql            1
+wildfly      wildfly        arungupta/wildfly-mysql-javaee7:k8s   name=wildfly-server   1
+----
++
+. Check the status of the WildFly pod as:
++
+[source, text]
+----
+> ./cluster/kubectl.sh get pod -l context=docker-k8s-lab
+POD             IP        CONTAINER(S)   IMAGE(S)                              HOST          LABELS                                       STATUS    CREATED      MESSAGE
+mysql-7lq67                                                                    10.245.1.3/   context=docker-k8s-lab,name=mysql            Pending   3 minutes
+                          mysql          mysql:latest
+wildfly-o0nw6                                                                  10.245.1.3/   context=docker-k8s-lab,name=wildfly-server   Pending   45 seconds
+                          wildfly        arungupta/wildfly-mysql-javaee7:k8s
+----
+
+Make sure the status of both the WildFly and MySQL pods has changed to ``Running''. The output will look like:
+
+[source, text]
+----
+> ./cluster/kubectl.sh get pod -l context=docker-k8s-lab
+POD             IP            CONTAINER(S)   IMAGE(S)                              HOST                    LABELS                                       STATUS    CREATED      MESSAGE
+mysql-7lq67     172.17.0.9                                                         10.245.1.3/10.245.1.3   context=docker-k8s-lab,name=mysql            Running   14 minutes
+                              mysql          mysql:latest                                                                                               Running   10 minutes
+wildfly-o0nw6   172.17.0.10                                                        10.245.1.3/10.245.1.3   context=docker-k8s-lab,name=wildfly-server   Running   11 minutes
+                              wildfly        arungupta/wildfly-mysql-javaee7:k8s                                                                        Running   26 seconds
 ----
 
+NOTE: It takes a while for all the pods to start. It took ~25 minutes on a 16 GB Core i7 Mac OS X machine.
+
 ### Access Java EE Application
 
 http://:8080/employees/resources/employees
@@ -314,7 +372,7 @@ http://:8080/employees/resources/employees
 
 ### Application Logs
 
-. Login to the Minion-1 VM:
+. Log in to the Minion-1 VM:
 +
 [source, text]
 ----
@@ -357,41 +415,13 @@ docker logs
 
 ### Delete Kubernetes Resources
 
-. Delete WildFly repliation controller as:
-+
-[source, text]
-----
-> ./cluster/kubectl.sh --v=5 delete -f ../../attendees/kubernetes/wildfly.yaml
-I0605 16:39:09.152694   67149 defaults.go:174] creating security context for container wildfly
-replicationcontrollers/wildfly
-----
-+
-. Delete MySQL replication controller as:
-+
-[source, text]
-----
-> ./cluster/kubectl.sh --v=5 delete -f ../../attendees/kubernetes/mysql.yaml
-I0605 17:54:26.042191   67742 defaults.go:174] creating security context for container mysql
-replicationcontrollers/mysql-server
-----
-+
-. Delete MySQL service as:
-+
-[source, text]
-----
-> ./cluster/kubectl.sh --v=5 delete -f ../../attendees/kubernetes/mysql-service.yaml
-services/mysql
-----
-
-Alternatively, all services and replication controllers can be assigned a label and deleted as:
+Individual resources (service, replication controller, or pod) can be deleted by using the `delete` command instead of the `create` command. Alternatively, all services and replication controllers can be deleted using a label as:
 
 [source, text]
 ----
-kubectl delete -l services,pods name=docker-lab
+kubectl delete se,rc -l context=docker-k8s-lab
 ----
 
-Send a PR for the last code: https://github.com/javaee-samples/docker-java/issues/59
-
 ### Stop Kubernetes Cluster
 
 [source, text]
 ----
@@ -447,50 +477,5 @@ b1809bdabd9c gcr.io/google_containers/pause:0.8.0
 +
 . Log out of master.
 
-#### Kubernetes Minion
-
-. Check the minions:
-+
-[source, text]
-----
-kubernetes> ./cluster/kubectl.sh get minions
-----
-+
-This is not giving the expected output and filed as https://github.com/GoogleCloudPlatform/kubernetes/issues/9271.
-+
-. Docker and Kubelet are running in minion and can be verified by logging in.
-+
-Log in to the minion as:
-+
-[source, text]
-----
-cluster> vagrant ssh minion-1
-Last login: Thu Jun 4 19:30:03 2015 from 10.0.2.2
-[vagrant@kubernetes-minion-1 ~]$
-----
-+
-. Check the status of Docker:
-+
-[source, text]
-----
-[vagrant@kubernetes-minion-1 ~]$ sudo systemctl status docker
-docker.service - Docker Application Container Engine
-   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
-   Active: active (running) since Thu 2015-06-04 19:29:44 UTC; 1h 24min ago
-     Docs: http://docs.docker.com
- Main PID: 2651 (docker)
-   CGroup: /system.slice/docker.service
-           └─2651 /usr/bin/docker -d --selinux-enabled
-
-Jun 04 20:53:41 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:41Z" level="info" msg="-job containers() = OK (0)"
-Jun 04 20:53:41 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:41Z" level="info" msg="GET /containers/json"
-Jun 04 20:53:41 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:41Z" level="info" msg="+job containers()"
-Jun 04 20:53:41 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:41Z" level="info" msg="-job containers() = OK (0)"
-Jun 04 20:53:42 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:42Z" level="info" msg="GET /containers/json"
-Jun 04 20:53:42 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:42Z" level="info" msg="+job containers()"
-Jun 04 20:53:42 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:42Z" level="info" msg="-job containers() = OK (0)"
-Jun 04 20:53:46 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:46Z" level="info" msg="GET /version"
-Jun 04 20:53:46 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:46Z" level="info" msg="+job version()"
-Jun 04 20:53:46 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:46Z" level="info" msg="-job version() = OK (0)"
-----