
Java EE Application on Kubernetes Cluster

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
— github.com/GoogleCloudPlatform/kubernetes/

Kubernetes, or ``k8s'' for short, allows the user to provide declarative primitives for the desired state, for example ``5 WildFly servers and 1 MySQL server must be running''. Kubernetes self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, then ensure this state is met. The user just defines the desired state and Kubernetes ensures that the state is met at all times on the cluster.

How is it related to Docker?

Docker provides lifecycle management of containers. A Docker image is the build-time representation of a runtime container. There are commands to start, stop, restart, link, and perform other lifecycle operations on these containers. Kubernetes uses Docker to package, instantiate, and run containerized applications.
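
For example, the basic lifecycle of a single container can be driven directly from the Docker CLI. This sketch uses the WildFly image from later in this lab purely as an illustration:

    docker run -d --name wildfly arungupta/wildfly-mysql-javaee7:knetes   # create and start a container
    docker stop wildfly                                                   # stop it
    docker restart wildfly                                                # stop, then start it again
    docker rm -f wildfly                                                  # force-remove it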

How does Kubernetes simplify containerized application deployment?

A typical application runs as a cluster of containers across multiple hosts. For example, your web tier (for example, Undertow) might run on one set of containers. Similarly, your application tier (for example, WildFly) would run on a different set of containers, and the web tier would need to delegate requests to the application tier. In some cases, or at least to begin with, you may have your web and application server packaged together in the same set of containers. The database tier would generally run on yet another set of containers. All of these containers need to talk to each other. Managing such a topology by hand would require scripting to start the containers, and monitoring and restarting them if something goes down. Kubernetes does all of that for the user once the application state has been defined.

Key Concepts

At a very high level, there are three key concepts:

  1. Pods are the smallest deployable units that can be created, scheduled, and managed. A pod is a logical collection of containers that belong to an application.

  2. Master is the central control point that provides a unified view of the cluster. There is a single master node that controls multiple minions.

  3. Minion is a worker node that runs tasks as delegated by the master. Minions can run one or more pods. A minion provides an application-specific “virtual host” in a containerized environment.

A picture is always worth a thousand words and so this is a high-level logical block diagram for Kubernetes:

kubernetes key concepts
Figure 1. Kubernetes Key Concepts

After the 50,000-foot view, let's fly a little lower at 30,000 feet and take a look at how Kubernetes makes all of this happen. A few key components at the Master and the Minions make this possible.

  1. Replication Controller is a resource at the Master that ensures that the requested number of pods is running on the minions at all times.

  2. Service is an object on the Master that provides load balancing across a replicated group of pods. A Label is an arbitrary key/value pair in a distributed watchable storage; Services and Replication Controllers use labels to select the pods they target.

  3. Kubelet is the service, installed on each minion alongside Docker, that allows containers to be run there and managed from the master. It reads container manifests, YAML files that describe a pod, and ensures that the containers defined in the pods are started and continue running.

  4. Master serves the RESTful Kubernetes API, which validates and configures Pods, Services, and Replication Controllers.
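
For example, once the cluster below is running, the Kubernetes API served by the Master can be queried directly over HTTPS. This is a minimal sketch, assuming the master address and the vagrant:vagrant credentials shown in the cluster startup output later in this section:

    # List all pods in the default namespace through the v1beta3 REST API
    # (-k skips verification of the cluster's self-signed certificate)
    curl -k -u vagrant:vagrant https://10.245.1.2/api/v1beta3/namespaces/default/pods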

Start Kubernetes Cluster

  1. Download Kubernetes from http://dockerlab:8082/kubernetes.tar.gz (version 0.18.1.1) and extract the archive

  2. Set up a cluster as:

    cd kubernetes
    
    export KUBERNETES_PROVIDER=vagrant
    ./cluster/kube-up.sh

    The KUBERNETES_PROVIDER environment variable tells all of the various cluster management scripts which variant to use.

    Note
    This will take a few minutes, so be patient! Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes.

    It shows the output as:

    Starting cluster using provider: vagrant
    ... calling verify-prereqs
    ... calling kube-up
    Using credentials: vagrant:vagrant
    
    . . .
    
    Cluster validation succeeded
    Done, listing cluster services:
    
    Kubernetes master is running at https://10.245.1.2
    KubeDNS is running at https://10.245.1.2/api/v1beta3/proxy/namespaces/default/services/kube-dns

    Note down the address for Kubernetes master, https://10.245.1.2 in this case.

  3. Verify the Kubernetes cluster as:

    kubernetes> vagrant status
    Current machine states:
    
    master                    running (virtualbox)
    minion-1                  running (virtualbox)
    
    This environment represents multiple VMs. The VMs are all listed
    above with their current state. For more information about a specific
    VM, run `vagrant status NAME`.

    By default, the Vagrant setup will create a single kubernetes-master and one kubernetes-minion. Each VM takes 1 GB of memory, so make sure you have at least 2 GB to 4 GB of free memory (plus appropriate free disk space).

    Note
    By default, only one minion is created. This can be changed by setting the NUM_MINIONS environment variable to an integer before invoking the kube-up.sh script (see the example after this list).

    By default, each VM in the cluster runs Fedora, the Kubelet is managed by ``systemd'', and all other Kubernetes services run as containers on the Master.

  4. Access https://10.245.1.2 (or whatever master address is shown in your cluster startup log). This may present a certificate warning as shown below:

    kubernetes master default output certificate

    Click on ``Advanced'' and then on ``Proceed to 10.245.1.2'' to see the output as:

    kubernetes master default output
    Figure 2. Kubernetes Output from Master

    Use ``vagrant'' as the username and ``vagrant'' as the password.
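
The number of minions is controlled by the NUM_MINIONS environment variable mentioned in the note above. For example, a cluster with two minions (the value 2 is just an illustration) would be brought up as:

    export KUBERNETES_PROVIDER=vagrant
    export NUM_MINIONS=2
    ./cluster/kube-up.sh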

Deploy Java EE Application

Pods, and the IP addresses assigned to them, are ephemeral. If a pod dies, Kubernetes will recreate it because of its self-healing features, but it might recreate it on a different host. Even if it is recreated on the same host, it could be assigned a different IP address. So an application cannot rely upon the IP address of a pod.

A Kubernetes service is an abstraction that defines a logical set of pods. A service is typically backed by one or more physical pods (associated using labels), and it has a permanent IP address that can be used by other pods and applications. For example, a WildFly pod cannot reliably connect to a MySQL pod directly, but it can connect to the MySQL service. In essence, a Kubernetes service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends.

kubernetes services
Figure 3. Kubernetes Services
Note
In this case, all the pods are running on a single minion because only one minion is created by default in this cluster. A pod could very well be scheduled on another minion if more minions existed in the cluster.

Any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated.

Start MySQL Service

  1. Start MySQL service as (the service configuration file is sketched after this list):

    ./cluster/kubectl.sh create -f ../../attendees/kubernetes/mysql-service.yaml
  2. Check that the service is created:

    > ./cluster/kubectl.sh get services
    NAME            LABELS                                                                           SELECTOR           IP(S)          PORT(S)
    kube-dns        k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS   k8s-app=kube-dns   10.247.0.10    53/UDP
                                                                                                                                       53/TCP
    kubernetes      component=apiserver,provider=kubernetes                                          <none>             10.247.0.2     443/TCP
    kubernetes-ro   component=apiserver,provider=kubernetes                                          <none>             10.247.0.1     80/TCP
    mysql-service   name=mysql-service                                                               name=mysql         10.247.222.0   3306/TCP
  3. When a Pod is run on a node, the kubelet adds a set of environment variables for each active Service.

    It supports both Docker links compatible variables and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.

    Our service name is ``mysql'' and so MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT variables are available to other pods.
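
The mysql-service.yaml file itself is not listed in this lab. The following is a minimal sketch of what it might contain, reconstructed from the kubectl get services output above; the name, label, and selector are taken from that output, and the actual file may differ:

    kind: Service
    apiVersion: v1beta3
    metadata:
      name: mysql-service
      labels:
        name: mysql-service
    spec:
      ports:
        # the port exposed by the service; MySQL's default port
        - port: 3306
      selector:
        # must match the labels on the MySQL pods for traffic to be routed
        name: mysql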

Start MySQL Replication Controller

  1. Start MySQL replication controller as:

    > ./cluster/kubectl.sh  --v=5  create -f ../../attendees/kubernetes/mysql.yaml
    I0605 16:16:06.724597   66780 defaults.go:174] creating security context for container mysql
    replicationcontrollers/mysql-server

    It uses the following configuration file:

    kind: ReplicationController
    apiVersion: v1beta3
    metadata:
      name: mysql-server
      labels:
        name: mysql-server
    spec:
      replicas: 1
      selector:
        name: mysql-server
      template:
        metadata:
          labels:
            name: mysql-server
        spec:
          containers:
            - name: mysql
              image: mysql:latest
              env:
                - name: MYSQL_USER
                  value: mysql
                - name: MYSQL_PASSWORD
                  value: mysql
                - name: MYSQL_DATABASE
                  value: sample
                - name: MYSQL_ROOT_PASSWORD
                  value: supersecret
              ports:
                - containerPort: 3306
  2. Verify MySQL replication controller as:

    > ./cluster/kubectl.sh get rc
    CONTROLLER     CONTAINER(S)   IMAGE(S)                                         SELECTOR                      REPLICAS
    kube-dns-v1    etcd           gcr.io/google_containers/etcd:2.0.9              k8s-app=kube-dns,version=v1   1
                   kube2sky       gcr.io/google_containers/kube2sky:1.7
                   skydns         gcr.io/google_containers/skydns:2015-03-11-001
    mysql-server   mysql          mysql:latest                                     name=mysql-server             1
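
The replication controller creates the pod itself. It can be listed as follows; the pod name carries a generated suffix, so the exact output will vary:

    ./cluster/kubectl.sh get pods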

Start WildFly Replication Controller

  1. Start WildFly replication controller as:

    > ./cluster/kubectl.sh  --v=5  create -f ../../attendees/kubernetes/wildfly.yaml
    I0605 16:25:41.990260   66897 defaults.go:174] creating security context for container wildfly
    replicationcontrollers/wildfly

    It uses the following configuration file:

    kind: ReplicationController
    apiVersion: v1beta3
    metadata:
      name: wildfly
      labels:
        name: wildfly
    spec:
      replicas: 1
      selector:
        name: wildfly-server
      template:
        metadata:
          labels:
            name: wildfly-server
        spec:
          containers:
            - name: wildfly
              image: arungupta/wildfly-mysql-javaee7:knetes
              ports:
                - containerPort: 8080
  2. Verify WildFly replication controller as:

    > ./cluster/kubectl.sh get rc
    CONTROLLER     CONTAINER(S)   IMAGE(S)                                         SELECTOR                      REPLICAS
    kube-dns-v1    etcd           gcr.io/google_containers/etcd:2.0.9              k8s-app=kube-dns,version=v1   1
                   kube2sky       gcr.io/google_containers/kube2sky:1.7
                   skydns         gcr.io/google_containers/skydns:2015-03-11-001
    mysql-server   mysql          mysql:latest                                     name=mysql-server             1
    wildfly        wildfly        arungupta/wildfly-mysql-javaee7:knetes           name=wildfly-server           1
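
The WildFly image needs to know where MySQL is running, which is where the MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT variables populated by the MySQL service come in. As a purely hypothetical sketch of how an image like this could wire the datasource at startup (the datasource name, JNDI name, and driver name are illustrative; the credentials and database name come from the MySQL replication controller above):

    # Hypothetical sketch: create a MySQL datasource in WildFly from the
    # Kubernetes service environment variables
    /opt/jboss/wildfly/bin/jboss-cli.sh --connect \
      --command="data-source add --name=mysqlDS --jndi-name=java:jboss/datasources/ExampleMySQLDS --driver-name=mysql --connection-url=jdbc:mysql://${MYSQL_SERVICE_HOST}:${MYSQL_SERVICE_PORT}/sample --user-name=mysql --password=mysql"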

TODO: debug this

Self-healing Pods

  1. Delete the WildFly pod.

  2. Wait for Kubernetes to restart the pod. This happens because the Replication Controller ensures that the requested number of replicas, one in this case, is running at all times. These steps are sketched below.
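
A sketch of these steps, using the same kubectl wrapper as the rest of this lab (the actual pod name carries a generated suffix):

    # find the name of the WildFly pod
    ./cluster/kubectl.sh get pods

    # delete it; replace <wildfly-pod-name> with the name from the previous output
    ./cluster/kubectl.sh delete pods <wildfly-pod-name>

    # list the pods again: the replication controller has already created
    # a replacement pod, possibly on a different minion
    ./cluster/kubectl.sh get pods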

Application Logs

  1. Login to the Minion-1 VM:

    > vagrant ssh minion-1
    Last login: Fri Jun  5 23:01:36 2015 from 10.0.2.2
    [vagrant@kubernetes-minion-1 ~]$
  2. Log in as root:

    [vagrant@kubernetes-minion-1 ~]$ su -
    Password:
    [root@kubernetes-minion-1 ~]#

    Default root password for VM images created by Vagrant is ``vagrant''.

  3. See the list of Docker containers running on this VM:

    docker ps
  4. View WildFly log as:

    docker logs $(docker ps | grep arungupta/wildfly | awk '{print $1}')
  5. View MySQL log as:

    docker logs <CID>
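
where <CID> is the MySQL container ID from the docker ps output. The same grep pattern used for the WildFly log works here as well, assuming the mysql:latest image from the replication controller; grepping on the image name avoids matching the pod's pause container:

    docker logs $(docker ps | grep mysql:latest | awk '{print $1}')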

Delete Kubernetes Resources

  1. Delete WildFly replication controller as:

    > ./cluster/kubectl.sh --v=5  delete -f ../../attendees/kubernetes/wildfly.yaml
    I0605 16:39:09.152694   67149 defaults.go:174] creating security context for container wildfly
    replicationcontrollers/wildfly
  2. Delete MySQL replication controller as:

    > ./cluster/kubectl.sh --v=5 delete -f ../../attendees/kubernetes/mysql.yaml
    I0605 17:54:26.042191   67742 defaults.go:174] creating security context for container mysql
    replicationcontrollers/mysql-server
  3. Delete MySQL service as:

    > ./cluster/kubectl.sh --v=5 delete -f ../../attendees/kubernetes/mysql-service.yaml
    services/mysql

Alternatively, all services and replication controllers can be assigned a common label and then deleted in a single command, for example:

kubectl delete services,rc -l name=docker-lab

Send a PR?

Stop Kubernetes Cluster

> ./cluster/kube-down.sh
Bringing down cluster using provider: vagrant
==> minion-1: Forcing shutdown of VM...
==> minion-1: Destroying VM and associated drives...
==> master: Forcing shutdown of VM...
==> master: Destroying VM and associated drives...
Done

Debug Kubernetes (OPTIONAL)

Kubernetes Master

  1. Log in to the master as:

    > vagrant ssh master
    Last login: Thu Jun  4 19:30:04 2015 from 10.0.2.2
    [vagrant@kubernetes-master ~]$
  2. Log in as root:

    [vagrant@kubernetes-master ~]$ su -
    Password:
    Last login: Thu Jun  4 19:25:41 UTC 2015
    [root@kubernetes-master ~]

    Default root password for VM images created by Vagrant is ``vagrant''.

  3. Check the containers running on master:

    docker ps
    CONTAINER ID        IMAGE                                                                               COMMAND                CREATED             STATUS              PORTS               NAMES
    2b92c80630d5        gcr.io/google_containers/etcd:2.0.9                                                 "/usr/local/bin/etcd   5 hours ago         Up 5 hours                              k8s_etcd-container.ec4297e5_etcd-server-kubernetes-master_default_3595ac402f3a17c29dab95f3e0f64c76_56fa3dce
    64c375f8030b        gcr.io/google_containers/kube-apiserver:465b93ab80b30057f9c2ef12f30450c3            "/bin/sh -c '/usr/lo   5 hours ago         Up 5 hours                              k8s_kube-apiserver.f4e485e1_kube-apiserver-kubernetes-master_default_c6b19d563bdbcfb0af80b57377ee905c_2f16c239
    d7d9d40bd479        gcr.io/google_containers/kube-controller-manager:572696d43ca87cd1fe0c774bac3a5f4b   "/bin/sh -c '/usr/lo   5 hours ago         Up 5 hours                              k8s_kube-controller-manager.70259e73_kube-controller-manager-kubernetes-master_default_8f8db766ebc90a00a99244c362284cf1_6eff7640
    13251c4df211        gcr.io/google_containers/kube-scheduler:d1f640dfb379f64daf3ae44286014821            "/bin/sh -c '/usr/lo   5 hours ago         Up 5 hours                              k8s_kube-scheduler.f53b6329_kube-scheduler-kubernetes-master_default_1f3b1657f7f1af67ce9f929d78c64695_de632a80
    b1809bdabd9c        gcr.io/google_containers/pause:0.8.0                                                "/pause"               5 hours ago         Up 5 hours                              k8s_POD.e4cc795_kube-apiserver-kubernetes-master_default_c6b19d563bdbcfb0af80b57377ee905c_767dadb1
    280baf845b00        gcr.io/google_containers/pause:0.8.0                                                "/pause"               5 hours ago         Up 5 hours                              k8s_POD.e4cc795_kube-scheduler-kubernetes-master_default_1f3b1657f7f1af67ce9f929d78c64695_52a4ca74
    615a314a35bf        gcr.io/google_containers/pause:0.8.0                                                "/pause"               5 hours ago         Up 5 hours                              k8s_POD.e4cc795_kube-controller-manager-kubernetes-master_default_8f8db766ebc90a00a99244c362284cf1_97cc1739
    7a554eea05f3        gcr.io/google_containers/pause:0.8.0                                                "/pause"               5 hours ago         Up 5 hours                              k8s_POD.e4cc795_etcd-server-kubernetes-master_default_3595ac402f3a17c29dab95f3e0f64c76_593b9807
  4. Log out of master.

Kubernetes Minion

  1. Check the minions:

    kubernetes> ./cluster/kubectl.sh get minions

    This is not giving the expected output; the issue is filed as kubernetes/kubernetes#9271.

  2. Docker and the Kubelet are running on the minion, and this can be verified by logging in.

    Log in to the minion as:

    cluster> vagrant ssh minion-1
    Last login: Thu Jun  4 19:30:03 2015 from 10.0.2.2
    [vagrant@kubernetes-minion-1 ~]$
  3. Check the status of Docker:

    [vagrant@kubernetes-minion-1 ~]$ sudo systemctl status docker
    docker.service - Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
       Active: active (running) since Thu 2015-06-04 19:29:44 UTC; 1h 24min ago
         Docs: http://docs.docker.com
     Main PID: 2651 (docker)
       CGroup: /system.slice/docker.service
               └─2651 /usr/bin/docker -d --selinux-enabled
    
    Jun 04 20:53:41 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:41Z" level="info" msg="-job containers() = OK (0)"
    Jun 04 20:53:41 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:41Z" level="info" msg="GET /containers/json"
    Jun 04 20:53:41 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:41Z" level="info" msg="+job containers()"
    Jun 04 20:53:41 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:41Z" level="info" msg="-job containers() = OK (0)"
    Jun 04 20:53:42 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:42Z" level="info" msg="GET /containers/json"
    Jun 04 20:53:42 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:42Z" level="info" msg="+job containers()"
    Jun 04 20:53:42 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:42Z" level="info" msg="-job containers() = OK (0)"
    Jun 04 20:53:46 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:46Z" level="info" msg="GET /version"
    Jun 04 20:53:46 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:46Z" level="info" msg="+job version()"
    Jun 04 20:53:46 kubernetes-minion-1 docker[2651]: time="2015-06-04T20:53:46Z" level="info" msg="-job version() = OK (0)"