docs/getting-started/rkt: Rewrite, reorder, reformat
Update the rkt getting started guide.
Add information about rkt stage1 images.
Fix broken/outdated links.
Address reviews on prior kubernetes#725.
Fix the heading hierarchy.
Edit language/clarity.
Reformat markdown source for plaintext legibility.

Supersedes kubernetes#725.
Josh Wood committed Jun 27, 2016
1 parent b1fc471 commit bccfbc6
176 changes: 104 additions & 72 deletions docs/getting-started-guides/rkt/index.md
---
---

This document describes how to run Kubernetes using [rkt](https://github.com/coreos/rkt) as the container runtime.

### Prerequisites

* [Systemd](http://www.freedesktop.org/wiki/Software/systemd/) must be installed and enabled. The minimum systemd version required for Kubernetes v1.3 is `219`. Systemd is used to monitor and manage the pods launched by kubelet.

* [Install the latest rkt release](https://coreos.com/rkt/docs/latest/trying-out-rkt.html). The minimum rkt version required is [v1.9.1](https://github.com/coreos/rkt/releases/tag/v1.9.1). The [CoreOS Linux alpha channel](https://coreos.com/releases/) ships with a recent rkt release, and you can easily [upgrade rkt on CoreOS](https://coreos.com/rkt/docs/latest/install-rkt-in-coreos.html), if necessary.

* The [rkt API service](https://coreos.com/rkt/docs/latest/subcommands/api-service.html) must be running on the node.
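
  The API service can be started directly with `rkt api-service`; a minimal sketch for running it under systemd is shown below (the unit name and rkt binary path are illustrative, and the service listens on rkt's default `localhost:15441`):

  ```shell
  # Run the rkt API service as a transient systemd unit (unit name and binary path are illustrative)
  $ sudo systemd-run --unit=rkt-api-service /usr/bin/rkt api-service
  ```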

### Set up pod networking

You can configure Kubernetes pod networking using the usual Kubernetes `kubenet` and `CNI` [network plugins](http://kubernetes.io/docs/admin/network-plugins/) by setting the kubelet's `--network-plugin` and `--network-plugin-dir` options appropriately.

In addition, rkt supports using its own [*contained network*](https://coreos.com/rkt/docs/latest/networking/overview.html#contained-mode), flannel SDN networking, or some provider networks.

#### rkt contained network

With the *contained network*, rkt will attempt to join pods to a network named `rkt.kubernetes.io`. The contained network is rkt's default, so you can leave the kubelet `--network-plugin` option empty if using this network. Write a network config file under one of rkt's [config directories](https://coreos.com/rkt/docs/latest/configuration.html#command-line-flags), for example:

```shell
# Illustrative CNI config (placeholder values); the network name must be rkt.kubernetes.io
$ cat <<EOF >/etc/rkt/net.d/k8s_network_example.conf
{
  "name": "rkt.kubernetes.io",
  "type": "bridge",
  "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
}
EOF
```

There are some caveats when using the contained network:

* You must create an appropriate CNI configuration file with a network name of `rkt.kubernetes.io`.
* The downwards API and environment variable substitution will not contain the pod IP address.
* The `/etc/hosts` file will not contain the pod's own hostname, although `/etc/hostname` is populated.

#### flannel

While it is recommended to configure flannel using the Kubernetes CNI support, you can alternatively configure flannel to directly provide the subnet for rkt's contained network. An example flannel/CNI config file looks like this:

```shell
# Illustrative flannel CNI config (placeholder values); see the CNI/flannel README below for all options
$ cat <<EOF >/etc/rkt/net.d/k8s_flannel_example.conf
{
  "name": "rkt.kubernetes.io",
  "type": "flannel",
  "delegate": { "isDefaultGateway": true }
}
EOF
```

For more information on flannel configuration, see the [CNI/flannel README](https://github.com/containernetworking/cni/blob/master/Documentation/flannel.md).

#### Google Compute Engine (GCE) network

Each VM on GCE has an additional 256 IP addresses routed to it, so it is possible to forego flannel in smaller clusters. This can most easily be done by setting the kubelet option `--network-plugin=kubenet`.
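
As a rough sketch, the relevant kubelet setting looks like this (all other kubelet flags are omitted here):

```shell
# Use the built-in kubenet network plugin on a GCE node (other kubelet flags omitted)
$ kubelet --network-plugin=kubenet
```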

### Launch a local Kubernetes cluster with the rkt runtime

To use rkt as the container runtime in a local Kubernetes cluster, supply the following flags to the kubelet:

* `--container-runtime=rkt` Set the node's container runtime to rkt.
* `--rkt-api-endpoint=HOST:PORT` Set the endpoint of the rkt API service. Default: `localhost:15441`.
* `--rkt-path=$PATH_TO_RKT_BINARY` Set the path of the rkt binary. Optional. If empty, look for `rkt` in `$PATH`.
* `--rkt-stage1-image` Set the name of the stage1 image, e.g. `coreos.com/rkt/stage1-coreos`. Optional. If not set, the default cgroups isolation stage1 image is used.
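
Taken together, a kubelet invocation using these flags might look like the following sketch (the binary path and stage1 name are placeholders):

```shell
# Illustrative kubelet flags for the rkt runtime; adjust paths and names for your node
$ kubelet \
    --container-runtime=rkt \
    --rkt-api-endpoint=localhost:15441 \
    --rkt-path=/usr/bin/rkt \
    --rkt-stage1-image=coreos.com/rkt/stage1-coreos
```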

If you are using the [hack/local-up-cluster.sh](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/hack/local-up-cluster.sh) script to launch the local cluster, you can edit the environment variables `CONTAINER_RUNTIME`, `RKT_PATH`, and `RKT_STAGE1_IMAGE` to set these flags. `RKT_PATH` and `RKT_STAGE1_IMAGE` are optional if `rkt` is in `$PATH` with appropriate configuration.

```shell
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=<rkt_binary_path>
$ export RKT_STAGE1_IMAGE=<stage1-name>
```

Now you can launch the cluster using the `local-up-cluster.sh` script:

```shell
$ hack/local-up-cluster.sh
```

We are also working on setting up rkt as the container runtime for [minikube](https://github.com/kubernetes/minikube/issues/168).

### Launch a rktnetes cluster on Google Compute Engine (GCE)

This section outlines using the `kube-up` script to launch a CoreOS/rkt cluster on GCE.

First, specify the OS distribution, your GCE project id, and the instance images for the master and nodes. Set the `KUBE_CONTAINER_RUNTIME` to `rkt`:

```shell
$ export KUBE_OS_DISTRIBUTION=coreos
$ export KUBE_GCE_MASTER_PROJECT=<my-proj-id>
$ export KUBE_GCE_MASTER_IMAGE=<image_id>
$ export KUBE_GCE_NODE_PROJECT=<my-proj-id>
$ export KUBE_GCE_NODE_IMAGE=<image_id>
$ export KUBE_CONTAINER_RUNTIME=rkt
```

Optionally, set the version of rkt by setting `KUBE_RKT_VERSION`:

```shell
$ export KUBE_RKT_VERSION=1.9.1
```

Optionally, select an alternative [stage1 isolator](#modular-isolation-with-interchangeable-stage1) for the container runtime by setting `KUBE_RKT_STAGE1_IMAGE`:

```shell
$ export KUBE_RKT_STAGE1_IMAGE=<stage1-name>
```

Then you can launch the cluster with:

```shell
$ cluster/kube-up.sh
```

### Launch a rktnetes cluster on AWS

The `kube-up` script is not yet supported on AWS. Instead, we recommend following the [Kubernetes on AWS guide](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html) to launch a CoreOS Kubernetes cluster on AWS, then setting kubelet options as above.

### Deploy apps to the cluster

After creating the cluster, you can start deploying applications. For an introductory example, [deploy a simple nginx web server](/docs/user-guide/simple-nginx). Note that this example did not have to be modified for use with a "rktnetes" cluster. More examples can be found in the [Kubernetes examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/).
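
For example, a first deployment might look like this minimal sketch (the deployment name and image here are illustrative):

```shell
# Run a simple nginx deployment and expose it inside the cluster (names are illustrative)
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
$ kubectl expose deployment my-nginx --port=80
$ kubectl get pods -o wide
```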

### Modular isolation with interchangeable stage1

rkt executes containers in an interchangeable isolation environment. This facility is called the [*stage1* image](https://coreos.com/rkt/docs/latest/devel/architecture.html#stage-1). There are currently three supported rkt stage1 images:

* `systemd-nspawn` stage1, the default: isolates running containers with Linux kernel namespaces and cgroups in a manner similar to the default container runtime.
* [`lkvm` stage1](https://coreos.com/rkt/docs/latest/running-lkvm-stage1.html): runs containers inside a KVM hypervisor-managed virtual machine.
* [`fly` stage1](https://coreos.com/rkt/docs/latest/running-fly-stage1.html): isolates containers with only a `chroot`, giving host-level access to mount and network namespaces for specially-privileged utilities.

In addition to the three provided stage1 images, you can [create your own](https://coreos.com/rkt/docs/latest/devel/stage1-implementors-guide.html) for specific isolation requirements. If no configuration is set, the [default stage1](https://coreos.com/rkt/docs/latest/build-configure.html#parameters-for-setting-up-default-stage1-image) is used. There are two ways to select a different stage1, either per-node or per-pod:

* Set the kubelet's `--rkt-stage1-image` flag, which tells the kubelet the stage1 image to use for every pod on the node. For example, `--rkt-stage1-image=coreos.com/rkt/stage1-coreos` selects the default systemd-nspawn stage1.
* Set the annotation `rkt.alpha.kubernetes.io/stage1-name-override` to override the stage1 used to execute a given pod. This allows for mixing different container isolation mechanisms on the same cluster or on the same node. For example, the following (shortened) pod manifest will run its pod with the `fly` stage1 to give the application -- the `kubelet` in this case -- access to the host's namespaces:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubelet
  namespace: kube-system
  labels:
    k8s-app: kubelet
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
  containers:
  - name: kubelet
    image: quay.io/coreos/hyperkube:v1.3.0-beta.2_coreos.0
    command:
    - kubelet
    - --api-servers=127.0.0.1:8080
    - --config=/etc/kubernetes/manifests
    - --allow-privileged
    - --kubeconfig=/etc/kubernetes/kubeconfig
    securityContext:
      privileged: true
[...]
```

#### Notes on using different stage1 images

Setting the stage1 annotation could potentially give the pod root privileges. Because of this, the `privileged` boolean in the pod's `securityContext` must be set to `true`.

Use rkt's own [*contained network*](#rkt-contained-network) with the LKVM stage1, because the CNI plugin driver does not yet fully support the hypervisor-based runtime.

### Known issues and differences between rkt and Docker

rkt and the default node container engine have very different designs, as do rkt's native ACI and the Docker container image format. Users may experience different behaviors when switching from one container engine to the other. More information can be found [in the Kubernetes rkt notes](/docs/getting-started-guides/rkt/notes/).

### Debugging

Here are a few tips for troubleshooting Kubernetes with the rkt container engine:

#### Check rkt pod status

To check the status of running pods, use the rkt subcommands [`rkt list`](https://coreos.com/rkt/docs/latest/subcommands/list.html), [`rkt status`](https://coreos.com/rkt/docs/latest/subcommands/status.html), and [`rkt image list`](https://coreos.com/rkt/docs/latest/subcommands/image.html#rkt-image-list). See the [rkt commands documentation](https://coreos.com/rkt/docs/latest/commands.html) for more information about rkt subcommands.
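
For example (the pod UUID is a placeholder):

```shell
# Inspect rkt pods and images on the node
$ rkt list
$ rkt status <pod-uuid>
$ rkt image list
```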

#### Check journal logs

Check a pod's log using `journalctl`. Pods are managed and named as systemd units. The pod's unit name is formed by concatenating a `k8s_` prefix with the pod UUID, in a format like `k8s_${RKT_UUID}`. Find the pod's UUID with `rkt list` to assemble its service name, then ask journalctl for the logs:

```shell
$ sudo journalctl -u k8s_ad623346
```

#### Log verbosity

By default, the log verbosity level is 2. In order to see more log messages related to rkt, set this level to 4 or above. For a local cluster, set the environment variable: `LOG_LEVEL=4`.
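
For example, when launching a local cluster (a sketch; assumes the script picks up `LOG_LEVEL` from the environment, as described above):

```shell
# Raise log verbosity to 4 for a local cluster run
$ LOG_LEVEL=4 hack/local-up-cluster.sh
```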

#### Check Kubernetes events and logs

Kubernetes provides various tools for troubleshooting and examination. More information can be found [in the app troubleshooting guide](/docs/user-guide/application-troubleshooting).
