rkt: Add docs for explaining how to choose different stage1 images.
Yifan Gu committed Jun 25, 2016
1 parent be2d474 commit 822e577
Showing 1 changed file with 90 additions and 2 deletions: `docs/getting-started-guides/rkt/index.md`
For more information on flannel configuration, please read the CNI/flannel README.
Each VM on GCE has an additional 256 IP addresses routed to it, so it is possible to forego flannel in smaller clusters.
This can most easily be done by using the builtin kubenet plugin, by setting the kubelet flag `--network-plugin=kubenet`.

### Choose stage1 image

rkt is designed to let you run container images inside different isolation environments, a mechanism known as [**rkt stage1**](https://coreos.com/rkt/docs/latest/devel/architecture.html#stage-1).
Today, there are three stage1 images officially supported by CoreOS:

- `systemd-nspawn stage1`, the default, which runs container images in a namespace/cgroup isolated environment.
- [`lkvm stage1`](https://github.com/coreos/rkt/blob/v1.9.1/Documentation/running-lkvm-stage1.md), which runs container images inside a KVM hypervisor.
- [`fly stage1`](https://github.com/coreos/rkt/blob/v1.9.1/Documentation/running-fly-stage1.md), which runs container images within a `chroot` environment, but without namespace/cgroup isolation.

You can choose whichever stage1 fits your use case. There are typically two ways to specify the stage1 image:

- Set the kubelet's `--rkt-stage1-image` flag, which tells the kubelet which stage1 image to use for every pod on the node.
  For example, `--rkt-stage1-image=coreos/rkt/stage1-coreos` chooses the default systemd-nspawn stage1.
- Set the pod annotation `rkt.alpha.kubernetes.io/stage1-name-override` to override the stage1 choice for a single pod.
  For example, the following pod will run with the `fly stage1`, which lets the application (the kubelet in this case) run in the host's namespaces.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubelet
  namespace: kube-system
  labels:
    k8s-app: kubelet
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: stage1-fly
spec:
  containers:
  - name: kubelet
    image: quay.io/coreos/hyperkube:v1.3.0-beta.2_coreos.0
    command:
    - kubelet
    - --api-servers=127.0.0.1:8080
    - --config=/etc/kubernetes/manifests
    - --allow-privileged
    - --kubeconfig=/etc/kubernetes/kubeconfig
    securityContext:
      privileged: true
    volumeMounts:
    - name: dev
      mountPath: /dev
    - name: run
      mountPath: /run
    - name: sys
      mountPath: /sys
      readOnly: true
    - name: etc-kubernetes
      mountPath: /etc/kubernetes
      readOnly: true
    - name: etc-ssl-certs
      mountPath: /etc/ssl/certs
      readOnly: true
    - name: var-lib-kubelet
      mountPath: /var/lib/kubelet
  hostNetwork: true
  volumes:
  - name: dev
    hostPath:
      path: /dev
  - name: run
    hostPath:
      path: /run
  - name: sys
    hostPath:
      path: /sys
  - name: etc-kubernetes
    hostPath:
      path: /etc/kubernetes
  - name: etc-ssl-certs
    hostPath:
      path: /usr/share/ca-certificates
  - name: var-lib-kubelet
    hostPath:
      path: /var/lib/kubelet
```
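As a sketch of the first option above (the node-wide kubelet flag), the kubelet invocation might look like the following. Everything here except `--rkt-stage1-image` is an assumption for illustration, and the `stage1-kvm` image name may differ across rkt versions:

```shell
# Hypothetical kubelet invocation that selects the KVM stage1 for every
# pod on this node; the paths and companion flags are illustrative only.
kubelet \
  --container-runtime=rkt \
  --rkt-path=/usr/local/bin/rkt \
  --rkt-stage1-image=coreos.com/rkt/stage1-kvm
```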

#### Notes on using a different stage1 image

Setting the stage1 annotation could potentially give the pod root privileges. Because of this, the pod is required to set the `privileged` boolean in the container's `securityContext` to `true`.

Also, if you want to use the lkvm stage1, we suggest using rkt's own [Contained Network](#use-rkts-contained-networking) mentioned above as the network setup, because today's CNI plugin driver does not yet fully support hypervisor-based container runtimes.
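If you instead choose the lkvm stage1 on a per-pod basis, the annotation would look like the following. The `stage1-kvm` name is an assumption by analogy with `stage1-fly`; check your rkt version's stage1 naming:

```yaml
metadata:
  annotations:
    # Hypothetical: select the KVM stage1 for this pod only.
    rkt.alpha.kubernetes.io/stage1-name-override: stage1-kvm
```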

### Launch a local cluster

To use rkt as the container runtime, we need to supply the following flags to kubelet:
If you are using the `hack/local-up-cluster.sh` script:

```shell
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=<rkt_binary_path>
$ export RKT_STAGE1_IMAGE=<stage1-name>
```
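For instance, with concrete values (the rkt binary path below is an illustrative assumption, and the stage1 name uses the default from earlier in this guide):

```shell
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=/usr/local/bin/rkt              # wherever the rkt binary lives
$ export RKT_STAGE1_IMAGE=coreos/rkt/stage1-coreos  # the default systemd-nspawn stage1
```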

Then we can launch the local cluster using the script:
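With those variables exported, launching is just a matter of running the script from the root of the Kubernetes source tree (assuming that is your working directory):

```shell
$ hack/local-up-cluster.sh
```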
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
$ export KUBE_RKT_VERSION=1.9.1
```

You can also override the default stage1 image by setting `KUBE_RKT_STAGE1_IMAGE`:

```shell
$ export KUBE_RKT_STAGE1_IMAGE=<stage1-name>
```

Then you can launch the cluster by:

```shell
$ cluster/kube-up.sh
```
