rkt: Add docs for explaining how to choose different stage1 images. #725

96 changes: 92 additions & 4 deletions docs/getting-started-guides/rkt/index.md
@@ -77,6 +77,89 @@ For more information on flannel configuration, please read [CNI/flannel README](
Each VM on GCE has an additional 256 IP addresses routed to it, so it is possible to forego flannel in smaller clusters.
This can most easily be done by using the builtin kubenet plugin, by setting the kubelet flag `--network-plugin=kubenet`.

### Choose stage1 image

rkt allows you to choose between different isolation environments for your container images, a feature known as [**rkt stage1**](https://coreos.com/rkt/docs/latest/devel/architecture.html#stage-1).

The following three stage1s are supported by the rkt maintainers:

- `systemd-nspawn stage1`, the default, which runs container images in a namespace/cgroup isolated environment.
- [`lkvm stage1`](https://github.com/coreos/rkt/blob/v1.9.1/Documentation/running-lkvm-stage1.md), which runs container images inside a KVM hypervisor.
- [`fly stage1`](https://github.com/coreos/rkt/blob/v1.9.1/Documentation/running-fly-stage1.md), which runs container images within a `chroot` environment, but without namespace/cgroup isolation.
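Outside of Kubernetes, a stage1 can also be selected per rkt invocation. The sketch below follows the rkt documentation linked above; the `busybox` image is illustrative, and flag availability should be verified against your rkt version:

```shell
# Run an image under the default systemd-nspawn stage1:
sudo rkt run --insecure-options=image docker://busybox --interactive

# Run the same image under the fly stage1 (chroot only, no namespace/cgroup isolation):
sudo rkt run --insecure-options=image --stage1-name=coreos.com/rkt/stage1-fly docker://busybox --interactive
```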

In addition to the provided stage1 images, you can develop and use your own. When no configuration value is set, the default stage1 is used. The following options may be used to override this behavior:

- Set the kubelet's `--rkt-stage1-image` flag, which tells the kubelet which stage1 image to use for every pod on the node. For example, `--rkt-stage1-image=coreos/rkt/stage1-coreos` chooses the default systemd-nspawn stage1.
- Set the annotation `rkt.alpha.kubernetes.io/stage1-name-override` on a given pod to override the stage1 used to launch it.

This allows users to run pods under different levels of isolation on the same node, such as running hypervisor based containers alongside namespace isolated ones.

For example, the following pod is run with the `fly stage1`, which lets the application (the kubelet in this case) run in the host's namespaces.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubelet
  namespace: kube-system
  labels:
    k8s-app: kubelet
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
  containers:
  - name: kubelet
    image: quay.io/coreos/hyperkube:v1.3.0-beta.2_coreos.0
    command:
    - kubelet
    - --api-servers=127.0.0.1:8080
    - --config=/etc/kubernetes/manifests
    - --allow-privileged
    - --kubeconfig=/etc/kubernetes/kubeconfig
    securityContext:
      privileged: true
    volumeMounts:
    - name: dev
      mountPath: /dev
    - name: run
      mountPath: /run
    - name: sys
      mountPath: /sys
      readOnly: true
    - name: etc-kubernetes
      mountPath: /etc/kubernetes
      readOnly: true
    - name: etc-ssl-certs
      mountPath: /etc/ssl/certs
      readOnly: true
    - name: var-lib-kubelet
      mountPath: /var/lib/kubelet
  hostNetwork: true
  volumes:
  - name: dev
    hostPath:
      path: /dev
  - name: run
    hostPath:
      path: /run
  - name: sys
    hostPath:
      path: /sys
  - name: etc-kubernetes
    hostPath:
      path: /etc/kubernetes
  - name: etc-ssl-certs
    hostPath:
      path: /usr/share/ca-certificates
  - name: var-lib-kubelet
    hostPath:
      path: /var/lib/kubelet
```

#### Notes on using different stage1 images

Setting the stage1 annotation could potentially give the pod root privileges. Because of this, the pod is required to set the `privileged` boolean in the container's `securityContext` to `true`.
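The requirement above can be sketched as a minimal pod spec (the pod name and image are illustrative, not from a real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fly-example
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
  containers:
  - name: example
    image: example.com/example:latest   # illustrative image
    securityContext:
      privileged: true                  # required when overriding the stage1
```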

If you want to use the lkvm stage1, we suggest you use rkt's own [Contained Network](#use-rkts-contained-networking) mentioned above, because today's CNI plugin driver does not fully support hypervisor based container runtimes yet.

### Launch a local cluster

To use rkt as the container runtime, we need to supply the following flags to kubelet:
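As a hedged sketch of such an invocation (the rkt path is illustrative, and flag availability should be checked against `kubelet --help` for your Kubernetes version):

```shell
# Point the kubelet at rkt and choose the default systemd-nspawn stage1.
kubelet \
  --container-runtime=rkt \
  --rkt-path=/usr/local/bin/rkt \
  --rkt-stage1-image=coreos/rkt/stage1-coreos
```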
If you are using the [hack/local-up-cluster.sh](https://github.com/kubernetes/ku

```shell
$ export CONTAINER_RUNTIME=rkt

# The following two are optional if rkt is in your $PATH with a default stage1 configured.
$ export RKT_PATH=<rkt_binary_path>
$ export RKT_STAGE1_IMAGE=<stage1-name>
```

Then we can launch the local cluster using the script:
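As referenced earlier in this guide, that script is `hack/local-up-cluster.sh`:

```shell
$ hack/local-up-cluster.sh
```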
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
$ export KUBE_RKT_VERSION=1.9.1
```

You can also override the default stage1 image by setting `KUBE_RKT_STAGE1_IMAGE`:

```shell
$ export KUBE_RKT_STAGE1_IMAGE=<stage1-name>
```

Then you can launch the cluster.

Here are several tips in case you run into any issues.

##### Check logs

By default, the log verbose level is 2. In order to see more logs related to rkt, we can set the verbose level to 4 or above.
For a local cluster, we can set the environment variable `LOG_LEVEL=4`.
If the cluster is using salt, we can edit the [logging.sls](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster/saltbase/pillar/logging.sls) in the saltbase.

##### Check rkt pod status
