Update setup.md (#281)
weekface authored and tennix committed Feb 28, 2019
1 parent c8e9cb5 commit ea6934b
Showing 1 changed file with 37 additions and 4 deletions: docs/setup.md

It is recommended to enable [RBAC](https://kubernetes.io/docs/admin/authorization/rbac) in the Kubernetes cluster. Otherwise you may want to set `rbac.create` to `false` in the `values.yaml` of both the tidb-operator and tidb-cluster charts.
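
If you do run without RBAC, one way to override the default without editing the chart files is Helm's `--set` flag; this is only a sketch, using the chart path, release name, and namespace from the install command shown later in this guide:

```shell
# Disable RBAC resource creation for the tidb-operator chart at install time:
$ helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin --set rbac.create=false
```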

Because TiDB by default uses at most 40960 file descriptors, the ulimit of the [worker node](https://access.redhat.com/solutions/61334) and its Docker daemon must be configured to 40960 or greater:

```shell
$ sudo vim /etc/systemd/system/docker.service
```

Set `LimitNOFILE` to 40960 or greater.
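
For example, the relevant line in the unit file's `[Service]` section might look like the sketch below (the exact value only needs to be at least 40960); reload systemd and restart Docker afterwards so the new limit takes effect:

```shell
# In the [Service] section of /etc/systemd/system/docker.service:
#   LimitNOFILE=1048576
# Apply the change:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```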

Otherwise you have to change TiKV's `max-open-files` in the configuration file `charts/tidb-cluster/templates/config/_tikv-config.tpl` to match your worker node's `ulimit -n`, but this will impact TiDB performance.
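
You can check the current limit on the worker node first; the commented TOML lines below are only an illustrative sketch, assuming `max-open-files` sits under the `[rocksdb]` section of the TiKV config template:

```shell
# Check the worker node's current file descriptor limit:
$ ulimit -n
# Illustrative entry in charts/tidb-cluster/templates/config/_tikv-config.tpl:
#   [rocksdb]
#   max-open-files = 1024
```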

## Helm

You can follow the Helm official [documentation](https://helm.sh) to install Helm.

## Local Persistent Volume

It is recommended to format local disks with the ext4 filesystem. The local persistent volume directory must be a mount point: either a whole-disk mount or a [bind mount](https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount):

### Disk mount

Mount the local SSD disks of your Kubernetes nodes under subdirectories of `/mnt/disks`. For example, if your data disk is `/dev/nvme0n1`, you can format and mount it with the following commands:

```shell
$ sudo mkfs.ext4 /dev/nvme0n1
$ sudo mount -t ext4 -o nodelalloc /dev/nvme0n1 /mnt/disks/disk0
```
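
To verify that the disk was formatted and mounted as expected, you can check, for example:

```shell
# Show the filesystem on the device and the capacity at the mount point:
$ lsblk -f /dev/nvme0n1
$ df -h /mnt/disks/disk0
```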

### Bind mount

Bind mounts have disadvantages for TiDB: every volume reports the size of the whole disk, and bind mount volumes have no quota or isolation. If your data directory is `/data`, you can create a bind mount with the following commands:

```shell
$ sudo mkdir -p /data/local-pv01
$ sudo mkdir -p /mnt/disks/local-pv01
$ sudo mount --bind /data/local-pv01 /mnt/disks/local-pv01
```

Use this command to confirm that the mount point exists:

```shell
$ mount | grep /mnt/disks/local-pv01
```

To auto-mount the disks when the operating system boots, edit `/etc/fstab` to include the mount information:

```shell
$ echo "/dev/nvme0n1 /mnt/disks/disk0 none bind 0 0" >> /etc/fstab
$ echo "/data/local-pv01 /mnt/disks/local-pv01 none bind 0 0" >> /etc/fstab
```
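
One way to sanity-check the new entries without rebooting is to let mount process `/etc/fstab` again; errors here usually indicate a malformed line:

```shell
# Mount everything listed in /etc/fstab that is not already mounted:
$ sudo mount -a
```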

After mounting all data disks on Kubernetes nodes, you can deploy [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) to automatically provision the mounted disks as Local PersistentVolumes.

```shell
$ kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
```
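
Once the provisioner is running, you can confirm that PersistentVolumes were created for the mounted disks:

```shell
# Each discovered mount point should show up as an Available local PV:
$ kubectl get pv
```
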
Uncomment `scheduler.kubeSchedulerImage` in `values.yaml` and set it to the same version as your Kubernetes cluster.
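
The snippet below is only an illustrative sketch of what the uncommented setting might look like, assuming the tidb-operator chart's `values.yaml` and that the value is a full image reference; check your cluster's server version first:

```shell
# Check the Kubernetes server version to pick the matching scheduler image tag:
$ kubectl version --short
# Illustrative values.yaml entry (path and image are assumptions):
#   scheduler:
#     kubeSchedulerImage: k8s.gcr.io/kube-scheduler:v1.12.5
```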

```shell
$ git clone https://github.com/pingcap/tidb-operator.git
$ cd tidb-operator
$ helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin
$ kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator
```
