en: tidb cluster offline installation (#521)
* en: tidb cluster offline installation

Signed-off-by: lucklove <gnu.crazier@gmail.com>

* Apply suggestions from code review

Co-authored-by: Ran <huangran@pingcap.com>

lucklove and ran-huang authored Jul 6, 2020
1 parent 384922e commit 02a5a4c
Showing 6 changed files with 100 additions and 0 deletions.
65 changes: 65 additions & 0 deletions en/deploy-on-general-kubernetes.md
@@ -44,6 +44,71 @@ After you configure TiDB cluster, deploy the TiDB cluster by the following steps
>
> It is recommended to organize configurations for a TiDB cluster under a directory of `cluster_name` and save it as `${cluster_name}/tidb-cluster.yaml`.

If the server cannot access the Internet, you need to download the Docker images used by the TiDB cluster on a machine with Internet access, upload them to the server, and then run `docker load` to install the images on the server.

To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v4.0.0):

```shell
pingcap/pd:v4.0.0
pingcap/tikv:v4.0.0
pingcap/tidb:v4.0.0
pingcap/tidb-binlog:v4.0.0
pingcap/ticdc:v4.0.0
pingcap/tiflash:v4.0.0
pingcap/tidb-monitor-reloader:v1.0.1
pingcap/tidb-monitor-initializer:v4.0.0
grafana/grafana:6.0.1
prom/prometheus:v2.18.1
busybox:1.26.2
```

Next, pull all these images and save them as tar files by running the following commands:

{{< copyable "shell-regular" >}}

```shell
docker pull pingcap/pd:v4.0.0
docker pull pingcap/tikv:v4.0.0
docker pull pingcap/tidb:v4.0.0
docker pull pingcap/tidb-binlog:v4.0.0
docker pull pingcap/ticdc:v4.0.0
docker pull pingcap/tiflash:v4.0.0
docker pull pingcap/tidb-monitor-reloader:v1.0.1
docker pull pingcap/tidb-monitor-initializer:v4.0.0
docker pull grafana/grafana:6.0.1
docker pull prom/prometheus:v2.18.1
docker pull busybox:1.26.2
docker save -o pd-v4.0.0.tar pingcap/pd:v4.0.0
docker save -o tikv-v4.0.0.tar pingcap/tikv:v4.0.0
docker save -o tidb-v4.0.0.tar pingcap/tidb:v4.0.0
docker save -o tidb-binlog-v4.0.0.tar pingcap/tidb-binlog:v4.0.0
docker save -o ticdc-v4.0.0.tar pingcap/ticdc:v4.0.0
docker save -o tiflash-v4.0.0.tar pingcap/tiflash:v4.0.0
docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
docker save -o tidb-monitor-initializer-v4.0.0.tar pingcap/tidb-monitor-initializer:v4.0.0
docker save -o grafana-6.0.1.tar grafana/grafana:6.0.1
docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
docker save -o busybox-1.26.2.tar busybox:1.26.2
```
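
The pull-and-save commands above all follow the same pattern, so you can also generate them with a short script. The following sketch (an illustration, not part of the official procedure; the `pull-images.sh` file name is arbitrary) writes the commands to a file that you can review and then run with `sh pull-images.sh`. The tarball names are derived from the image names in the same way as above:

```shell
# Images required by the TiDB cluster (v4.0.0 in this example).
images="
pingcap/pd:v4.0.0
pingcap/tikv:v4.0.0
pingcap/tidb:v4.0.0
pingcap/tidb-binlog:v4.0.0
pingcap/ticdc:v4.0.0
pingcap/tiflash:v4.0.0
pingcap/tidb-monitor-reloader:v1.0.1
pingcap/tidb-monitor-initializer:v4.0.0
grafana/grafana:6.0.1
prom/prometheus:v2.18.1
busybox:1.26.2
"

# Generate the pull/save commands instead of running them directly,
# so that the script can be reviewed before execution.
: > pull-images.sh
for image in ${images}; do
  # pingcap/pd:v4.0.0 -> pd-v4.0.0.tar
  tarball="$(basename "${image}" | tr ':' '-').tar"
  echo "docker pull ${image}" >> pull-images.sh
  echo "docker save -o ${tarball} ${image}" >> pull-images.sh
done
```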

Next, upload these Docker images to the server, and run `docker load` to install them on the server:

{{< copyable "shell-regular" >}}

```shell
docker load -i pd-v4.0.0.tar
docker load -i tikv-v4.0.0.tar
docker load -i tidb-v4.0.0.tar
docker load -i tidb-binlog-v4.0.0.tar
docker load -i ticdc-v4.0.0.tar
docker load -i tiflash-v4.0.0.tar
docker load -i tidb-monitor-reloader-v1.0.1.tar
docker load -i tidb-monitor-initializer-v4.0.0.tar
docker load -i grafana-6.0.1.tar
docker load -i prometheus-v2.18.1.tar
docker load -i busybox-1.26.2.tar
```
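
After loading, you can verify that no image is missing. The following sketch is an illustration only (the `check_images` helper is not part of the official docs): it compares the expected image list against a list of locally available images, such as the output of `docker images`:

```shell
# check_images takes a newline-separated list of locally available images
# (e.g. the output of `docker images --format '{{.Repository}}:{{.Tag}}'`)
# and prints every expected image that is not in the list.
check_images() {
  expected="pingcap/pd:v4.0.0 pingcap/tikv:v4.0.0 pingcap/tidb:v4.0.0 \
pingcap/tidb-binlog:v4.0.0 pingcap/ticdc:v4.0.0 pingcap/tiflash:v4.0.0 \
pingcap/tidb-monitor-reloader:v1.0.1 pingcap/tidb-monitor-initializer:v4.0.0 \
grafana/grafana:6.0.1 prom/prometheus:v2.18.1 busybox:1.26.2"
  for image in ${expected}; do
    echo "$1" | grep -qx "${image}" || echo "missing: ${image}"
  done
}

# Usage on the offline server:
# check_images "$(docker images --format '{{.Repository}}:{{.Tag}}')"
```

If the function prints nothing, all required images are present.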

3. View the Pod status:

{{< copyable "shell-regular" >}}
2 changes: 2 additions & 0 deletions en/deploy-ticdc.md
@@ -67,3 +67,5 @@ To deploy TiCDC when deploying the TiDB cluster, refer to [Deploy TiDB in Genera
}
]
```

If the server cannot access the Internet, refer to [deploy TiDB cluster](deploy-on-general-kubernetes.md#deploy-tidb-cluster) to download the required Docker images on a machine with Internet access and upload them to the server.
2 changes: 2 additions & 0 deletions en/deploy-tidb-binlog.md
@@ -201,6 +201,8 @@ To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB clust
helm install pingcap/tidb-drainer --name=${cluster_name} --namespace=${namespace} --version=${chart_version} -f values.yaml
```

If the server cannot access the Internet, refer to [deploy TiDB cluster](deploy-on-general-kubernetes.md#deploy-tidb-cluster) to download the required Docker images on a machine with Internet access and upload them to the server.

> **Note:**
>
> This chart must be installed to the same namespace as the source TiDB cluster.
2 changes: 2 additions & 0 deletions en/deploy-tiflash.md
@@ -65,3 +65,5 @@ TiFlash supports mounting multiple Persistent Volumes (PVs). If you want to conf
> Since TiDB Operator will mount PVs automatically in the **order** of the items in the `storageClaims` list, if you need to add more disks to TiFlash, make sure to append the new item only to the **end** of the original items, and **DO NOT** modify the order of the original items.

To [add TiFlash component to an existing TiDB cluster](https://pingcap.com/docs/stable/tiflash/deploy-tiflash/#add-tiflash-component-to-an-existing-tidb-cluster), `replication.enable-placement-rules` should be set to `true` in PD. After you add the TiFlash configuration in TidbCluster by taking the above steps, TiDB Operator will automatically configure `replication.enable-placement-rules: "true"` in PD.

If the server cannot access the Internet, refer to [deploy TiDB cluster](deploy-on-general-kubernetes.md#deploy-tidb-cluster) to download the required Docker images on a machine with Internet access and upload them to the server.
27 changes: 27 additions & 0 deletions en/initialize-a-cluster.md
@@ -75,3 +75,30 @@ kubectl apply -f ${cluster_name}/tidb-initializer.yaml --namespace=${namespace}
The above command automatically creates an initialization Job. This Job tries to set the initial password for the `root` account using the provided `secret` object, and to create the other accounts and passwords if they are specified.

After the initialization, the Pod state becomes `Completed`. If you log in via the MySQL client later, you need to specify the password created by the Job.

If the server cannot access the Internet, you need to download the Docker image used for cluster initialization on a machine with Internet access, upload it to the server, and then run `docker load` to install the image on the server.

The following Docker image is used to initialize a TiDB cluster:

```shell
tnir/mysqlclient:latest
```

Next, pull the image and save it as a tar file by running the following commands:

{{< copyable "shell-regular" >}}

```shell
docker pull tnir/mysqlclient:latest
docker save -o mysqlclient-latest.tar tnir/mysqlclient:latest
```

Next, upload the Docker image to the server, and run `docker load` to install it on the server:

{{< copyable "shell-regular" >}}

```shell
docker load -i mysqlclient-latest.tar
```
2 changes: 2 additions & 0 deletions en/monitor-using-tidbmonitor.md
@@ -73,6 +73,8 @@ You can also quickly deploy TidbMonitor using the following command:
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml -n ${namespace}
```

If the server cannot access the Internet, refer to [deploy TiDB cluster](deploy-on-general-kubernetes.md#deploy-tidb-cluster) to download the required Docker images on a machine with Internet access and upload them to the server.

After the deployment is finished, you can check whether TidbMonitor is started by executing the `kubectl get pod` command:

{{< copyable "shell-regular" >}}
