From 02a5a4c7f49a1b82b01bddc38208aa022b7057d2 Mon Sep 17 00:00:00 2001
From: SIGSEGV
Date: Mon, 6 Jul 2020 14:49:52 +0800
Subject: [PATCH] en: tidb cluster offline installation (#521)

* en: tidb cluster offline installation

Signed-off-by: lucklove

* Apply suggestions from code review

Co-authored-by: Ran

Co-authored-by: Ran
---
 en/deploy-on-general-kubernetes.md | 65 ++++++++++++++++++++++++++++++
 en/deploy-ticdc.md                 |  2 +
 en/deploy-tidb-binlog.md           |  2 +
 en/deploy-tiflash.md               |  2 +
 en/initialize-a-cluster.md         | 27 +++++++++++++
 en/monitor-using-tidbmonitor.md    |  2 +
 6 files changed, 100 insertions(+)

diff --git a/en/deploy-on-general-kubernetes.md b/en/deploy-on-general-kubernetes.md
index e2e15758ab..3618e4b439 100644
--- a/en/deploy-on-general-kubernetes.md
+++ b/en/deploy-on-general-kubernetes.md
@@ -44,6 +44,71 @@ After you configure TiDB cluster, deploy the TiDB cluster by the following steps
     >
     > It is recommended to organize configurations for a TiDB cluster under a directory of `cluster_name` and save it as `${cluster_name}/tidb-cluster.yaml`.
 
+    If the server does not have an external network, you need to download the Docker images used by the TiDB cluster on a machine with Internet access, upload them to the server, and then use `docker load` to install the Docker images on the server.
+
+    To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v4.0.0):
+
+    ```shell
+    pingcap/pd:v4.0.0
+    pingcap/tikv:v4.0.0
+    pingcap/tidb:v4.0.0
+    pingcap/tidb-binlog:v4.0.0
+    pingcap/ticdc:v4.0.0
+    pingcap/tiflash:v4.0.0
+    pingcap/tidb-monitor-reloader:v1.0.1
+    pingcap/tidb-monitor-initializer:v4.0.0
+    grafana/grafana:6.0.1
+    prom/prometheus:v2.18.1
+    busybox:1.26.2
+    ```
+
+    Next, download all these images with the following commands:
+
+    {{< copyable "shell-regular" >}}
+
+    ```shell
+    docker pull pingcap/pd:v4.0.0
+    docker pull pingcap/tikv:v4.0.0
+    docker pull pingcap/tidb:v4.0.0
+    docker pull pingcap/tidb-binlog:v4.0.0
+    docker pull pingcap/ticdc:v4.0.0
+    docker pull pingcap/tiflash:v4.0.0
+    docker pull pingcap/tidb-monitor-reloader:v1.0.1
+    docker pull pingcap/tidb-monitor-initializer:v4.0.0
+    docker pull grafana/grafana:6.0.1
+    docker pull prom/prometheus:v2.18.1
+    docker pull busybox:1.26.2
+    docker save -o pd-v4.0.0.tar pingcap/pd:v4.0.0
+    docker save -o tikv-v4.0.0.tar pingcap/tikv:v4.0.0
+    docker save -o tidb-v4.0.0.tar pingcap/tidb:v4.0.0
+    docker save -o tidb-binlog-v4.0.0.tar pingcap/tidb-binlog:v4.0.0
+    docker save -o ticdc-v4.0.0.tar pingcap/ticdc:v4.0.0
+    docker save -o tiflash-v4.0.0.tar pingcap/tiflash:v4.0.0
+    docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
+    docker save -o tidb-monitor-initializer-v4.0.0.tar pingcap/tidb-monitor-initializer:v4.0.0
+    docker save -o grafana-6.0.1.tar grafana/grafana:6.0.1
+    docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
+    docker save -o busybox-1.26.2.tar busybox:1.26.2
+    ```
+
+    Next, upload these Docker images to the server, and execute `docker load` to install them on the server:
+
+    {{< copyable "shell-regular" >}}
+
+    ```shell
+    docker load -i pd-v4.0.0.tar
+    docker load -i tikv-v4.0.0.tar
+    docker load -i tidb-v4.0.0.tar
+    docker load -i tidb-binlog-v4.0.0.tar
+    docker load -i ticdc-v4.0.0.tar
+    docker load -i tiflash-v4.0.0.tar
+    docker load -i tidb-monitor-reloader-v1.0.1.tar
+    docker load -i tidb-monitor-initializer-v4.0.0.tar
+    docker load -i grafana-6.0.1.tar
+    docker load -i prometheus-v2.18.1.tar
+    docker load -i busybox-1.26.2.tar
+    ```
+
 3. View the Pod status:
 
     {{< copyable "shell-regular" >}}
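The `docker pull` and `docker save` sequence in the hunk above can also be scripted instead of typed out command by command. Below is a minimal sketch, not part of the patch: it assumes a Bash shell on the machine with Internet access, and the archive-naming rule is only an illustration chosen to reproduce the `.tar` names used in the docs.

```shell
#!/usr/bin/env bash
# Sketch: pull each image needed by the TiDB cluster and save it to a .tar archive.
set -euo pipefail

images=(
  pingcap/pd:v4.0.0
  pingcap/tikv:v4.0.0
  pingcap/tidb:v4.0.0
  pingcap/tidb-binlog:v4.0.0
  pingcap/ticdc:v4.0.0
  pingcap/tiflash:v4.0.0
  pingcap/tidb-monitor-reloader:v1.0.1
  pingcap/tidb-monitor-initializer:v4.0.0
  grafana/grafana:6.0.1
  prom/prometheus:v2.18.1
  busybox:1.26.2
)

for img in "${images[@]}"; do
  docker pull "${img}"
  # Turn "pingcap/pd:v4.0.0" into "pd-v4.0.0.tar", matching the archive names used above.
  archive="$(echo "${img}" | sed 's|.*/||; s|:|-|').tar"
  docker save -o "${archive}" "${img}"
done
```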
diff --git a/en/deploy-ticdc.md b/en/deploy-ticdc.md
index 3ca0bf1086..137e6b0198 100644
--- a/en/deploy-ticdc.md
+++ b/en/deploy-ticdc.md
@@ -67,3 +67,5 @@ To deploy TiCDC when deploying the TiDB cluster, refer to [Deploy TiDB in Genera
         }
     ]
     ```
+
+    If the server does not have an external network, refer to [deploy TiDB cluster](deploy-on-general-kubernetes.md#deploy-tidb-cluster) to download the required Docker images on a machine with an external network and upload them to the server.
diff --git a/en/deploy-tidb-binlog.md b/en/deploy-tidb-binlog.md
index fe04b42a14..be6dfcbbe8 100644
--- a/en/deploy-tidb-binlog.md
+++ b/en/deploy-tidb-binlog.md
@@ -201,6 +201,8 @@ To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB clust
     helm install pingcap/tidb-drainer --name=${cluster_name} --namespace=${namespace} --version=${chart_version} -f values.yaml
     ```
 
+    If the server does not have an external network, refer to [deploy TiDB cluster](deploy-on-general-kubernetes.md#deploy-tidb-cluster) to download the required Docker images on a machine with an external network and upload them to the server.
+
     > **Note:**
     >
     > This chart must be installed to the same namespace as the source TiDB cluster.
diff --git a/en/deploy-tiflash.md b/en/deploy-tiflash.md
index 31d5701bd8..bb13e3c3d1 100644
--- a/en/deploy-tiflash.md
+++ b/en/deploy-tiflash.md
@@ -65,3 +65,5 @@ TiFlash supports mounting multiple Persistent Volumes (PVs). If you want to conf
 > Since TiDB Operator will mount PVs automatically in the **order** of the items in the `storageClaims` list, if you need to add more disks to TiFlash, make sure to append the new item only to the **end** of the original items, and **DO NOT** modify the order of the original items.
 
 To [add TiFlash component to an existing TiDB cluster](https://pingcap.com/docs/stable/tiflash/deploy-tiflash/#add-tiflash-component-to-an-existing-tidb-cluster), `replication.enable-placement-rules` should be set to `true` in PD. After you add the TiFlash configuration in TidbCluster by taking the above steps, TiDB Operator will automatically configure `replication.enable-placement-rules: "true"` in PD.
+
+If the server does not have an external network, refer to [deploy TiDB cluster](deploy-on-general-kubernetes.md#deploy-tidb-cluster) to download the required Docker images on a machine with an external network and upload them to the server.
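The offline notes for TiCDC, TiDB Binlog, and TiFlash above all point back to the same upload-and-`docker load` step. One possible way to script that step is sketched below; it assumes SSH access from the machine with Internet access to the offline server, and the host name and `/tmp` staging directory are placeholders rather than values from the docs.

```shell
#!/usr/bin/env bash
# Sketch: copy each saved image archive to the offline server and load it there.
# OFFLINE_HOST is a placeholder; the .tar files are the ones produced by `docker save`.
set -euo pipefail

OFFLINE_HOST="user@offline-server"

for tarball in *.tar; do
  scp "${tarball}" "${OFFLINE_HOST}:/tmp/"
  ssh "${OFFLINE_HOST}" "docker load -i /tmp/${tarball}"
done
```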
diff --git a/en/initialize-a-cluster.md b/en/initialize-a-cluster.md
index 8b5390ad79..0e1fb3608d 100644
--- a/en/initialize-a-cluster.md
+++ b/en/initialize-a-cluster.md
@@ -75,3 +75,30 @@ kubectl apply -f ${cluster_name}/tidb-initializer.yaml --namespace=${namespace}
 The above command automatically creates an initialized Job. This Job tries to set the initial password for the `root` account using the `secret` object provided. It also tries to create other accounts and passwords, if they are specified.
 
 After the initialization, the Pod state becomes `Completed`. If you log in via MySQL client later, you need to specify the password created by the Job.
+
+If the server does not have an external network, you need to download the Docker image used for cluster initialization on a machine with an external network, upload it to the server, and then use `docker load` to install the Docker image on the server.
+
+The following Docker images are used to initialize a TiDB cluster:
+
+{{< copyable "shell-regular" >}}
+
+```shell
+tnir/mysqlclient:latest
+```
+
+Next, download all these images with the following commands:
+
+{{< copyable "shell-regular" >}}
+
+```shell
+docker pull tnir/mysqlclient:latest
+docker save -o mysqlclient-latest.tar tnir/mysqlclient:latest
+```
+
+Next, upload these Docker images to the server, and execute `docker load` to install them on the server:
+
+{{< copyable "shell-regular" >}}
+
+```shell
+docker load -i mysqlclient-latest.tar
+```
diff --git a/en/monitor-using-tidbmonitor.md b/en/monitor-using-tidbmonitor.md
index 0cbacc24b2..be2d4b8d01 100644
--- a/en/monitor-using-tidbmonitor.md
+++ b/en/monitor-using-tidbmonitor.md
@@ -73,6 +73,8 @@ You can also quickly deploy TidbMonitor using the following command:
 kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml -n ${namespace}
 ```
 
+If the server does not have an external network, refer to [deploy TiDB cluster](deploy-on-general-kubernetes.md#deploy-tidb-cluster) to download the required Docker images on a machine with an external network and upload them to the server.
+
 After the deployment is finished, you can check whether TidbMonitor is started by executing the `kubectl get pod` command:
 
 {{< copyable "shell-regular" >}}
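Before applying the manifests on an offline server, it can be worth confirming that every image this patch refers to has actually been loaded. The check below is a sketch rather than part of the documented procedure; it assumes Docker on the server, the v4.0.0 image list used throughout the patch, and the `tnir/mysqlclient:latest` image used for initialization.

```shell
#!/usr/bin/env bash
# Sketch: report any required image that is missing from the local Docker image store.
set -u

missing=0
for img in \
  pingcap/pd:v4.0.0 pingcap/tikv:v4.0.0 pingcap/tidb:v4.0.0 \
  pingcap/tidb-binlog:v4.0.0 pingcap/ticdc:v4.0.0 pingcap/tiflash:v4.0.0 \
  pingcap/tidb-monitor-reloader:v1.0.1 pingcap/tidb-monitor-initializer:v4.0.0 \
  grafana/grafana:6.0.1 prom/prometheus:v2.18.1 busybox:1.26.2 \
  tnir/mysqlclient:latest; do
  if ! docker image inspect "${img}" >/dev/null 2>&1; then
    echo "missing: ${img}"
    missing=1
  fi
done
exit "${missing}"
```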