zh: tidb cluster offline installation (#420)
* tidb cluster offline installation

* address comments

Co-authored-by: DanielZhangQD <36026334+DanielZhangQD@users.noreply.github.com>
Co-authored-by: ti-srebot <66930949+ti-srebot@users.noreply.github.com>
3 people committed Jun 18, 2020
1 parent 6c5553e commit cbdfcba
Showing 6 changed files with 102 additions and 1 deletion.
66 changes: 66 additions & 0 deletions zh/deploy-on-general-kubernetes.md
@@ -121,6 +121,72 @@ After TiDB Operator is deployed and configured, you can deploy the TiDB cluster with the following command
kubectl apply -f ${cluster_name} -n ${namespace}
```

If the server has no internet access, you need to download the Docker images used by the TiDB cluster on a machine with internet access, upload them to the server, and then use `docker load` to install the Docker images on the server.

Deploying a TiDB cluster requires the following Docker images (assuming the version of the TiDB cluster is v4.0.0):

```shell
pingcap/pd:v4.0.0
pingcap/tikv:v4.0.0
pingcap/tidb:v4.0.0
pingcap/tidb-binlog:v4.0.0
pingcap/ticdc:v4.0.0
pingcap/tiflash:v4.0.0
pingcap/tidb-monitor-reloader:v1.0.1
pingcap/tidb-monitor-initializer:v4.0.0
grafana/grafana:6.0.1
prom/prometheus:v2.18.1
busybox:1.26.2
```

Next, download all these images with the following commands:

{{< copyable "shell-regular" >}}

```shell
docker pull pingcap/pd:v4.0.0
docker pull pingcap/tikv:v4.0.0
docker pull pingcap/tidb:v4.0.0
docker pull pingcap/tidb-binlog:v4.0.0
docker pull pingcap/ticdc:v4.0.0
docker pull pingcap/tiflash:v4.0.0
docker pull pingcap/tidb-monitor-reloader:v1.0.1
docker pull pingcap/tidb-monitor-initializer:v4.0.0
docker pull grafana/grafana:6.0.1
docker pull prom/prometheus:v2.18.1
docker pull busybox:1.26.2
docker save -o pd-v4.0.0.tar pingcap/pd:v4.0.0
docker save -o tikv-v4.0.0.tar pingcap/tikv:v4.0.0
docker save -o tidb-v4.0.0.tar pingcap/tidb:v4.0.0
docker save -o tidb-binlog-v4.0.0.tar pingcap/tidb-binlog:v4.0.0
docker save -o ticdc-v4.0.0.tar pingcap/ticdc:v4.0.0
docker save -o tiflash-v4.0.0.tar pingcap/tiflash:v4.0.0
docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
docker save -o tidb-monitor-initializer-v4.0.0.tar pingcap/tidb-monitor-initializer:v4.0.0
docker save -o grafana-6.0.1.tar grafana/grafana:6.0.1
docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
docker save -o busybox-1.26.2.tar busybox:1.26.2
```
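
If you prefer a single pass, the pull and save steps above can also be scripted as a loop. This is a minimal sketch, assuming a bash shell and the same v4.0.0 image list; the tarball names are derived so that they match the file names used above:

{{< copyable "shell-regular" >}}

```shell
# A sketch, assuming bash: pull and save every image in one loop.
images=(
    pingcap/pd:v4.0.0
    pingcap/tikv:v4.0.0
    pingcap/tidb:v4.0.0
    pingcap/tidb-binlog:v4.0.0
    pingcap/ticdc:v4.0.0
    pingcap/tiflash:v4.0.0
    pingcap/tidb-monitor-reloader:v1.0.1
    pingcap/tidb-monitor-initializer:v4.0.0
    grafana/grafana:6.0.1
    prom/prometheus:v2.18.1
    busybox:1.26.2
)
for image in "${images[@]}"; do
    # Turn "pingcap/pd:v4.0.0" into "pd-v4.0.0.tar" to match the file names above.
    tarball="$(echo "${image}" | sed -e 's|.*/||' -e 's|:|-|').tar"
    docker pull "${image}"
    docker save -o "${tarball}" "${image}"
done
```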

Next, upload these Docker images to the server and run `docker load` to install them on the server:

{{< copyable "shell-regular" >}}

```shell
docker load -i pd-v4.0.0.tar
docker load -i tikv-v4.0.0.tar
docker load -i tidb-v4.0.0.tar
docker load -i tidb-binlog-v4.0.0.tar
docker load -i ticdc-v4.0.0.tar
docker load -i tiflash-v4.0.0.tar
docker load -i tidb-monitor-reloader-v1.0.1.tar
docker load -i tidb-monitor-initializer-v4.0.0.tar
docker load -i grafana-6.0.1.tar
docker load -i prometheus-v2.18.1.tar
docker load -i busybox-1.26.2.tar
```
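
As a quick sanity check (an extra step, not part of the required procedure), you can list the loaded images on the server. Note that depending on your setup, the images must be present on every Kubernetes node that may run these Pods, or pushed to a registry that the nodes can reach:

{{< copyable "shell-regular" >}}

```shell
# Confirm that the images are now available in the local Docker daemon.
docker images | grep -E 'pingcap|grafana|prom/prometheus|busybox'
```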

3. Check the Pod status with the following command:

{{< copyable "shell-regular" >}}
2 changes: 2 additions & 0 deletions zh/deploy-ticdc.md
@@ -63,3 +63,5 @@ category: how-to
}
]
```

If the server has no internet access, refer to [Deploy a TiDB cluster](deploy-on-general-kubernetes.md#部署-tidb-集群) to download the required Docker images on a machine with internet access and upload them to the server.
4 changes: 3 additions & 1 deletion zh/deploy-tidb-binlog.md
@@ -193,11 +193,13 @@ spec
```shell
helm install pingcap/tidb-drainer --name=${cluster_name} --namespace=${namespace} --version=${chart_version} -f values.yaml
```

If the server has no internet access, refer to [Deploy a TiDB cluster](deploy-on-general-kubernetes.md#部署-tidb-集群) to download the required Docker images on a machine with internet access and upload them to the server.

> **Note:**
>
> This chart must be installed in the same namespace as the source TiDB cluster.

## Enable TLS

To enable TLS for the TiDB cluster and TiDB Binlog, refer to [Enable TLS between TiDB components](enable-tls-between-components.md).
2 changes: 2 additions & 0 deletions zh/deploy-tiflash.md
@@ -60,3 +60,5 @@ TiFlash supports mounting multiple PVs. If you want to configure multiple PVs for TiFlash, you can …
```

[Adding TiFlash to an existing TiDB cluster](https://pingcap.com/docs-cn/stable/reference/tiflash/deploy/#%E5%9C%A8%E5%8E%9F%E6%9C%89-tidb-%E9%9B%86%E7%BE%A4%E4%B8%8A%E6%96%B0%E5%A2%9E-tiflash-%E7%BB%84%E4%BB%B6) requires PD to be configured with `replication.enable-placement-rules: "true"`. After you add the TiFlash configuration to the TidbCluster through the steps above, TiDB Operator automatically configures `replication.enable-placement-rules: "true"` for PD.
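
If you want to verify that placement rules were actually enabled after the operator reconciles, one possible check is to query PD through pd-ctl. This is a sketch under assumptions: the `/pd-ctl` binary path inside the pingcap/pd image and the `${cluster_name}-pd-0` Pod name follow common tidb-operator conventions and are not taken from this page:

{{< copyable "shell-regular" >}}

```shell
# Assumed Pod name and binary path; adjust them to your deployment.
kubectl exec -n ${namespace} ${cluster_name}-pd-0 -- /pd-ctl config show replication
```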

If the server has no internet access, refer to [Deploy a TiDB cluster](deploy-on-general-kubernetes.md#部署-tidb-集群) to download the required Docker images on a machine with internet access and upload them to the server.
27 changes: 27 additions & 0 deletions zh/initialize-a-cluster.md
@@ -72,3 +72,30 @@ kubectl apply -f ${cluster_name}/tidb-initializer.yaml --namespace=${namespace}
```

The command above automatically creates an initialization Job. The Job attempts to set an initial password for the root account using the provided secret, and to create other accounts and passwords if specified. After initialization completes, the Pod status changes to Completed, and you need to specify the password set here when logging in through the MySQL client afterwards.
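
For illustration, one way to wait for the initialization to finish and then log in might look like the following. The Job name `${cluster_name}-tidb-initializer` and the TiDB host are assumptions based on common tidb-operator naming, not values stated on this page:

{{< copyable "shell-regular" >}}

```shell
# Assumed Job name; watch until the Job completes.
kubectl get job ${cluster_name}-tidb-initializer -n ${namespace} -w
# Log in through the MySQL client using the password set in the secret.
# ${tidb_host} is a placeholder for your TiDB service address; 4000 is TiDB's default port.
mysql -h ${tidb_host} -P 4000 -u root -p
```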

If the server has no internet access, you need to download the Docker image used for cluster initialization on a machine with internet access, upload it to the server, and then use `docker load` to install the Docker image on the server.

Initializing a TiDB cluster requires the following Docker image:


```shell
tnir/mysqlclient:latest
```

Next, download the image with the following commands:

{{< copyable "shell-regular" >}}

```shell
docker pull tnir/mysqlclient:latest
docker save -o mysqlclient-latest.tar tnir/mysqlclient:latest
```

Next, upload the Docker image to the server and run `docker load` to install it on the server:

{{< copyable "shell-regular" >}}

```shell
docker load -i mysqlclient-latest.tar
```
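
As with the cluster images, a quick optional check confirms that the image was loaded before you run the initialization:

{{< copyable "shell-regular" >}}

```shell
# List the loaded mysqlclient image to confirm that docker load succeeded.
docker images tnir/mysqlclient
```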
2 changes: 2 additions & 0 deletions zh/monitor-using-tidbmonitor.md
@@ -65,6 +65,8 @@ spec:
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml -n ${namespace}
```

If the server has no internet access, refer to [Deploy a TiDB cluster](deploy-on-general-kubernetes.md#部署-tidb-集群) to download the required Docker images on a machine with internet access and upload them to the server.

Then, check that TidbMonitor has started by running the `kubectl get pod` command:

{{< copyable "shell-regular" >}}
