
Commit

en,zh: bump operator to v1.6.0-beta.1 (pingcap#2530)
csuzhangxc authored and ti-chi-bot committed Mar 27, 2024
1 parent f33b3cd commit 18fe8ea
Showing 85 changed files with 293 additions and 263 deletions.
31 changes: 31 additions & 0 deletions .github/workflows/link-fail-fast.yaml
@@ -0,0 +1,31 @@
name: Links (Fail Fast)

on:
  pull_request:

jobs:
  linkChecker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: 'Get a list of changed markdown files to process'
        id: changed-files
        run: |
          CHANGED_FILES=$(git diff-tree --name-only --diff-filter 'AM' -r HEAD^1 HEAD -- "zh/*.md" "en/*.md" | sed -z "s/\n$//;s/\n/' '/g")
          echo "all_changed_files=${CHANGED_FILES}" >> $GITHUB_OUTPUT

      - name: Download Exclude Path
        run: |
          curl https://raw.githubusercontent.com/pingcap/docs/master/.lycheeignore -O

      - name: Link Checker
        if: ${{ steps.changed-files.outputs.all_changed_files }}
        uses: lycheeverse/lychee-action@v1.6.1
        with:
          fail: true
          args: -E --exclude-mail -i -n -t 45 -- '${{ steps.changed-files.outputs.all_changed_files }}'
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
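The `changed-files` step above turns the newline-separated output of `git diff-tree` into a single-quoted, space-separated list that the Link Checker step later wraps in quotes for lychee's `args`. A minimal sketch of that `sed` transformation outside CI (GNU `sed` is assumed for `-z`, as on `ubuntu-latest`; the file names are made up):

```shell
# Simulated `git diff-tree --name-only` output: one changed file per line.
CHANGED_FILES_RAW="en/backup-to-gcs.md
zh/backup-to-gcs.md"

# Same sed program as the workflow: strip the trailing newline (if any),
# then join the remaining lines with "' '" so the caller can wrap the
# whole string in single quotes.
JOINED=$(printf '%s\n' "$CHANGED_FILES_RAW" | sed -z "s/\n$//;s/\n/' '/g")

# Wrapped the way the Link Checker step's args does with '...':
echo "'${JOINED}'"
# prints: 'en/backup-to-gcs.md' 'zh/backup-to-gcs.md'
```

Joining with `' '` rather than a plain space keeps each path as its own quoted argument, so the list survives word splitting when it is interpolated into `args`.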
1 change: 0 additions & 1 deletion .github/workflows/link.yml
@@ -30,7 +30,6 @@ jobs:
with:
title: Broken Link Detected
content-filepath: out.md
assignees: ran-huang

- name: Fail if there were link errors
run: exit ${{ steps.lychee.outputs.exit_code }}
2 changes: 1 addition & 1 deletion en/TOC.md
@@ -113,7 +113,7 @@
- [Advanced StatefulSet Controller](advanced-statefulset.md)
- [Admission Controller](enable-admission-webhook.md)
- [Sysbench Performance Test](benchmark-sysbench.md)
- [API References](https://github.com/pingcap/tidb-operator/blob/master/docs/api-references/docs.md)
- [API References](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/docs/api-references/docs.md)
- [Cheat Sheet](cheat-sheet.md)
- [Required RBAC Rules](tidb-operator-rbac.md)
- Tools
2 changes: 1 addition & 1 deletion en/_index.md
@@ -69,7 +69,7 @@ hide_commit: true

<LearningPath label="Reference" icon="cloud-dev">

[API Docs](https://github.com/pingcap/tidb-operator/blob/master/docs/api-references/docs.md)
[API Docs](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/docs/api-references/docs.md)

[Tools](https://docs.pingcap.com/tidb-in-kubernetes/dev/tidb-toolkit)

2 changes: 1 addition & 1 deletion en/access-dashboard.md
@@ -243,7 +243,7 @@ To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operat
EOF
```

For more configuration items of the TidbNGMonitoring CR, see [example in tidb-operator](https://github.com/pingcap/tidb-operator/blob/master/examples/advanced/tidb-ng-monitoring.yaml).
For more configuration items of the TidbNGMonitoring CR, see [example in tidb-operator](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/examples/advanced/tidb-ng-monitoring.yaml).

3. Enable Continuous Profiling.

2 changes: 1 addition & 1 deletion en/advanced-statefulset.md
@@ -18,7 +18,7 @@ The [advanced StatefulSet controller](https://github.com/pingcap/advanced-statef
{{< copyable "shell-regular" >}}

```
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/advanced-statefulset-crd.v1.yaml
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0-beta.1/manifests/advanced-statefulset-crd.v1.yaml
```

2. Enable the `AdvancedStatefulSet` feature in `values.yaml` of the TiDB Operator chart:
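The chart setting referenced in step 2 can be sketched as follows (a minimal `values.yaml` fragment; the `features` list is the documented switch, but verify it against the values file of your chart version):

```yaml
# values.yaml for the tidb-operator chart (fragment, sketch only).
# AdvancedStatefulSet=true turns on the advanced StatefulSet controller
# installed by the CRD in the previous step.
features:
  - AdvancedStatefulSet=true
```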
6 changes: 3 additions & 3 deletions en/aggregate-multiple-cluster-monitor-data.md
@@ -24,7 +24,7 @@ Thanos provides [Thanos Query](https://thanos.io/tip/components/query.md/) compo
{{< copyable "shell-regular" >}}

```shell
kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/monitor-with-thanos/tidb-monitor.yaml
kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0-beta.1/examples/monitor-with-thanos/tidb-monitor.yaml
```

2. Deploy the Thanos Query component.
@@ -34,7 +34,7 @@ Thanos provides [Thanos Query](https://thanos.io/tip/components/query.md/) compo
{{< copyable "shell-regular" >}}

```
curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/monitor-with-thanos/thanos-query.yaml
curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0-beta.1/examples/monitor-with-thanos/thanos-query.yaml
```

2. Manually modify the `--store` parameter in the `thanos-query.yaml` file by updating `basic-prometheus:10901` to `basic-prometheus.${namespace}:10901`.
@@ -182,4 +182,4 @@ spec:

After RemoteWrite is enabled, Prometheus pushes the monitoring data to [Thanos Receiver](https://thanos.io/tip/components/receive.md/). For more information, refer to [the design of Thanos Receiver](https://thanos.io/v0.8/proposals/201812_thanos-remote-receive/).

For details on the deployment, refer to [this example of integrating TidbMonitor with Thanos Receiver](https://github.com/pingcap/tidb-operator/tree/master/examples/monitor-prom-remotewrite).
For details on the deployment, refer to [this example of integrating TidbMonitor with Thanos Receiver](https://github.com/pingcap/tidb-operator/tree/v1.6.0-beta.1/examples/monitor-prom-remotewrite).
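The RemoteWrite setup described above lives on the TidbMonitor CR. A minimal sketch, with the monitor name and receiver URL as placeholders and other required `prometheus` fields omitted (follow the linked example for a complete spec):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: basic                 # placeholder name
spec:
  clusters:
    - name: basic             # the TidbCluster to monitor
  prometheus:
    baseImage: prom/prometheus
    version: v2.27.1
    # Push samples to a Thanos Receiver in addition to serving them locally.
    remoteWrite:
      - url: "http://thanos-receiver:19291/api/v1/receive"   # placeholder address
```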
2 changes: 1 addition & 1 deletion en/backup-by-ebs-snapshot-across-multiple-kubernetes.md
@@ -55,7 +55,7 @@ Snapshot backup is defined in a customized `VolumeBackup` custom resource (CR) o

**You must execute the following steps in every data plane**.

1. Download the [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/master/manifests/backup/backup-rbac.yaml) file to the backup server.
1. Download the [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml) file to the backup server.

2. If you have deployed the TiDB cluster in `${namespace}`, create the RBAC-related resources required for the backup in this namespace by running the following command:

4 changes: 2 additions & 2 deletions en/backup-restore-cr.md
@@ -24,7 +24,7 @@ This section introduces the fields in the `Backup` CR.
- If an image is specified without the version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup.
- When using Dumpling for backup, you can specify the Dumpling version in this field.
- If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v8.0.0`, the image of the specified version is used for backup.
- If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for backup by default.
- If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/images/tidb-backup-manager/Dockerfile) is used for backup by default.

* `.spec.backupType`: the backup type. This field is valid only when you use BR for backup. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
* `full`: back up all databases in a TiDB cluster.
@@ -261,7 +261,7 @@ This section introduces the fields in the `Restore` CR.
* `.spec.toolImage`: the tool image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9.
- When using BR for restoring, you can specify the BR version in this field. For example, `spec.toolImage: pingcap/br:v8.0.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
- When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v8.0.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
- When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v8.0.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
* `.spec.backupType`: the restore type. This field is valid only when you use BR to restore data. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
* `full`: restore all databases in a TiDB cluster.
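The `toolImage`, `backupType`, and `tableFilter` fields discussed above fit together in a `Backup` CR roughly as follows (names and namespaces are illustrative; check the linked Dockerfile and the CR reference for defaults):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup
  namespace: backup-test        # illustrative namespace
spec:
  toolImage: pingcap/br:v8.0.0  # pin BR; default is pingcap/br:${tikv_version}
  backupType: db                # back up one database, narrowed by tableFilter
  tableFilter:
    - "db1.*"
  br:
    cluster: demo1              # name of the TidbCluster being backed up
    clusterNamespace: tidb-cluster
```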
2 changes: 1 addition & 1 deletion en/backup-to-aws-s3-by-snapshot.md
@@ -46,7 +46,7 @@ The following sections exemplify how to back up data of the TiDB cluster `demo1`

### Step 1. Set up the environment for EBS volume snapshot backup

1. Download the file [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/master/manifests/backup/backup-rbac.yaml) to the backup server.
1. Download the file [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml) to the backup server.

2. Create the RBAC-related resources required for the backup in the `test1` namespace by running the following command:

2 changes: 1 addition & 1 deletion en/backup-to-aws-s3-using-br.md
@@ -50,7 +50,7 @@ This document provides an example about how to back up the data of the `demo1` T
kubectl create namespace backup-test
```

2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/master/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace:
2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace:

```shell
kubectl apply -f backup-rbac.yaml -n backup-test
2 changes: 1 addition & 1 deletion en/backup-to-azblob-using-br.md
@@ -48,7 +48,7 @@ This document provides an example about how to back up the data of the `demo1` T
kubectl create namespace backup-test
```

2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/master/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace:
2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace:

```shell
kubectl apply -f backup-rbac.yaml -n backup-test
2 changes: 1 addition & 1 deletion en/backup-to-gcs-using-br.md
@@ -48,7 +48,7 @@ This document provides an example about how to back up the data of the `demo1` T
kubectl create namespace backup-test
```

2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/master/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:
2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:

```shell
kubectl apply -f backup-rbac.yaml -n backup-test
2 changes: 1 addition & 1 deletion en/backup-to-gcs.md
@@ -37,7 +37,7 @@ To better explain how to perform the backup operation, this document shows an ex

### Step 1: Prepare for ad-hoc full backup

1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/master/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:
1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:

{{< copyable "shell-regular" >}}

2 changes: 1 addition & 1 deletion en/backup-to-pv-using-br.md
@@ -33,7 +33,7 @@ This document provides an example about how to back up the data of the `demo1` T

### Step 1: Prepare for an ad-hoc backup

1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/master/manifests/backup/backup-rbac.yaml) to the server that runs the backup task.
1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml) to the server that runs the backup task.

2. Execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:

4 changes: 2 additions & 2 deletions en/backup-to-s3.md
@@ -48,12 +48,12 @@ GRANT

### Step 1: Prepare for ad-hoc full backup

1. Execute the following command to create the role-based access control (RBAC) resources in the `tidb-cluster` namespace based on [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/backup/backup-rbac.yaml):
1. Execute the following command to create the role-based access control (RBAC) resources in the `tidb-cluster` namespace based on [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml):

{{< copyable "shell-regular" >}}

```shell
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/backup/backup-rbac.yaml -n tidb-cluster
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0-beta.1/manifests/backup/backup-rbac.yaml -n tidb-cluster
```

2. Grant permissions to the remote storage.
6 changes: 3 additions & 3 deletions en/cheat-sheet.md
@@ -492,7 +492,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
helm inspect values pingcap/tidb-operator --version=v1.5.2 > values-tidb-operator.yaml
helm inspect values pingcap/tidb-operator --version=v1.6.0-beta.1 > values-tidb-operator.yaml
```

### Deploy using Helm chart
@@ -508,7 +508,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.5.2 -f values-tidb-operator.yaml
helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.6.0-beta.1 -f values-tidb-operator.yaml
```

### View the deployed Helm release
@@ -532,7 +532,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.2 -f values-tidb-operator.yaml
helm upgrade tidb-operator pingcap/tidb-operator --version=v1.6.0-beta.1 -f values-tidb-operator.yaml
```

### Delete Helm release
2 changes: 1 addition & 1 deletion en/configure-a-tidb-cluster.md
@@ -23,7 +23,7 @@ If you are using a NUMA-based CPU, you need to enable `Static`'s CPU management

## Configure TiDB deployment

To configure a TiDB deployment, you need to configure the `TiDBCluster` CR. Refer to the [TidbCluster example](https://github.com/pingcap/tidb-operator/blob/master/examples/advanced/tidb-cluster.yaml) for an example. For the complete configurations of `TiDBCluster` CR, refer to [API documentation](https://github.com/pingcap/tidb-operator/blob/master/docs/api-references/docs.md).
To configure a TiDB deployment, you need to configure the `TiDBCluster` CR. Refer to the [TidbCluster example](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/examples/advanced/tidb-cluster.yaml) for an example. For the complete configurations of `TiDBCluster` CR, refer to [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.6.0-beta.1/docs/api-references/docs.md).

> **Note:**
>
4 changes: 2 additions & 2 deletions en/configure-storage-class.md
@@ -94,7 +94,7 @@ The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directori
1. Download the deployment file for the local-volume-provisioner.
```shell
wget https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/local-pv/local-volume-provisioner.yaml
wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0-beta.1/examples/local-pv/local-volume-provisioner.yaml
```
2. If you are using the same discovery directory as described in [Step 1: Pre-allocate local storage](#step-1-pre-allocate-local-storage), you can skip this step. If you are using a different path for the discovery directory than in the previous step, you need to modify the ConfigMap and DaemonSet spec.
@@ -162,7 +162,7 @@ The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directori
3. Deploy the `local-volume-provisioner`.
```shell
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0-beta.1/manifests/local-dind/local-volume-provisioner.yaml
```
4. Check the status of the Pod and PV.
8 changes: 4 additions & 4 deletions en/deploy-br-federation.md
@@ -132,7 +132,7 @@ To deploy the BR Federation, you need to select one Kubernetes cluster as the co
The BR Federation uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) to extend Kubernetes. Before using the BR Federation, you must create the CRD in your Kubernetes cluster. You only need to perform this operation once.

```shell
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/federation-crd.yaml
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.6.0-beta.1/manifests/federation-crd.yaml
```

### Step 2.2: Prepare the kubeconfig secret
@@ -190,7 +190,7 @@ This section describes how to install the BR Federation using [Helm 3](https://h
4. Install the BR Federation:

```shell
helm install --namespace br-fed-admin br-federation pingcap/br-federation --version v1.5.2
helm install --namespace br-fed-admin br-federation pingcap/br-federation --version v1.6.0-beta.1
```

</div>
@@ -218,15 +218,15 @@ This section describes how to install the BR Federation using [Helm 3](https://h

```shell
mkdir -p ${HOME}/br-federation && \
helm inspect values pingcap/br-federation --version=v1.5.2 > ${HOME}/br-federation/values.yaml
helm inspect values pingcap/br-federation --version=v1.6.0-beta.1 > ${HOME}/br-federation/values.yaml
```

5. Configure the BR Federation by modifying fields such as `image`, `limits`, `requests`, and `replicas` according to your needs.

6. Deploy the BR Federation.

```shell
helm install --namespace br-fed-admin br-federation pingcap/br-federation --version v1.5.2 -f ${HOME}/br-federation/values.yaml && \
helm install --namespace br-fed-admin br-federation pingcap/br-federation --version v1.6.0-beta.1 -f ${HOME}/br-federation/values.yaml && \
kubectl get po -n br-fed-admin -l app.kubernetes.io/instance=br-federation
```

2 changes: 1 addition & 1 deletion en/deploy-heterogeneous-tidb-cluster.md
@@ -165,7 +165,7 @@ After creating certificates, take the following steps to deploy a TLS-enabled he
In the configuration file, `spec.tlsCluster.enabled` controls whether to enable TLS between the components, and `spec.tidb.tlsClient.enabled` controls whether to enable TLS for the MySQL client.
- For more configurations of a TLS-enabled heterogeneous cluster, see the ['heterogeneous-tls'](https://github.com/pingcap/tidb-operator/tree/master/examples/heterogeneous-tls) example.
- For more configurations of a TLS-enabled heterogeneous cluster, see the ['heterogeneous-tls'](https://github.com/pingcap/tidb-operator/tree/v1.6.0-beta.1/examples/heterogeneous-tls) example.
- For more configurations and field meanings of a TiDB cluster, see the [TiDB cluster configuration document](configure-a-tidb-cluster.md).
2. In the configuration file of your heterogeneous cluster, modify the configurations of each node according to your need.
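The two switches mentioned in this step can be sketched in the TidbCluster spec as follows (the cluster name is illustrative; see the linked `heterogeneous-tls` example for a full manifest):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: heterogeneous           # illustrative name
spec:
  tlsCluster:
    enabled: true               # TLS between cluster components
  tidb:
    tlsClient:
      enabled: true             # TLS for MySQL clients connecting to TiDB
```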
