TiUP: add three documents for cluster import, scale-in, and dm list (#…
ti-srebot authored Apr 16, 2021
1 parent 5a551e3 commit 531cd25
Showing 3 changed files with 172 additions and 0 deletions.
67 changes: 67 additions & 0 deletions tiup/tiup-component-cluster-import.md
@@ -0,0 +1,67 @@
---
title: tiup cluster import
---

# tiup cluster import

Before TiDB v4.0, TiDB clusters were mainly deployed using TiDB Ansible. For TiDB v4.0 and later releases, TiUP Cluster provides the `import` command to transfer the clusters to the tiup-cluster component for management.

> **Note:**
>
> + After importing the TiDB Ansible configuration to TiUP for management, **DO NOT** use TiDB Ansible for cluster operations anymore. Otherwise, conflicts might be caused due to inconsistent meta information.
> + If the clusters deployed using TiDB Ansible are in any of the following situations, do not use the `import` command.
> + Clusters with TLS encryption enabled
> + Pure KV clusters (clusters without TiDB instances)
> + Clusters with Kafka enabled
> + Clusters with Spark enabled
> + Clusters with TiDB Lightning/TiKV Importer enabled
> + Clusters still using the old `push` mode to collect monitoring metrics (if you keep the default mode `pull` unchanged, using the `import` command is supported)
> + Clusters in which non-default ports are separately configured in the `inventory.ini` configuration file using `node_exporter_port` / `blackbox_exporter_port` (ports configured in the `group_vars` directory are compatible)

## Syntax

```shell
tiup cluster import [flags]
```
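
For illustration, a minimal sketch of a basic import, assuming a hypothetical TiDB Ansible directory at `/home/tidb/tidb-ansible`:

```shell
# Hypothetical example: import the cluster described by the TiDB Ansible files
# in /home/tidb/tidb-ansible into tiup-cluster management.
tiup cluster import -d /home/tidb/tidb-ansible
```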

## Options

### -d, --dir

- Specifies the directory where TiDB Ansible is located.
- Data type: `STRING`
- The option is enabled by default with the current directory (the default value) passed in.

### --ansible-config

- Specifies the path of the Ansible configuration file.
- Data type: `STRING`
- The option is enabled by default with `./ansible.cfg` (the default value) passed in.

### --inventory

- Specifies the name of the Ansible inventory file.
- Data type: `STRING`
- The option is enabled by default with `inventory.ini` (the default value) passed in.

### --no-backup

- Controls whether to disable the backup of files in the directory where TiDB Ansible is located.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. After a successful import, everything in the directory specified by the `-d/--dir` option is backed up to the `${TIUP_HOME}/.tiup/storage/cluster/clusters/{cluster-name}/ansible-backup` directory. If there are multiple inventory files (when multiple clusters are deployed) in this directory, it is recommended to enable this option. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.
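
For instance, if the TiDB Ansible directory contains inventory files for several clusters, a command along the following lines skips the backup (the directory and inventory file name are hypothetical):

```shell
# Hypothetical example: the directory holds inventory files for several clusters,
# so import one of them and skip backing up the whole directory afterwards.
tiup cluster import -d /home/tidb/tidb-ansible --inventory inventory-prod.ini --no-backup
```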

### --rename

- Renames the imported cluster.
- Data type: `STRING`
- Default: NULL. If this option is not specified in the command, the `cluster_name` specified in the inventory file is used as the cluster name.
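
For example, to import a cluster and manage it under a new name (both values below are hypothetical):

```shell
# Hypothetical example: import the cluster and rename it to prod-cluster in TiUP.
tiup cluster import -d /home/tidb/tidb-ansible --rename prod-cluster
```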

### -h, --help

- Prints the help information.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

## Output

Shows the logs of the import process.
70 changes: 70 additions & 0 deletions tiup/tiup-component-cluster-scale-in.md
@@ -0,0 +1,70 @@
---
title: tiup cluster scale-in
---

# tiup cluster scale-in

The `tiup cluster scale-in` command is used to scale in the cluster, which takes the services of the specified nodes offline, removes the specified nodes from the cluster, and deletes the remaining files from those nodes.

Because the TiKV, TiFlash, and TiDB Binlog components are taken offline asynchronously (TiUP must first remove the node through the API) and the stopping process takes a long time (TiUP must continuously check whether the node has been taken offline successfully), these components are handled specially as follows (a complete workflow example follows this section):

- For the TiKV, TiFlash, and TiDB Binlog components:

1. TiUP Cluster takes the node offline through API and directly exits without waiting for the process to be completed.
2. To check the status of the nodes being scaled in, you need to execute the `tiup cluster display` command and wait for the status to become `Tombstone`.
3. To clean up the nodes in the `Tombstone` status, you need to execute the `tiup cluster prune` command. The `tiup cluster prune` command performs the following operations:

- Stops the services of the nodes that have been taken offline.
- Cleans up the data files of the nodes that have been taken offline.
- Updates the cluster topology and removes the nodes that have been taken offline.

For other components:

- When taking the PD components offline, TiUP Cluster quickly deletes the specified nodes from the cluster through API, stops the service of the specified PD nodes, and then deletes the related data files from the nodes.
- When taking other components down, TiUP Cluster directly stops the node services and deletes the related data files from the specified nodes.
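
The following sketch illustrates the asynchronous workflow described above for TiKV, TiFlash, and TiDB Binlog nodes; the cluster name and node address are hypothetical:

```shell
# Hypothetical example: take one TiKV node offline asynchronously.
tiup cluster scale-in prod-cluster -N 172.16.5.140:20160

# Check the node status repeatedly until it becomes Tombstone.
tiup cluster display prod-cluster

# Clean up the nodes that are already in the Tombstone status.
tiup cluster prune prod-cluster
```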

## Syntax

```shell
tiup cluster scale-in <cluster-name> [flags]
```

`<cluster-name>` is the name of the cluster to scale in. If you forget the cluster name, you can check it using the [`tiup cluster list`](/tiup/tiup-component-cluster-list.md) command.

## Options

### -N, --node

- Specifies the nodes to take down. Multiple nodes are separated by commas.
- Data type: `STRING`
- There is no default value. This option is mandatory and the value must not be null.
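
For example, to take two nodes offline in one command (the cluster name and node addresses are hypothetical):

```shell
# Hypothetical example: scale in two TiKV nodes at once; node IDs are comma-separated.
tiup cluster scale-in prod-cluster -N 172.16.5.140:20160,172.16.5.141:20160
```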

### --force

- Controls whether to forcibly remove the specified nodes from the cluster. Sometimes, the host of the node to be taken offline might be down, which makes it impossible to connect to the node via SSH for operations. In this case, you can forcibly remove the node from the cluster using the `--force` option.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

> **Note:**
>
> Because the forced removal of a TiKV node does not wait for data to be scheduled, removing more than one serving TiKV node risks data loss.

### --transfer-timeout

- When a PD or TiKV node is to be removed, the Region leaders on that node are transferred to other nodes first. Because the transfer process takes some time, you can set the maximum waiting time (in seconds) by configuring `--transfer-timeout`. After the timeout, the `tiup cluster scale-in` command skips the waiting and starts the scale-in directly.
- Data type: `UINT`
- The option is enabled by default with `300` seconds (the default value) passed in.

> **Note:**
>
> If a PD or TiKV node is taken offline directly without waiting for the leader transfer to be completed, the service performance might experience jitter.
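
For instance, to allow up to 10 minutes for the leader transfer before the scale-in proceeds (the cluster name and node address are hypothetical):

```shell
# Hypothetical example: wait up to 600 seconds for Region leaders to be transferred
# before the scale-in continues.
tiup cluster scale-in prod-cluster -N 172.16.5.140:20160 --transfer-timeout 600
```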

### -h, --help

- Prints the help information.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

## Output

Shows the logs of the scaling-in process.
35 changes: 35 additions & 0 deletions tiup/tiup-component-dm-list.md
@@ -0,0 +1,35 @@
---
title: tiup dm list
---

# tiup dm list

`tiup-dm` supports deploying multiple clusters using the same control machine. You can use the `tiup dm list` command to check which clusters the currently logged-in user has deployed from the control machine.

> **Note:**
>
> By default, the data of the deployed clusters is stored in the `~/.tiup/storage/dm/clusters/` directory. The currently logged-in user cannot view the clusters deployed by other users on the same control machine.

## Syntax

```shell
tiup dm list [flags]
```

## Options

### -h, --help

- Prints the help information.
- Data type: `BOOLEAN`
- This option is disabled by default with the `false` value. To enable this option, add this option to the command, and either pass the `true` value or do not pass any value.

## Output

A table consisting of the following fields:

- `Name`: the cluster name.
- `User`: the user who deployed the cluster.
- `Version`: the cluster version.
- `Path`: the path of the cluster deployment data on the control machine.
- `PrivateKey`: the path of the private key used to connect to the cluster.
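
A minimal usage sketch, assuming the command is run by the user who deployed the clusters:

```shell
# List the DM clusters deployed from this control machine by the current user.
# The output is a table with the Name, User, Version, Path, and PrivateKey columns.
tiup dm list
```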
