From 965cd1df83233b9361e0081d295a896aaea6e189 Mon Sep 17 00:00:00 2001
From: Louis
Date: Fri, 31 Aug 2018 11:01:50 +0800
Subject: [PATCH] op-guide: update tikv rolling udpate policy (#592)

---
 op-guide/ansible-deployment-rolling-update.md | 2 +-
 tispark/tispark-quick-start-guide.md          | 8 +++-----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/op-guide/ansible-deployment-rolling-update.md b/op-guide/ansible-deployment-rolling-update.md
index 92ed9b70190e8..3b711d2df4e1c 100644
--- a/op-guide/ansible-deployment-rolling-update.md
+++ b/op-guide/ansible-deployment-rolling-update.md
@@ -65,7 +65,7 @@ wget http://download.pingcap.org/tidb-v2.0.3-linux-amd64-unportable.tar.gz
     $ ansible-playbook rolling_update.yml --tags=tikv
     ```
 
-    When you apply a rolling update to the TiKV instance, Ansible migrates the Region leader to other nodes. The concrete logic is as follows: Call the PD API to add the `evict leader scheduler` -> Inspect the `leader_count` of this TiKV instance every 10 seconds -> Wait the `leader_count` to reduce to below 10, or until the times of inspecting the `leader_count` is more than 12 -> Start closing the rolling update of TiKV after two minutes of timeout -> Delete the `evict leader scheduler` after successful start. The operations are executed serially.
+    When you apply a rolling update to the TiKV instance, Ansible migrates the Region leader to other nodes. The concrete logic is as follows: Call the PD API to add the `evict leader scheduler` -> Inspect the `leader_count` of this TiKV instance every 10 seconds -> Wait the `leader_count` to reduce to below 1, or until the times of inspecting the `leader_count` is more than 18 -> Start closing the rolling update of TiKV after three minutes of timeout -> Delete the `evict leader scheduler` after successful start. The operations are executed serially.
 
     If the rolling update fails in the process, log in to `pd-ctl` to execute `scheduler show` and check whether `evict-leader-scheduler` exists. If it does exist, delete it manually. Replace `{PD_IP}` and `{STORE_ID}` with your PD IP and the `store_id` of the TiKV instance:
 
diff --git a/tispark/tispark-quick-start-guide.md b/tispark/tispark-quick-start-guide.md
index 254578743760c..fc19378e63c76 100644
--- a/tispark/tispark-quick-start-guide.md
+++ b/tispark/tispark-quick-start-guide.md
@@ -6,7 +6,7 @@ category: User Guide
 
 # TiSpark Quick Start Guide
 
-To make it easy to [try TiSpark](tispark-user-guide.md), the TiDB cluster integrates Spark, TiSpark jar package and TiSpark sample data by default, in both the Pre-GA and master versions installed using TiDB-Ansible.
+To make it easy to [try TiSpark](tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
 
 ## Deployment information
 
@@ -14,9 +14,9 @@ To make it easy to [try TiSpark](tispark-user-guide.md), the TiDB cluster integr
 - The TiSpark jar package is deployed by default in the `jars` folder in the Spark deployment directory.
 
     ```
-    spark/jars/tispark-0.1.0-beta-SNAPSHOT-jar-with-dependencies.jar
+    spark/jars/tispark-SNAPSHOT-jar-with-dependencies.jar
     ```
-    
+
 - TiSpark sample data and import scripts are deployed by default in the TiDB-Ansible directory.
 
     ```
@@ -108,8 +108,6 @@ MySQL [TPCH_001]> show tables;
 
 ## Use example
 
-Assume that the IP of your PD node is `192.168.0.2`, and the port is `2379`.
-
 First start the spark-shell in the spark deployment directory:
 
 ```
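
If the rolling update fails and the `evict-leader-scheduler` is left behind, it can be inspected and removed from `pd-ctl` as the updated paragraph describes. The sketch below is illustrative only: the `pd-ctl` binary location is an assumption (TiDB-Ansible commonly ships it under `resources/bin`), and `{PD_IP}` and `{STORE_ID}` are placeholders to fill in by hand.

```
# Connect pd-ctl to the PD endpoint (binary path is an assumption;
# TiDB-Ansible typically places it under tidb-ansible/resources/bin).
$ ./pd-ctl -u "http://{PD_IP}:2379"

# List the schedulers currently registered in PD and look for an
# entry named like evict-leader-scheduler-{STORE_ID}.
» scheduler show

# If the evict leader scheduler is still present after a failed
# rolling update, remove it manually.
» scheduler remove evict-leader-scheduler-{STORE_ID}
```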
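For the TiSpark side of the patch, the "Use example" section goes on to start `spark-shell` from the Spark deployment directory. A minimal session sketch follows, assuming the TiSpark jar deployed by TiDB-Ansible is already on the Spark classpath and the PD addresses are configured for Spark; the `TiContext` calls and the `TPCH_001` sample tables follow the TiSpark quick start flow of that period and should be treated as illustrative rather than exact.

```
# Start the shell from the Spark deployment directory (path is an assumption).
$ cd spark && ./bin/spark-shell

# Inside spark-shell, map the sample database and run a query through TiSpark.
scala> import org.apache.spark.sql.TiContext
scala> val ti = new TiContext(spark)
scala> ti.tidbMapDatabase("TPCH_001")
scala> spark.sql("select count(*) from lineitem").show
```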