[KYUUBI #638] Refine kyuubi extension config docs

### _Why are the changes needed?_
Refine the doc strings of the Kyuubi SQL extension configs.

### _How was this patch tested?_
Not needed.

Closes #638 from ulysses-you/refine-config-docs.

Closes #638

e9f3f37 [ulysses-you] refine

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: ulysses-you <ulyssesyou18@gmail.com>
(cherry picked from commit ccb891d)
Signed-off-by: ulysses-you <ulyssesyou18@gmail.com>
ulysses-you committed May 14, 2021
1 parent babedec commit cb1b5d8
Showing 1 changed file with 9 additions and 9 deletions.
@@ -24,16 +24,16 @@ object KyuubiSQLConf {

   val INSERT_REPARTITION_BEFORE_WRITE =
     buildConf("spark.sql.optimizer.insertRepartitionBeforeWrite.enabled")
-      .doc("Add repartition node at the top of plan. A approach of merging small files.")
+      .doc("Add repartition node at the top of query plan. An approach of merging small files.")
       .version("1.2.0")
       .booleanConf
       .createWithDefault(true)

   val INSERT_REPARTITION_NUM =
     buildConf("spark.sql.optimizer.insertRepartitionNum")
       .doc(s"The partition number if ${INSERT_REPARTITION_BEFORE_WRITE.key} is enabled. " +
-        s"If AQE is disabled, the default value is ${SQLConf.SHUFFLE_PARTITIONS}. " +
-        s"If AQE is enabled, the default value is none that means depend on AQE.")
+        s"If AQE is disabled, the default value is ${SQLConf.SHUFFLE_PARTITIONS.key}. " +
+        "If AQE is enabled, the default value is none that means depend on AQE.")
       .version("1.2.0")
       .intConf
       .createOptional
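
(Not part of the diff.) As a quick usage sketch for the two configs touched in this hunk, assuming the Kyuubi Spark SQL extension is already enabled on the session; the app name and partition count below are purely illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical session setup; wiring in the extension itself (e.g. through
// spark.sql.extensions) is outside the scope of this commit.
val spark = SparkSession.builder()
  .appName("kyuubi-extension-config-demo") // illustrative name, not from the commit
  // Insert a repartition node at the top of the write plan to merge small files.
  .config("spark.sql.optimizer.insertRepartitionBeforeWrite.enabled", "true")
  // Explicit partition number for that repartition; per the doc string it defaults
  // to spark.sql.shuffle.partitions without AQE and is left to AQE otherwise.
  .config("spark.sql.optimizer.insertRepartitionNum", "200")
  .getOrCreate()
```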
@@ -42,25 +42,25 @@ object KyuubiSQLConf {
     buildConf("spark.sql.optimizer.dynamicPartitionInsertionRepartitionNum")
       .doc(s"The partition number of each dynamic partition if " +
         s"${INSERT_REPARTITION_BEFORE_WRITE.key} is enabled. " +
-        s"We will repartition by dynamic partition columns to reduce the small file but that " +
-        s"can cause data skew. This config is to extend the partition of dynamic " +
-        s"partition column to avoid skew but may generate some small files.")
+        "We will repartition by dynamic partition columns to reduce the small file but that " +
+        "can cause data skew. This config is to extend the partition of dynamic " +
+        "partition column to avoid skew but may generate some small files.")
       .version("1.2.0")
       .intConf
       .createWithDefault(100)

   val FORCE_SHUFFLE_BEFORE_JOIN =
     buildConf("spark.sql.optimizer.forceShuffleBeforeJoin.enabled")
       .doc("Ensure shuffle node exists before shuffled join (shj and smj) to make AQE " +
-        "`OptimizeSkewedJoin` works (extra shuffle, multi table join).")
+        "`OptimizeSkewedJoin` works (complex scenario join, multi table join).")
       .version("1.2.0")
       .booleanConf
       .createWithDefault(false)

   val FINAL_STAGE_CONFIG_ISOLATION =
     buildConf("spark.sql.optimizer.finalStageConfigIsolation.enabled")
-      .doc("If true, the final stage support use different config with previous stage. The final " +
-        "stage config key prefix should be `spark.sql.finalStage.`." +
+      .doc("If true, the final stage support use different config with previous stage. " +
+        "The prefix of final stage config key should be `spark.sql.finalStage.`." +
         "For example, the raw spark config: `spark.sql.adaptive.advisoryPartitionSizeInBytes`, " +
         "then the final stage config should be: " +
         "`spark.sql.finalStage.adaptive.advisoryPartitionSizeInBytes`.")
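
(Again not part of the diff.) A minimal sketch of how the three configs documented in this hunk could be set at runtime, assuming a live SparkSession named `spark`; the byte sizes are illustrative, and the last two lines mirror the prefix example given in the doc string above:

```scala
// Cap per-dynamic-partition repartitioning and force a shuffle before shuffled joins.
spark.conf.set("spark.sql.optimizer.dynamicPartitionInsertionRepartitionNum", "100")
spark.conf.set("spark.sql.optimizer.forceShuffleBeforeJoin.enabled", "true")

// Final stage config isolation: keys under the `spark.sql.finalStage.` prefix
// are used for the final stage in place of the corresponding raw Spark keys.
spark.conf.set("spark.sql.optimizer.finalStageConfigIsolation.enabled", "true")
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64m")
spark.conf.set("spark.sql.finalStage.adaptive.advisoryPartitionSizeInBytes", "256m")
```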
