
variable: mark analyze-partition-concurrency-quota as deprecated #55409

Conversation


@Rustin170506 Rustin170506 commented Aug 14, 2024

What problem does this PR solve?

Issue Number: ref #55043

Problem Summary:

See my article at #55043 (comment)

What changed and how does it work?

  • Marked analyze-partition-concurrency-quota as deprecated.
  • Removed useless dedicated sessions for analysis.
  • Set the correct type for the AnalyzePartitionConcurrency variable.
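As a rough sketch of the deprecation pattern described above (not TiDB's actual code — the type and function names here are hypothetical), a deprecated config option can still be parsed but trigger a warning steering users to the system variable:

```go
package main

import "fmt"

// Performance is a hypothetical config section; only the deprecated
// field matters for this sketch.
type Performance struct {
	// Deprecated: use the tidb_analyze_partition_concurrency system
	// variable instead.
	AnalyzePartitionConcurrencyQuota int
}

// warnDeprecated returns a warning when the deprecated option is still
// set in the config file; the option itself is otherwise ignored.
func warnDeprecated(p Performance) (string, bool) {
	if p.AnalyzePartitionConcurrencyQuota != 0 {
		return "analyze-partition-concurrency-quota is deprecated; " +
			"use tidb_analyze_partition_concurrency instead", true
	}
	return "", false
}

func main() {
	msg, ok := warnDeprecated(Performance{AnalyzePartitionConcurrencyQuota: 16})
	fmt.Println(ok, msg)
}
```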

Check List

Tests

  • Unit test
  • Integration test
  • Manual test
  • No need to test
    • I checked and no code files have been changed.

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

Release note

Please refer to Release Notes Language Style Guide to write a quality release note.

Mark the analyze-partition-concurrency-quota configuration as deprecated

@ti-chi-bot ti-chi-bot bot added release-note Denotes a PR that will be considered when it comes time to generate release notes. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Aug 14, 2024
@ti-chi-bot ti-chi-bot bot added the needs-1-more-lgtm Indicates a PR needs 1 more LGTM. label Aug 14, 2024

codecov bot commented Aug 14, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 76.3835%. Comparing base (70381ea) to head (600cb71).
Report is 46 commits behind head on master.

Additional details and impacted files
@@               Coverage Diff                @@
##             master     #55409        +/-   ##
================================================
+ Coverage   72.9180%   76.3835%   +3.4655%     
================================================
  Files          1576       1580         +4     
  Lines        440611     447426      +6815     
================================================
+ Hits         321285     341760     +20475     
+ Misses        99553      85435     -14118     
- Partials      19773      20231       +458     
Flag         Coverage Δ
integration  51.4964% <97.1428%> (?)
unit         73.0353% <100.0000%> (+1.0630%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Component  Coverage Δ
dumpling   52.9567% <ø> (ø)
parser     ∅ <ø> (∅)
br         63.1715% <ø> (+17.1843%) ⬆️

@Rustin170506
Member Author

Tested locally:

  1. Start the TiDB cluster: tiup playground nightly --db.binpath /Volumes/t7/code/tidb/bin/tidb-server
  2. Create a table with 21 partitions (p0–p20) and insert 3000 rows into each of p1–p20:
#!/usr/bin/env -S cargo +nightly -Zscript
---cargo
[dependencies]
clap = { version = "4.2", features = ["derive"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "mysql"] }
tokio = { version = "1", features = ["full"] }
fake = { version = "2.5", features = ["derive"] }
---

use clap::Parser;
use fake::{Fake, Faker};
use sqlx::mysql::MySqlPoolOptions;

#[derive(Parser, Debug)]
#[clap(version)]
struct Args {
    #[clap(short, long, help = "MySQL connection string")]
    database_url: String,
}

#[derive(Debug)]
struct TableRow {
    id: i64,
    partition_key: u32,
    column1: String,
    column2: i32,
    column3: i32,
    column4: String,
}

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let args = Args::parse();

    let pool = MySqlPoolOptions::new()
        .max_connections(5)
        .connect(&args.database_url)
        .await?;

    // Create partitioned table if not exists
    sqlx::query(
        "CREATE TABLE IF NOT EXISTS t (
            id BIGINT NOT NULL,
            partition_key INT NOT NULL,
            column1 VARCHAR(255) NOT NULL,
            column2 INT NOT NULL,
            column3 INT NOT NULL,
            column4 VARCHAR(255) NOT NULL,
            PRIMARY KEY (id, partition_key),
            index idx_column1 (column1)
        ) PARTITION BY RANGE (partition_key) (
            PARTITION p0 VALUES LESS THAN (3000),
            PARTITION p1 VALUES LESS THAN (6000),
            PARTITION p2 VALUES LESS THAN (9000),
            PARTITION p3 VALUES LESS THAN (12000),
            PARTITION p4 VALUES LESS THAN (15000),
            PARTITION p5 VALUES LESS THAN (18000),
            PARTITION p6 VALUES LESS THAN (21000),
            PARTITION p7 VALUES LESS THAN (24000),
            PARTITION p8 VALUES LESS THAN (27000),
            PARTITION p9 VALUES LESS THAN (30000),
            PARTITION p10 VALUES LESS THAN (33000),
            PARTITION p11 VALUES LESS THAN (36000),
            PARTITION p12 VALUES LESS THAN (39000),
            PARTITION p13 VALUES LESS THAN (42000),
            PARTITION p14 VALUES LESS THAN (45000),
            PARTITION p15 VALUES LESS THAN (48000),
            PARTITION p16 VALUES LESS THAN (51000),
            PARTITION p17 VALUES LESS THAN (54000),
            PARTITION p18 VALUES LESS THAN (57000),
            PARTITION p19 VALUES LESS THAN (60000),
            PARTITION p20 VALUES LESS THAN (63000)
        )"
    )
    .execute(&pool)
    .await?;

    // Insert 3000 rows into each of the 20 partitions
    for partition in 1..=20 {
        let partition_key = partition * 3000 + 1; // This ensures each partition key is unique

        for _ in 0..3000 {
            let row = TableRow {
                id: Faker.fake::<i64>(), // Generate a unique id
                partition_key, // Use the current partition key
                column1: Faker.fake::<String>(),
                column2: Faker.fake::<i32>(),
                column3: Faker.fake::<i32>(),
                column4: Faker.fake::<String>(),
            };

            sqlx::query(
                "INSERT INTO t (id, partition_key, column1, column2, column3, column4)
                VALUES (?, ?, ?, ?, ?, ?)"
            )
            .bind(row.id)
            .bind(row.partition_key)
            .bind(&row.column1)
            .bind(row.column2)
            .bind(row.column3)
            .bind(&row.column4)
            .execute(&pool)
            .await?;
        }

        println!("Successfully inserted 3000 rows into partition {} of the 't' table.", partition);
    }

    Ok(())
}
  3. Check the logs:
[2024/08/14 16:13:14.549 +08:00] [INFO] [analyze.go:391] ["save analyze results concurrently"] [buildStatsConcurrency=1] [saveStatsConcurrency=2]
  4. Set tidb_analyze_partition_concurrency to 1:
set global tidb_analyze_partition_concurrency =1;
mysql> select @@tidb_analyze_partition_concurrency;
+--------------------------------------+
| @@tidb_analyze_partition_concurrency |
+--------------------------------------+
| 1                                    |
+--------------------------------------+
1 row in set (0.00 sec)
  5. Analyze the table again:
mysql> analyze table t;
Query OK, 0 rows affected, 22 warnings (2.87 sec)
  6. Check the logs again:
[2024/08/14 16:18:31.626 +08:00] [INFO] [analyze.go:397] ["save analyze results in single-thread"] [buildStatsConcurrency=2] [saveStatsConcurrency=1]
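The two log lines above show the behavior this PR verifies: with tidb_analyze_partition_concurrency > 1 results are saved concurrently, otherwise in a single thread. A simplified sketch of that decision (illustrative only, not the actual TiDB code path):

```go
package main

import "fmt"

// chooseSaveMode sketches the decision visible in the logs above:
// a save concurrency greater than 1 selects the concurrent path,
// anything else falls back to single-threaded saving.
func chooseSaveMode(saveConcurrency int) string {
	if saveConcurrency > 1 {
		return fmt.Sprintf("save analyze results concurrently (saveStatsConcurrency=%d)", saveConcurrency)
	}
	return "save analyze results in single-thread"
}

func main() {
	fmt.Println(chooseSaveMode(2)) // concurrent path
	fmt.Println(chooseSaveMode(1)) // single-threaded path
}
```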

Member Author

@Rustin170506 Rustin170506 left a comment


🔢 Self-check (PR reviewed by myself and ready for feedback.)

@Rustin170506
Member Author

/retest


@songrijie songrijie left a comment


LGTM

  • analyze-partition-concurrency-quota was not exposed to user docs.
  • it makes sense to have an upper bound value for tidb_analyze_partition_concurrency
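The upper-bound suggestion above can be sketched as a simple clamp on the user-supplied value; the limit here is illustrative, not TiDB's actual bound:

```go
package main

import "fmt"

// clampConcurrency bounds a requested concurrency to [1, maxConcurrency],
// the kind of upper bound suggested for tidb_analyze_partition_concurrency.
func clampConcurrency(requested, maxConcurrency int) int {
	if requested < 1 {
		return 1
	}
	if requested > maxConcurrency {
		return maxConcurrency
	}
	return requested
}

func main() {
	fmt.Println(clampConcurrency(8, 4)) // over the limit: clamped to 4
	fmt.Println(clampConcurrency(0, 4)) // under the floor: raised to 1
	fmt.Println(clampConcurrency(3, 4)) // in range: kept as 3
}
```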


ti-chi-bot bot commented Aug 14, 2024

@songrijie: adding LGTM is restricted to approvers and reviewers in OWNERS files.

In response to this:

LGTM

  • analyze-partition-concurrency-quota was not included in user docs.
  • it makes sense to have an upper bound value for tidb_analyze_partition_concurrency

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@@ -457,7 +438,7 @@ func (e *AnalyzeExec) handleResultsErrorWithConcurrency(
 	if isAnalyzeWorkerPanic(err) {
 		panicCnt++
 	} else {
-		logutil.Logger(ctx).Error("analyze failed", zap.Error(err))
+		logutil.BgLogger().Error("receive error when saving analyze results", zap.Error(err))
Contributor


What's the difference between Logger and BgLogger?

Member Author


Logger will try to get the current contextual logger and log some common fields from the context.

In this case, I think there is no difference.

logutil.Logger(internalCtx).Info("use multiple sessions to save analyze results", zap.Int("sessionCount", sessionCount))
[2024/08/15 15:10:14.671 +08:00] [INFO] [analyze.go:410] ["use multiple sessions to save analyze results"] [sessionCount=2]

Contributor

@elsa0520 elsa0520 left a comment


LGTM

@ti-chi-bot ti-chi-bot bot added lgtm and removed needs-1-more-lgtm Indicates a PR needs 1 more LGTM. labels Aug 19, 2024

ti-chi-bot bot commented Aug 19, 2024

[LGTM Timeline notifier]

Timeline:

  • 2024-08-14 07:19:05.386654979 +0000 UTC m=+338830.090124627: ☑️ agreed by hawkingrei.
  • 2024-08-19 08:58:14.195661307 +0000 UTC m=+169489.330111423: ☑️ agreed by elsa0520.

@easonn7

easonn7 commented Aug 20, 2024

/approve


ti-chi-bot bot commented Aug 20, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: easonn7, elsa0520, hawkingrei, lance6716, songrijie

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added the approved label Aug 20, 2024
@ti-chi-bot ti-chi-bot bot merged commit e30408e into pingcap:master Aug 20, 2024
24 checks passed