
lightning: performance drops 100%, from 470s to 970s #45094

Closed
Yui-Song opened this issue Jun 30, 2023 · 5 comments
Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

  1. Deploy a TiDB cluster with 1 TiDB and 6 TiKV nodes.
  2. Prepare the YCSB data with the following configuration:
     -p recordcount=100000000 -p operationcount=1000000 -p workload=core -p fieldcount=10 -p fieldlength=100 -p requestdistribution=uniform
  3. Export the data with Dumpling.
  4. Import the data into a newly deployed cluster with Lightning (see the command sketch after this list).
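
For concreteness, a sketch of the four steps as shell commands; the cluster name, TiDB version, hosts, and paths below are placeholders rather than values from this report:

```sh
# 1. Deploy a cluster with 1 TiDB and 6 TiKV nodes (topology.yaml assumed).
tiup cluster deploy perf-test v7.3.0 topology.yaml
tiup cluster start perf-test

# 2. Load 100M rows with go-ycsb, using the flags from the report.
go-ycsb load mysql -p mysql.host=127.0.0.1 -p mysql.port=4000 \
  -p recordcount=100000000 -p operationcount=1000000 -p workload=core \
  -p fieldcount=10 -p fieldlength=100 -p requestdistribution=uniform

# 3. Export the loaded data with Dumpling.
tiup dumpling -h 127.0.0.1 -P 4000 -u root --filetype sql -o /data/export

# 4. Import into the newly deployed cluster with Lightning.
tiup tidb-lightning --config lightning.toml
```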

2. What did you expect to see? (Required)

No performance regression

3. What did you see instead? (Required)

  1. With the default configuration, the import duration increases from 470s to 560+s (roughly a 20% regression).
     (screenshot of import duration omitted)

  2. With the Lightning setting duplicate-resolution = "remove", performance drops 100%: the import duration increases from 470s to 970s (see the config sketch below).
     (screenshot of import duration omitted)
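
For reference, a minimal sketch of a lightning.toml with the setting from case 2; the paths and addresses are placeholders, and the section layout assumes a recent Lightning version:

```toml
[tikv-importer]
backend = "local"
sorted-kv-dir = "/data/sorted-kv"   # placeholder local-sort directory
duplicate-resolution = "remove"     # the setting used in case 2 above

[mydumper]
data-source-dir = "/data/export"    # placeholder: the Dumpling output directory

[tidb]
host = "127.0.0.1"
port = 4000
status-port = 10080
pd-addr = "127.0.0.1:2379"
```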

4. What is your TiDB version? (Required)

bad commit: 89bf743
good commit: 0c2d07d

@Yui-Song added the type/bug, severity/critical, and component/lightning labels on Jun 30, 2023
@ti-chi-bot (bot) added the may-affects-5.2, may-affects-5.3, may-affects-5.4, may-affects-6.1, may-affects-6.5, and may-affects-7.1 labels on Jun 30, 2023
@Yui-Song added the type/performance and type/regression labels and removed the may-affects-* labels on Jun 30, 2023
@D3Hunter D3Hunter self-assigned this Jun 30, 2023
D3Hunter (Contributor) commented Jul 3, 2023

The issue might be caused by checksum-via-sql, but I tried running with checksum-via-sql set to either false or true and could not reproduce the perf degradation in the same env (after the testbed workflow finished).

Will try to run with checksum-via-sql = false in the same testbed workflow; a sketch of the toggle is below.
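
For anyone reproducing this, a sketch of the toggle under test, assuming checksum-via-sql lives in the [post-restore] section as in recent Lightning versions:

```toml
[post-restore]
# When true, the post-import checksum runs as SQL (ADMIN CHECKSUM TABLE)
# through TiDB; when false, Lightning requests the checksum from TiKV directly.
checksum-via-sql = false
checksum = "required"
```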

D3Hunter (Contributor) commented Jul 4, 2023

(screenshot: import duration over time)
The slowdown might be caused by an unstable test environment; there weren't many changes in Lightning during those days, yet it was fast between 2023-07-02 22:36:53.000 and 2023-07-04 03:03:59.000.

@Yui-Song (Contributor, Author) commented:

/close

@ti-chi-bot (bot) closed this as completed Jul 25, 2023

ti-chi-bot (bot) commented Jul 25, 2023

@Yui-Song: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

D3Hunter (Contributor) commented Aug 8, 2023

#44887 hasn't been merged into the 6.5 branch, so this issue doesn't affect it.
