
[Backport 6.2] fix(test-case): update 5000 tables test case configuration #9848

Merged · 1 commit into scylladb:branch-6.2 · Jan 19, 2025

Conversation

scylladbbot

List of changes:

  • Disable per-table metrics due to their significant performance impact.
  • Enable cluster health checks, which work fine with this case.
  • Decrease the nemesis interval from 60 minutes to 3, keeping in mind that the health checks also take some time.
  • Reduce the stress time for each of the 5000 commands; at 20 minutes per command, test runs take about 1.5 days instead of 2.5.
  • Reduce the number of loaders from 5 to 3 to use resources more efficiently; in this case the bottleneck is RAM.
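The changes above could be sketched as an SCT YAML fragment. Note this is an illustrative sketch, not the actual diff: the option names below are assumptions modeled on common `sdcm/sct_config.py` options and may not match the real keys.

```yaml
# Hypothetical sketch of the described configuration changes;
# actual key names in sdcm/sct_config.py may differ.
enable_per_table_metrics: false   # per-table metrics disabled (perf impact)
cluster_health_check: true        # cluster health checks enabled
nemesis_interval: 3               # minutes between nemeses (was 60)
stress_duration: 20               # minutes per stress command (5000 commands)
n_loaders: 3                      # loader nodes (was 5; RAM is the bottleneck)
```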

Note that this scenario hits the following bug:

  • https://github.com/scylladb/scylla-enterprise/issues/5093

It is triggered if the 'destroy_data_then_repair' nemesis runs against this scenario's setup.

Testing

PR pre-checks (self review)

  • I added the relevant backport labels
  • I didn't leave commented-out/debugging code

Reminders

  • Add new configuration options and document them (in sdcm/sct_config.py)

  • Add unit tests to cover my changes (under unit-test/ folder)

  • Update the Readme/doc folder relevant to this change (if needed)

(cherry picked from commit 0c7fa60)

Parent PR: #9843

@fruch merged commit 6852be9 into scylladb:branch-6.2 on Jan 19, 2025
6 checks passed