Can a download rate from long-term s3 object storage be managed when compacting data? #7221
Replies: 3 comments 1 reply
-
Have you tried lowering …
-
Oh, I thought this setting applied to data already downloaded to the --data-dir. Thank you for the tip, I'll definitely try it.
-
So, running with …
-
We use self-hosted s3 object storage for long-term metric retention. Recently we found that the storage becomes overloaded once the download rate exceeds 200 MB/s, and higher download rates also max out its CPU.
Unfortunately, reducing parallelism did not help much, at least not with these options:
```
--wait --wait-interval=1m --block-viewer.global.sync-block-timeout=30m --consistency-delay=2h --log.level=debug --block-files-concurrency=48 --block-meta-fetch-concurrency=48 --compact.blocks-fetch-concurrency=2 --compact.concurrency=8 --downsample.concurrency=8 --hash-func=SHA256 --compact.enable-vertical-compaction
```
Is there a proper way to limit the Thanos compactor's download rate through its settings and still have data compacted/downsampled?
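In case no compactor flag directly caps bandwidth, the generic pattern behind most client-side rate limiting is a throttled reader wrapped around the download stream: account for bytes read per time window and sleep once the window's budget is spent. Below is a minimal, illustrative Python sketch of that pacing logic (Thanos itself is written in Go, and the `ThrottledReader` class here is made up for demonstration, not part of any Thanos API):

```python
import time


class ThrottledReader:
    """Wrap a binary stream and cap read throughput at bytes_per_sec.

    Illustrative sketch only: it uses a simple one-second accounting
    window rather than a full token bucket.
    """

    def __init__(self, raw, bytes_per_sec):
        self.raw = raw
        self.bytes_per_sec = bytes_per_sec
        self.window_start = time.monotonic()
        self.in_window = 0  # bytes already read in the current window

    def read(self, size=-1):
        if size is None or size < 0:
            size = self.bytes_per_sec
        # Start a fresh one-second window when the old one expires.
        if time.monotonic() - self.window_start >= 1.0:
            self.window_start = time.monotonic()
            self.in_window = 0
        # If this window's budget is spent, sleep out its remainder.
        if self.in_window >= self.bytes_per_sec:
            time.sleep(max(0.0, 1.0 - (time.monotonic() - self.window_start)))
            self.window_start = time.monotonic()
            self.in_window = 0
        # Never read more than the remaining per-window budget.
        chunk = self.raw.read(min(size, self.bytes_per_sec - self.in_window))
        self.in_window += len(chunk)
        return chunk
```

In practice, though, when the tool itself offers no such knob, this kind of throttling is usually applied outside the process instead, e.g. via network-level traffic shaping (tc) or rate limits on the S3 gateway.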