Restore to a new server with a different disk count: implement least_used for normal disks and round_robin for object disks #561
Comments
Sorry for the late reply. Currently this is not possible automatically; you can spread the parts manually. This is a good candidate for a feature request, but we need to formalize how parts should be spread between disks.
Unfortunately, there are too many corner cases: multi-disk volumes, different storage policies, S3 / HDFS disks, and backwards compatibility. Currently we don't store full information about volumes, storage policies, and disk types inside `backup_name/metadata.json`. We need time to think about how to implement it properly.
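To illustrate the gap: the backup metadata would need to record per-disk and per-policy information roughly like the fragment below. This is a hypothetical extension for discussion, not the current `metadata.json` format, and all field names are made up:

```json
{
  "disks": {
    "default": {"type": "local"},
    "s3_cold": {"type": "s3"}
  },
  "storage_policies": {
    "tiered": {
      "volumes": [
        {"name": "hot", "disks": ["default"]},
        {"name": "cold", "disks": ["s3_cold"]}
      ]
    }
  }
}
```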
Unfortunately, a fair, full implementation requires storing the data part sizes and rewriting too many parts. I added a partial implementation, which I hope covers 60-70 percent of use cases.
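The policies named in the title can be sketched as follows. This is a toy illustration, not the actual clickhouse-backup code: `least_used` picks the local disk with the most free space, while object (S3 / HDFS) disks are rotated round-robin because their capacity is effectively unbounded. All class and method names are hypothetical.

```python
class Disk:
    def __init__(self, name, free_bytes=0, is_object=False):
        self.name = name
        self.free_bytes = free_bytes
        self.is_object = is_object  # S3 / HDFS backed


class Balancer:
    """Assign parts to disks: least_used for local, round_robin for object."""

    def __init__(self, disks):
        self.local = [d for d in disks if not d.is_object]
        self.object = [d for d in disks if d.is_object]
        self._rr = 0  # round-robin cursor for object disks

    def pick(self, part_size=0, want_object=False):
        if want_object:
            disk = self.object[self._rr % len(self.object)]
            self._rr += 1
            return disk
        # least_used: the local disk with the most free space wins
        best = max(self.local, key=lambda d: d.free_bytes)
        best.free_bytes -= part_size  # reserve space for this part
        return best


disks = [
    Disk("disk1", free_bytes=100), Disk("disk2", free_bytes=80),
    Disk("s3_a", is_object=True), Disk("s3_b", is_object=True),
]
b = Balancer(disks)
print([b.pick(30).name for _ in range(3)])                # → ['disk1', 'disk2', 'disk1']
print([b.pick(want_object=True).name for _ in range(3)])  # → ['s3_a', 's3_b', 's3_a']
```

Note how the local picks alternate only while doing so keeps free space balanced: after disk1 drops to 70 and disk2 to 50, disk1 is chosen again.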
* add connection to GCS and use a different context for upload in case it gets cancelled by another thread
* save
* keep ctx
* keep ctx
* use v2
* change to GCS_CLIENT_POOL_SIZE
* pin zookeeper to version 3.8.2 to resolve the incompatibility between clickhouse and zookeeper 3.9.0, see details in apache/zookeeper#1837 (comment); return the `:latest` default value after ClickHouse/ClickHouse#53749 is resolved
* Revert "add more precise disk re-balancing for not exists disks, during download, partial fix Altinity#561" (reverts commit 20e250c)
* fix S3 head object Server Side Encryption parameters, fix Altinity#709
* change timeout to 60m, TODO make tests Parallel

Co-authored-by: Slach <bloodjazman@gmail.com>
=(( too hard to implement in combination with object disks ...
… by prefix instead of reading metadata files, applying macros for `object_disks_path`, related to #561
There is `disk_mapping` already, but it looks like it can only remap disks 1:1. I want to restore a backup onto a system that has more disks and distribute the parts between them. The backup server had only one "default" disk; the new server has 7 disks. Is that possible, and how?
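For reference, `disk_mapping` lives under the `clickhouse` section of the clickhouse-backup config and renames each backed-up disk to exactly one target disk, so it cannot split one source disk across seven. A fragment along these lines (disk names here are hypothetical):

```yaml
clickhouse:
  disk_mapping:
    default: disk1   # the backup's "default" disk is restored onto "disk1"
```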