Start error when using CLICKHOUSE_USE_EMBEDDED_BACKUP_RESTORE #876
Comments
Please provide more context.

apply (details omitted)

Can you take a look at my configuration? (screenshot omitted) The image was added later for demonstration purposes. Changing it to true results in an error.

The error is this (screenshot omitted). It has nothing to do with either.

Just add this to your configuration, because the `BACKUP` / `RESTORE` SQL commands can take a long time to execute:

```yaml
- name: CLICKHOUSE_TIMEOUT
  value: "4h"
```

See the example in (link omitted).

Thank you for your response. Now I have three questions:

1. I didn't find the corresponding environment variable for `embedded_backup_disk`. For example, `timeout` corresponds to `CLICKHOUSE_TIMEOUT`. Does `embedded_backup_disk` correspond to `CLICKHOUSE_EMBEDDED_BACKUP_DISK`?
2. Do these need to be added as configuration inside ClickHouse itself?
3. How do I back up and restore? Is it done through API requests? Is there any documentation explaining the backup and restore principles?

I haven't found an API that performs both backup and upload in a single call. Currently it seems the backup and upload APIs are separate. Is that correct? And for incremental backups, is the only option the `/backup/watch` endpoint?

Look at (link omitted); it lists all the available variable names.

The backup disk must be created explicitly in the `spec`:

```yaml
spec:
  configuration:
    files:
      config.d/backup_disk.xml: |
        <clickhouse>
        ...
        </clickhouse>
```

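To make that concrete, here is a minimal sketch of what such a `backup_disk.xml` might contain, assuming an S3-backed disk named `backups_s3` and the `/var/lib/clickhouse/backups_embedded/` path that appears later in this thread; the endpoint and bucket values are placeholders, not taken from the original issue:

```yaml
spec:
  configuration:
    files:
      config.d/backup_disk.xml: |
        <clickhouse>
          <storage_configuration>
            <disks>
              <!-- S3-backed disk used as the target for embedded BACKUP/RESTORE -->
              <backups_s3>
                <type>s3</type>
                <!-- placeholder endpoint: point this at your own bucket -->
                <endpoint>https://s3.example.com/backup-bucket/backups/</endpoint>
              </backups_s3>
            </disks>
          </storage_configuration>
          <backups>
            <!-- ClickHouse only accepts BACKUP ... TO Disk('backups_s3', ...) for disks allowed here -->
            <allowed_disk>backups_s3</allowed_disk>
            <allowed_path>/var/lib/clickhouse/backups_embedded/</allowed_path>
          </backups>
        </clickhouse>
```

With `CLICKHOUSE_USE_EMBEDDED_BACKUP_RESTORE=true`, clickhouse-backup is then pointed at this disk via the `embedded_backup_disk` setting; its exact environment-variable form should be verified against the variable list mentioned above.
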
Please read (link omitted).

Look at:

```yaml
env:
  - name: API_ALLOW_PARALLEL
    value: "true"
```

Be careful: this use case is rare, and you can put a high workload on your system and degrade performance if you don't understand how backup works and what it affects.

Can this parameter be added this way? Currently it seems to be causing errors. (screenshot omitted)

I saw the following error in the logs: (log omitted)

I don't want to do a full backup every day. Can I remove this requirement? What is the relationship between them? After performing an incremental backup, is a full backup still necessary? Will the previous incremental backups be deleted after a full backup?

Yes, you can, but you can't change an already running watch command.

Can't I change the parameters? For example, how often backups run and how often full backups run? If I don't specify tables, does that mean the whole database is backed up? I've reviewed this example. Am I unable to change the parameters? Aren't they optional?

How should I interpret this sentence? I didn't modify it, did I? I only changed the parameters.

Additionally, what is the reason for this log error (log omitted)? Is it because I allocated too little disk space? What is its calculation logic?

It means: look at (link omitted). When you run watch for the first time, it just starts an infinite internal loop of commands that creates a full + incremental backup sequence. When you make a second API call with different parameters, (rest of sentence omitted).

No, this is not related to disk size; you defined (details omitted). Don't be shy about making a pull request with a better error message. Please try to read and work through the links I shared. So: kill the existing watch command if it's running; after that you can successfully execute it, if you already applied (the configuration).

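To illustrate how the watch schedule is usually tuned (instead of reconfiguring a running watch, which the API refuses), here is a hedged sketch: stop the running command via the API's `POST /backup/kill` endpoint, then restart the container with new settings. The `WATCH_INTERVAL` and `FULL_INTERVAL` names below are my best-guess environment mappings of the `watch_interval` / `full_interval` config keys and should be verified against the variable list referenced earlier:

```yaml
env:
  # assumed name: how often an incremental backup is taken inside the watch loop
  - name: WATCH_INTERVAL
    value: "1h"
  # assumed name: how often the loop starts a fresh full backup,
  # after which the previous full+incremental chain can be rotated out
  - name: FULL_INTERVAL
    value: "24h"
```
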
When I execute with this config (manifest omitted), the logs show this error: (log omitted)

Could you share it without the sensitive information?

You don't need a separate volume for:

```yaml
- mountPath: /var/lib/clickhouse/backups_embedded
  name: backups-s3
```

and for the second:

```yaml
- name: backups-s3
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 50Gi
    storageClassName: open-local-lvm-xfs
```

Also add (details omitted) to the second container's item with (details omitted).

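For illustration, this is the shape that advice usually takes: since the embedded backup disk is backed by S3, no extra PVC is needed for it; the clickhouse-backup sidecar (the "second container") instead shares the ClickHouse data volume. The names `data-volume`, `clickhouse`, and `clickhouse-backup` are assumptions, not the poster's actual manifest:

```yaml
containers:
  - name: clickhouse
    volumeMounts:
      - name: data-volume               # existing ClickHouse data PVC
        mountPath: /var/lib/clickhouse
  - name: clickhouse-backup             # sidecar needs the same data mount
    volumeMounts:
      - name: data-volume
        mountPath: /var/lib/clickhouse
# no separate backups-s3 PVC: the backup disk itself lives in S3
```
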
Deploy config: (manifest omitted)

If my MinIO bucket is private, I encounter the following error (log omitted). Switching it to public resolves the issue, but I prefer to keep it private.

Uncomment the credentials in the XML config for `backups_s3`.

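A minimal sketch of what that looks like, assuming the `backups_s3` disk definition from earlier; the endpoint and key values are placeholders to be replaced with your own MinIO service address and credentials:

```yaml
spec:
  configuration:
    files:
      config.d/backup_disk.xml: |
        <clickhouse>
          <storage_configuration>
            <disks>
              <backups_s3>
                <type>s3</type>
                <!-- placeholder MinIO endpoint -->
                <endpoint>http://minio.minio.svc.cluster.local:9000/backup-bucket/backups/</endpoint>
                <!-- required for a private bucket -->
                <access_key_id>YOUR_MINIO_ACCESS_KEY</access_key_id>
                <secret_access_key>YOUR_MINIO_SECRET_KEY</secret_access_key>
              </backups_s3>
            </disks>
          </storage_configuration>
        </clickhouse>
```
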
Another issue I encountered: when I use incremental backup, I get an error (log omitted). Why is this happening? Could it be because I set (setting omitted)?

Look at (link omitted).

You didn't share the full log from the clickhouse-backup container.

Check the current configuration you use for MinIO.

You haven't applied my recommendation from #876 (comment).

I have implemented your advice. (screenshot omitted)

OK, I see. Please reproduce from scratch and share the logs with:

```yaml
- name: S3_DEBUG
  value: "true"
```

@dxygit1, are you sure you installed MinIO properly? I see this in the logs (log omitted); it should return 200. What do you use? Could you share it?

During download of the incremental backup (log omitted).

`acos/dstak/0`; also check (links omitted).

So, do you have two different buckets?

No, this is different data.

v2.5 will support an empty value.

Does clickhouse-backup specify that only metadata is stored in S3, and that during restoration it will look for the physical data on the `backups_s3` disk?

Approximately when will v2.5 be released?

When… when… when… when… (links omitted). Is it clear?

Okay, I understand now. Thank you very much for your help.

Error info: (log omitted)

Deploy: (manifest omitted)