/backup/status should return the latest command status if no in-progress commands are executing #887
Comments
According to the code: when it returns an empty list, it means no operation is currently running. Check there.
Do you have logs?
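To illustrate the current semantics, a client has to treat an empty list from `GET /backup/status` as "idle", with no way to see the last finished command (which is what this issue asks to change). A minimal sketch of that client-side interpretation; `interpret_status` and the sample payloads are illustrative, not part of clickhouse-backup:

```python
import json

def interpret_status(body: str) -> str:
    """Interpret the JSON body returned by GET /backup/status.

    clickhouse-backup returns a JSON list of in-progress operations;
    an empty list means nothing is currently running -- the latest
    finished command is not reported at all.
    """
    entries = json.loads(body)
    if not entries:
        return "idle"  # nothing running, and nothing about past commands
    last = entries[-1]
    return f'{last["command"]}: {last["status"]}'

# example payloads (illustrative, not captured from a real server)
print(interpret_status("[]"))
print(interpret_status('[{"command": "create_remote b1", "status": "in progress"}]'))
```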
I was using the clickhouse-operator and a minio deployment from the README.md:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: clickhouse-backup-config
stringData:
  config.yml: |
    general:
      remote_storage: s3
      log_level: debug
      restore_schema_on_cluster: "{cluster}"
      allow_empty_backups: true
      backups_to_keep_remote: 3
    clickhouse:
      use_embedded_backup_restore: true
      embedded_backup_disk: backups
      timeout: 4h
      skip_table_engines:
        - GenerateRandom
    api:
      listen: "0.0.0.0:7171"
      create_integration_tables: true
    s3:
      acl: private
      endpoint: http://s3-backup-minio:9000
      bucket: clickhouse
      path: backup/shard-{shard}
      access_key: backup-access-key
      secret_key: backup-secret-key
      force_path_style: true
      disable_ssl: true
      debug: true
---
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: one-sidecar-embedded
spec:
  defaults:
    templates:
      podTemplate: clickhouse-backup
      dataVolumeClaimTemplate: data-volume
  configuration:
    profiles:
      default/distributed_ddl_task_timeout: 14400
    files:
      config.d/backup_disk.xml: |
        <clickhouse>
          <storage_configuration>
            <disks>
              <backups>
                <type>local</type>
                <path>/var/lib/clickhouse/backups/</path>
              </backups>
            </disks>
          </storage_configuration>
          <backups>
            <allowed_disk>backups</allowed_disk>
            <allowed_path>backups/</allowed_path>
          </backups>
        </clickhouse>
    settings:
      # to allow scraping metrics via the embedded prometheus protocol
      prometheus/endpoint: /metrics
      prometheus/port: 8888
      prometheus/metrics: true
      prometheus/events: true
      prometheus/asynchronous_metrics: true
    # zookeeper needs to be installed separately, see
    # https://github.com/Altinity/clickhouse-operator/tree/master/deploy/zookeeper/ for details
    zookeeper:
      nodes:
        - host: zookeeper
          port: 2181
      session_timeout_ms: 5000
      operation_timeout_ms: 5000
    clusters:
      - name: default
        layout:
          # 2 shards, one replica in each
          shardsCount: 2
          replicas:
            - templates:
                podTemplate: pod-with-backup
            - templates:
                podTemplate: pod-clickhouse-only
  templates:
    volumeClaimTemplates:
      - name: data-volume
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
    podTemplates:
      - name: pod-with-backup
        metadata:
          annotations:
            prometheus.io/scrape: 'true'
            prometheus.io/port: '8888'
            prometheus.io/path: '/metrics'
            # needs a separate prometheus scrape config, see
            # https://github.com/prometheus/prometheus/issues/3756
            clickhouse.backup/scrape: 'true'
            clickhouse.backup/port: '7171'
            clickhouse.backup/path: '/metrics'
        spec:
          securityContext:
            runAsUser: 101
            runAsGroup: 101
            fsGroup: 101
          containers:
            - name: clickhouse-pod
              image: clickhouse/clickhouse-server
              command:
                - clickhouse-server
                - --config-file=/etc/clickhouse-server/config.xml
            - name: clickhouse-backup
              image: clickhouse-backup:build-docker
              # image: altinity/clickhouse-backup:master
              imagePullPolicy: IfNotPresent
              command:
                # - bash
                # - -xc
                # - "/bin/clickhouse-backup server"
                - "/src/build/linux/amd64/clickhouse-backup"
                - "server"
              # required to avoid double scraping of the clickhouse
              # and clickhouse-backup containers
              ports:
                - name: backup-rest
                  containerPort: 7171
              volumeMounts:
                - name: config-volume
                  mountPath: /etc/clickhouse-backup/config.yml
                  subPath: config.yml
          volumes:
            - name: config-volume
              secret:
                secretName: clickhouse-backup-config
      - name: pod-clickhouse-only
        metadata:
          annotations:
            prometheus.io/scrape: 'true'
            prometheus.io/port: '8888'
            prometheus.io/path: '/metrics'
        spec:
          securityContext:
            runAsUser: 101
            runAsGroup: 101
            fsGroup: 101
          containers:
            - name: clickhouse-pod
              image: clickhouse/clickhouse-server
              command:
                - clickhouse-server
                - --config-file=/etc/clickhouse-server/config.xml
```
Logs from clickhouse-backup:
OK, the configuration looks correct. One suggestion: that setup makes sense only for standalone hardware servers where /var/lib/clickhouse/backups/ is mounted as a separate HDD, for example; in Kubernetes it is better to use s3.

The root reason is in the logs; just add it into your secret.
Would you mind adding the following instead of an empty string as the response?

```json
{
  "command": "create_remote <backup_name>",
  "status": "error",
  "start": "2024-03-26 08:15:42",
  "finish": "2024-03-26 08:17:12",
  "error": "`general->remote_storage: s3` `clickhouse->use_embedded_backup_restore: true` require s3->compression_format: none"
}
```
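If the server adopted a response of that shape, a client could surface the error field instead of receiving nothing. A hypothetical sketch built on the proposed payload above; `describe_last_command` is illustrative and not part of the clickhouse-backup API:

```python
import json

def describe_last_command(body: str) -> str:
    """Render the proposed /backup/status payload for humans."""
    entry = json.loads(body)
    msg = f'{entry["command"]} -> {entry["status"]} ({entry["start"]}..{entry["finish"]})'
    if entry.get("error"):
        msg += f'\n  error: {entry["error"]}'
    return msg

# the payload shape suggested in this thread
proposed = '''{
  "command": "create_remote <backup_name>",
  "status": "error",
  "start": "2024-03-26 08:15:42",
  "finish": "2024-03-26 08:17:12",
  "error": "`general->remote_storage: s3` `clickhouse->use_embedded_backup_restore: true` require s3->compression_format: none"
}'''
print(describe_last_command(proposed))
```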
@frankwg good suggestion, thanks
I used the locally built 2.5.0 for testing and found the /backup/status endpoint returns empty after /backup/actions with {"command": "create_remote <backup_name>"} or /backup/upload/<local_backup_name> was issued. But it returns correctly when the previous request was /backup/list or /backup/clean.
Note: use_embedded_backup_restore: true was used. Also, the upload to s3 was not successful.
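The reproduction steps above can be sketched against the REST API like this. A stdlib-only sketch, assuming the sidecar is reachable at the address configured above; the helper names are illustrative:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:7171"  # assumed sidecar address from the config above

def action_payload(name: str) -> bytes:
    # /backup/actions expects a JSON body like {"command": "create_remote <name>"}
    return json.dumps({"command": f"create_remote {name}"}).encode()

def reproduce(name: str = "repro1") -> str:
    """POST create_remote via /backup/actions, then GET /backup/status."""
    req = urllib.request.Request(f"{BASE}/backup/actions", data=action_payload(name))
    urllib.request.urlopen(req).read()
    with urllib.request.urlopen(f"{BASE}/backup/status") as resp:
        # an empty list here, instead of the create_remote status, is the reported bug
        return resp.read().decode()

if __name__ == "__main__":
    print(reproduce())
```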